2025-06-22 19:08:33.825658 | Job console starting
2025-06-22 19:08:33.862123 | Updating git repos
2025-06-22 19:08:33.897256 | Cloning repos into workspace
2025-06-22 19:08:34.077907 | Restoring repo states
2025-06-22 19:08:34.094015 | Merging changes
2025-06-22 19:08:34.094036 | Checking out repos
2025-06-22 19:08:34.374712 | Preparing playbooks
2025-06-22 19:08:35.058469 | Running Ansible setup
2025-06-22 19:08:39.345051 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-06-22 19:08:40.043420 |
2025-06-22 19:08:40.043542 | PLAY [Base pre]
2025-06-22 19:08:40.059357 |
2025-06-22 19:08:40.059461 | TASK [Setup log path fact]
2025-06-22 19:08:40.078786 | orchestrator | ok
2025-06-22 19:08:40.095054 |
2025-06-22 19:08:40.095162 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-06-22 19:08:40.134209 | orchestrator | ok
2025-06-22 19:08:40.145498 |
2025-06-22 19:08:40.145595 | TASK [emit-job-header : Print job information]
2025-06-22 19:08:40.202285 | # Job Information
2025-06-22 19:08:40.202547 | Ansible Version: 2.16.14
2025-06-22 19:08:40.202608 | Job: testbed-deploy-stable-in-a-nutshell-ubuntu-24.04
2025-06-22 19:08:40.202669 | Pipeline: post
2025-06-22 19:08:40.202710 | Executor: 521e9411259a
2025-06-22 19:08:40.202747 | Triggered by: https://github.com/osism/testbed/commit/18778fb5188c17e12df2cbfca8eeddeff314e785
2025-06-22 19:08:40.202785 | Event ID: 46e49494-4f9c-11f0-8fdb-d9b8f50935e9
2025-06-22 19:08:40.212249 |
2025-06-22 19:08:40.212377 | LOOP [emit-job-header : Print node information]
2025-06-22 19:08:40.325370 | orchestrator | ok:
2025-06-22 19:08:40.325676 | orchestrator | # Node Information
2025-06-22 19:08:40.325735 | orchestrator | Inventory Hostname: orchestrator
2025-06-22 19:08:40.325779 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-06-22 19:08:40.325816 | orchestrator | Username: zuul-testbed04
2025-06-22 19:08:40.325852 | orchestrator | Distro: Debian 12.11
2025-06-22 19:08:40.325893 | orchestrator | Provider: static-testbed
2025-06-22 19:08:40.325929 | orchestrator | Region:
2025-06-22 19:08:40.325964 | orchestrator | Label: testbed-orchestrator
2025-06-22 19:08:40.325998 | orchestrator | Product Name: OpenStack Nova
2025-06-22 19:08:40.326031 | orchestrator | Interface IP: 81.163.193.140
2025-06-22 19:08:40.342528 |
2025-06-22 19:08:40.342641 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-06-22 19:08:40.770371 | orchestrator -> localhost | changed
2025-06-22 19:08:40.777926 |
2025-06-22 19:08:40.778024 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-06-22 19:08:41.734744 | orchestrator -> localhost | changed
2025-06-22 19:08:41.747790 |
2025-06-22 19:08:41.747889 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-06-22 19:08:42.003657 | orchestrator -> localhost | ok
2025-06-22 19:08:42.010405 |
2025-06-22 19:08:42.010513 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-06-22 19:08:42.048614 | orchestrator | ok
2025-06-22 19:08:42.071665 | orchestrator | included: /var/lib/zuul/builds/e73a28ae78f04a178dd960d15158097f/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-06-22 19:08:42.079227 |
2025-06-22 19:08:42.079318 | TASK [add-build-sshkey : Create Temp SSH key]
2025-06-22 19:08:43.084252 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-06-22 19:08:43.084455 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/e73a28ae78f04a178dd960d15158097f/work/e73a28ae78f04a178dd960d15158097f_id_rsa
2025-06-22 19:08:43.084495 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/e73a28ae78f04a178dd960d15158097f/work/e73a28ae78f04a178dd960d15158097f_id_rsa.pub
2025-06-22 19:08:43.084527 | orchestrator -> localhost | The key fingerprint is:
2025-06-22 19:08:43.084556 | orchestrator -> localhost | SHA256:Q7wkltWgw4od4fKBTA7UXHU8QK7kntyKBM56Sz4l9fM zuul-build-sshkey
2025-06-22 19:08:43.084579 | orchestrator -> localhost | The key's randomart image is:
2025-06-22 19:08:43.084608 | orchestrator -> localhost | +---[RSA 3072]----+
2025-06-22 19:08:43.084631 | orchestrator -> localhost | |o.+ o.o+=+       |
2025-06-22 19:08:43.084653 | orchestrator -> localhost | | = = o.=.o.      |
2025-06-22 19:08:43.084673 | orchestrator -> localhost | |  = +.B.+ .      |
2025-06-22 19:08:43.084694 | orchestrator -> localhost | |   =+=.= .       |
2025-06-22 19:08:43.084715 | orchestrator -> localhost | |  ...++ S        |
2025-06-22 19:08:43.084740 | orchestrator -> localhost | |o o + = .        |
2025-06-22 19:08:43.084761 | orchestrator -> localhost | | o.+ + +         |
2025-06-22 19:08:43.084782 | orchestrator -> localhost | |.o+ . . E        |
2025-06-22 19:08:43.084805 | orchestrator -> localhost | |..o+ .           |
2025-06-22 19:08:43.084826 | orchestrator -> localhost | +----[SHA256]-----+
2025-06-22 19:08:43.084881 | orchestrator -> localhost | ok: Runtime: 0:00:00.586264
2025-06-22 19:08:43.097368 |
2025-06-22 19:08:43.097467 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-06-22 19:08:43.118032 | orchestrator | ok
2025-06-22 19:08:43.131726 | orchestrator | included: /var/lib/zuul/builds/e73a28ae78f04a178dd960d15158097f/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-06-22 19:08:43.140629 |
2025-06-22 19:08:43.140721 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-06-22 19:08:43.164921 | orchestrator | skipping: Conditional result was False
2025-06-22 19:08:43.172251 |
2025-06-22 19:08:43.172345 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-06-22 19:08:43.717064 | orchestrator | changed
2025-06-22 19:08:43.725863 |
2025-06-22 19:08:43.725974 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-06-22 19:08:43.994572 | orchestrator | ok
2025-06-22 19:08:44.004565 |
2025-06-22 19:08:44.004705 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-06-22 19:08:44.386634 | orchestrator | ok
2025-06-22 19:08:44.392878 |
2025-06-22 19:08:44.392997 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-06-22 19:08:44.783103 | orchestrator | ok
2025-06-22 19:08:44.790377 |
2025-06-22 19:08:44.790493 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-06-22 19:08:44.814459 | orchestrator | skipping: Conditional result was False
2025-06-22 19:08:44.822511 |
2025-06-22 19:08:44.822622 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-06-22 19:08:45.332969 | orchestrator -> localhost | changed
2025-06-22 19:08:45.355873 |
2025-06-22 19:08:45.356009 | TASK [add-build-sshkey : Add back temp key]
2025-06-22 19:08:45.736746 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/e73a28ae78f04a178dd960d15158097f/work/e73a28ae78f04a178dd960d15158097f_id_rsa (zuul-build-sshkey)
2025-06-22 19:08:45.737089 | orchestrator -> localhost | ok: Runtime: 0:00:00.020682
2025-06-22 19:08:45.747443 |
2025-06-22 19:08:45.747598 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-06-22 19:08:46.204050 | orchestrator | ok
2025-06-22 19:08:46.215473 |
2025-06-22 19:08:46.215633 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-06-22 19:08:46.243080 | orchestrator | skipping: Conditional result was False
2025-06-22 19:08:46.300594 |
2025-06-22 19:08:46.300734 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-06-22 19:08:46.736933 | orchestrator | ok
2025-06-22 19:08:46.752487 |
2025-06-22 19:08:46.752629 | TASK [validate-host : Define zuul_info_dir fact]
2025-06-22 19:08:46.799434 | orchestrator | ok
2025-06-22 19:08:46.809923 |
2025-06-22 19:08:46.810052 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-06-22 19:08:47.103885 | orchestrator -> localhost | ok
2025-06-22 19:08:47.119801 |
2025-06-22 19:08:47.119952 | TASK [validate-host : Collect information about the host]
2025-06-22 19:08:48.358812 | orchestrator | ok
2025-06-22 19:08:48.374518 |
2025-06-22 19:08:48.374650 | TASK [validate-host : Sanitize hostname]
2025-06-22 19:08:48.433993 | orchestrator | ok
2025-06-22 19:08:48.439909 |
2025-06-22 19:08:48.440024 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-06-22 19:08:49.031729 | orchestrator -> localhost | changed
2025-06-22 19:08:49.044552 |
2025-06-22 19:08:49.044729 | TASK [validate-host : Collect information about zuul worker]
2025-06-22 19:08:49.489756 | orchestrator | ok
2025-06-22 19:08:49.499421 |
2025-06-22 19:08:49.499574 | TASK [validate-host : Write out all zuul information for each host]
2025-06-22 19:08:50.088143 | orchestrator -> localhost | changed
2025-06-22 19:08:50.099033 |
2025-06-22 19:08:50.099148 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-06-22 19:08:50.375717 | orchestrator | ok
2025-06-22 19:08:50.382234 |
2025-06-22 19:08:50.382355 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-06-22 19:09:30.150946 | orchestrator | changed:
2025-06-22 19:09:30.151228 | orchestrator | .d..t...... src/
2025-06-22 19:09:30.151269 | orchestrator | .d..t...... src/github.com/
2025-06-22 19:09:30.151296 | orchestrator | .d..t...... src/github.com/osism/
2025-06-22 19:09:30.151318 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-06-22 19:09:30.151339 | orchestrator | RedHat.yml
2025-06-22 19:09:30.162993 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-06-22 19:09:30.163011 | orchestrator | RedHat.yml
2025-06-22 19:09:30.163065 | orchestrator | = 2.2.0"...
2025-06-22 19:09:43.350468 | orchestrator | 19:09:43.350 STDOUT terraform: - Finding latest version of hashicorp/null...
2025-06-22 19:09:43.438885 | orchestrator | 19:09:43.438 STDOUT terraform: - Finding terraform-provider-openstack/openstack versions matching ">= 1.53.0"...
2025-06-22 19:09:44.491139 | orchestrator | 19:09:44.490 STDOUT terraform: - Installing hashicorp/local v2.5.3...
2025-06-22 19:09:45.651848 | orchestrator | 19:09:45.651 STDOUT terraform: - Installed hashicorp/local v2.5.3 (signed, key ID 0C0AF313E5FD9F80)
2025-06-22 19:09:46.640277 | orchestrator | 19:09:46.640 STDOUT terraform: - Installing hashicorp/null v3.2.4...
2025-06-22 19:09:47.580013 | orchestrator | 19:09:47.579 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2025-06-22 19:09:48.860146 | orchestrator | 19:09:48.859 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.2.0...
2025-06-22 19:09:50.029043 | orchestrator | 19:09:50.028 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.2.0 (signed, key ID 4F80527A391BEFD2)
2025-06-22 19:09:50.029102 | orchestrator | 19:09:50.029 STDOUT terraform: Providers are signed by their developers.
2025-06-22 19:09:50.029133 | orchestrator | 19:09:50.029 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-06-22 19:09:50.029208 | orchestrator | 19:09:50.029 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-06-22 19:09:50.029282 | orchestrator | 19:09:50.029 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-06-22 19:09:50.029346 | orchestrator | 19:09:50.029 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-06-22 19:09:50.029409 | orchestrator | 19:09:50.029 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-06-22 19:09:50.029427 | orchestrator | 19:09:50.029 STDOUT terraform: you run "tofu init" in the future.
2025-06-22 19:09:50.030484 | orchestrator | 19:09:50.030 STDOUT terraform: OpenTofu has been successfully initialized!
2025-06-22 19:09:50.030531 | orchestrator | 19:09:50.030 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-06-22 19:09:50.030538 | orchestrator | 19:09:50.030 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-06-22 19:09:50.030542 | orchestrator | 19:09:50.030 STDOUT terraform: should now work.
2025-06-22 19:09:50.030546 | orchestrator | 19:09:50.030 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-06-22 19:09:50.030550 | orchestrator | 19:09:50.030 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-06-22 19:09:50.030555 | orchestrator | 19:09:50.030 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-06-22 19:09:50.173895 | orchestrator | 19:09:50.173 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed04/terraform` instead.
2025-06-22 19:09:50.174260 | orchestrator | 19:09:50.173 WARN  The `workspace` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- workspace` instead.
2025-06-22 19:09:50.397202 | orchestrator | 19:09:50.397 STDOUT terraform: Created and switched to workspace "ci"!
2025-06-22 19:09:50.397273 | orchestrator | 19:09:50.397 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-06-22 19:09:50.397302 | orchestrator | 19:09:50.397 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-06-22 19:09:50.397310 | orchestrator | 19:09:50.397 STDOUT terraform: for this configuration.
2025-06-22 19:09:50.586497 | orchestrator | 19:09:50.586 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed04/terraform` instead.
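The init phase above shows OpenTofu resolving three providers: hashicorp/local and hashicorp/null at their latest versions, and terraform-provider-openstack/openstack under a ">= 1.53.0" constraint. A minimal sketch of a provider-requirements block that would produce this resolution is shown below; only the openstack version constraint is taken from the log, the file layout and remaining entries are assumptions about the testbed configuration.

```hcl
# Hypothetical provider-requirements sketch; only the ">= 1.53.0" constraint
# on the openstack provider appears in the log above, the rest is assumed.
terraform {
  required_providers {
    local = {
      source = "hashicorp/local"              # resolved to v2.5.3 in this run
    }
    null = {
      source = "hashicorp/null"               # "Finding latest version" -> v3.2.4
    }
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = ">= 1.53.0"                   # resolved to v3.2.0 in this run
    }
  }
}
```

Running `tofu init` against such a block writes the resolved versions into `.terraform.lock.hcl`, which is exactly the lock-file message printed in the log.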
2025-06-22 19:09:50.586566 | orchestrator | 19:09:50.586 WARN  The `fmt` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- fmt` instead. 2025-06-22 19:09:50.682206 | orchestrator | 19:09:50.681 STDOUT terraform: ci.auto.tfvars 2025-06-22 19:09:50.694108 | orchestrator | 19:09:50.693 STDOUT terraform: default_custom.tf 2025-06-22 19:09:50.864055 | orchestrator | 19:09:50.863 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed04/terraform` instead. 2025-06-22 19:09:51.854544 | orchestrator | 19:09:51.854 STDOUT terraform: data.openstack_networking_network_v2.public: Reading... 2025-06-22 19:09:52.397666 | orchestrator | 19:09:52.397 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a] 2025-06-22 19:09:52.703859 | orchestrator | 19:09:52.703 STDOUT terraform: OpenTofu used the selected providers to generate the following execution 2025-06-22 19:09:52.703958 | orchestrator | 19:09:52.703 STDOUT terraform: plan. Resource actions are indicated with the following symbols: 2025-06-22 19:09:52.703965 | orchestrator | 19:09:52.703 STDOUT terraform:  + create 2025-06-22 19:09:52.703971 | orchestrator | 19:09:52.703 STDOUT terraform:  <= read (data resources) 2025-06-22 19:09:52.703977 | orchestrator | 19:09:52.703 STDOUT terraform: OpenTofu will perform the following actions: 2025-06-22 19:09:52.703981 | orchestrator | 19:09:52.703 STDOUT terraform:  # data.openstack_images_image_v2.image will be read during apply 2025-06-22 19:09:52.703985 | orchestrator | 19:09:52.703 STDOUT terraform:  # (config refers to values not yet known) 2025-06-22 19:09:52.703992 | orchestrator | 19:09:52.703 STDOUT terraform:  <= data "openstack_images_image_v2" "image" { 2025-06-22 19:09:52.704014 | orchestrator | 19:09:52.703 STDOUT terraform:  + checksum = (known after apply) 2025-06-22 19:09:52.704046 | orchestrator | 19:09:52.704 STDOUT terraform:  + created_at = (known after apply) 2025-06-22 19:09:52.704086 | orchestrator | 19:09:52.704 STDOUT terraform:  + file = (known after apply) 2025-06-22 19:09:52.704109 | orchestrator | 19:09:52.704 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.704233 | orchestrator | 19:09:52.704 STDOUT terraform:  + metadata = (known after apply) 2025-06-22 19:09:52.704270 | orchestrator | 19:09:52.704 STDOUT terraform:  + min_disk_gb = (known after apply) 2025-06-22 19:09:52.704293 | orchestrator | 19:09:52.704 STDOUT terraform:  + min_ram_mb = (known after apply) 2025-06-22 19:09:52.704316 | orchestrator | 19:09:52.704 STDOUT terraform:  + most_recent = true 2025-06-22 19:09:52.704357 | orchestrator | 19:09:52.704 STDOUT terraform:  + name = (known after apply) 2025-06-22 19:09:52.704379 | orchestrator | 19:09:52.704 STDOUT terraform:  + protected = (known after apply) 2025-06-22 19:09:52.704414 | orchestrator | 19:09:52.704 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:09:52.704450 | orchestrator | 19:09:52.704 STDOUT terraform:  + schema = (known after apply) 2025-06-22 19:09:52.704474 | orchestrator | 19:09:52.704 STDOUT terraform:  + size_bytes = (known after apply) 2025-06-22 19:09:52.704506 | orchestrator | 19:09:52.704 STDOUT terraform:  + tags = (known after apply) 2025-06-22 19:09:52.704544 | orchestrator | 19:09:52.704 STDOUT terraform:  + updated_at = (known after apply) 2025-06-22 19:09:52.704550 | orchestrator | 
19:09:52.704 STDOUT terraform:  } 2025-06-22 19:09:52.704598 | orchestrator | 19:09:52.704 STDOUT terraform:  # data.openstack_images_image_v2.image_node will be read during apply 2025-06-22 19:09:52.704638 | orchestrator | 19:09:52.704 STDOUT terraform:  # (config refers to values not yet known) 2025-06-22 19:09:52.704664 | orchestrator | 19:09:52.704 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" { 2025-06-22 19:09:52.704702 | orchestrator | 19:09:52.704 STDOUT terraform:  + checksum = (known after apply) 2025-06-22 19:09:52.704739 | orchestrator | 19:09:52.704 STDOUT terraform:  + created_at = (known after apply) 2025-06-22 19:09:52.704762 | orchestrator | 19:09:52.704 STDOUT terraform:  + file = (known after apply) 2025-06-22 19:09:52.704793 | orchestrator | 19:09:52.704 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.704813 | orchestrator | 19:09:52.704 STDOUT terraform:  + metadata = (known after apply) 2025-06-22 19:09:52.704851 | orchestrator | 19:09:52.704 STDOUT terraform:  + min_disk_gb = (known after apply) 2025-06-22 19:09:52.704881 | orchestrator | 19:09:52.704 STDOUT terraform:  + min_ram_mb = (known after apply) 2025-06-22 19:09:52.704905 | orchestrator | 19:09:52.704 STDOUT terraform:  + most_recent = true 2025-06-22 19:09:52.704927 | orchestrator | 19:09:52.704 STDOUT terraform:  + name = (known after apply) 2025-06-22 19:09:52.704961 | orchestrator | 19:09:52.704 STDOUT terraform:  + protected = (known after apply) 2025-06-22 19:09:52.704999 | orchestrator | 19:09:52.704 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:09:52.705021 | orchestrator | 19:09:52.704 STDOUT terraform:  + schema = (known after apply) 2025-06-22 19:09:52.705052 | orchestrator | 19:09:52.705 STDOUT terraform:  + size_bytes = (known after apply) 2025-06-22 19:09:52.705095 | orchestrator | 19:09:52.705 STDOUT terraform:  + tags = (known after apply) 2025-06-22 19:09:52.705102 | orchestrator | 19:09:52.705 STDOUT terraform:  + updated_at = (known after apply) 2025-06-22 19:09:52.705125 | orchestrator | 19:09:52.705 STDOUT terraform:  } 2025-06-22 19:09:52.705154 | orchestrator | 19:09:52.705 STDOUT terraform:  # local_file.MANAGER_ADDRESS will be created 2025-06-22 19:09:52.705207 | orchestrator | 19:09:52.705 STDOUT terraform:  + resource "local_file" "MANAGER_ADDRESS" { 2025-06-22 19:09:52.705235 | orchestrator | 19:09:52.705 STDOUT terraform:  + content = (known after apply) 2025-06-22 19:09:52.705319 | orchestrator | 19:09:52.705 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-06-22 19:09:52.705369 | orchestrator | 19:09:52.705 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-06-22 19:09:52.705394 | orchestrator | 19:09:52.705 STDOUT terraform:  + content_md5 = (known after apply) 2025-06-22 19:09:52.705440 | orchestrator | 19:09:52.705 STDOUT terraform:  + content_sha1 = (known after apply) 2025-06-22 19:09:52.705471 | orchestrator | 19:09:52.705 STDOUT terraform:  + content_sha256 = (known after apply) 2025-06-22 19:09:52.705508 | orchestrator | 19:09:52.705 STDOUT terraform:  + content_sha512 = (known after apply) 2025-06-22 19:09:52.705541 | orchestrator | 19:09:52.705 STDOUT terraform:  + directory_permission = "0777" 2025-06-22 19:09:52.705564 | orchestrator | 19:09:52.705 STDOUT terraform:  + file_permission = "0644" 2025-06-22 19:09:52.705612 | orchestrator | 19:09:52.705 STDOUT terraform:  + filename = ".MANAGER_ADDRESS.ci" 2025-06-22 19:09:52.705644 | orchestrator | 19:09:52.705 STDOUT 
terraform:  + id = (known after apply) 2025-06-22 19:09:52.705651 | orchestrator | 19:09:52.705 STDOUT terraform:  } 2025-06-22 19:09:52.705682 | orchestrator | 19:09:52.705 STDOUT terraform:  # local_file.id_rsa_pub will be created 2025-06-22 19:09:52.705728 | orchestrator | 19:09:52.705 STDOUT terraform:  + resource "local_file" "id_rsa_pub" { 2025-06-22 19:09:52.705767 | orchestrator | 19:09:52.705 STDOUT terraform:  + content = (known after apply) 2025-06-22 19:09:52.705805 | orchestrator | 19:09:52.705 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-06-22 19:09:52.705840 | orchestrator | 19:09:52.705 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-06-22 19:09:52.705877 | orchestrator | 19:09:52.705 STDOUT terraform:  + content_md5 = (known after apply) 2025-06-22 19:09:52.705915 | orchestrator | 19:09:52.705 STDOUT terraform:  + content_sha1 = (known after apply) 2025-06-22 19:09:52.705956 | orchestrator | 19:09:52.705 STDOUT terraform:  + content_sha256 = (known after apply) 2025-06-22 19:09:52.705991 | orchestrator | 19:09:52.705 STDOUT terraform:  + content_sha512 = (known after apply) 2025-06-22 19:09:52.706034 | orchestrator | 19:09:52.705 STDOUT terraform:  + directory_permission = "0777" 2025-06-22 19:09:52.706063 | orchestrator | 19:09:52.706 STDOUT terraform:  + file_permission = "0644" 2025-06-22 19:09:52.706096 | orchestrator | 19:09:52.706 STDOUT terraform:  + filename = ".id_rsa.ci.pub" 2025-06-22 19:09:52.706139 | orchestrator | 19:09:52.706 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.706147 | orchestrator | 19:09:52.706 STDOUT terraform:  } 2025-06-22 19:09:52.706188 | orchestrator | 19:09:52.706 STDOUT terraform:  # local_file.inventory will be created 2025-06-22 19:09:52.706227 | orchestrator | 19:09:52.706 STDOUT terraform:  + resource "local_file" "inventory" { 2025-06-22 19:09:52.706264 | orchestrator | 19:09:52.706 STDOUT terraform:  + content = (known after apply) 2025-06-22 19:09:52.706300 | orchestrator | 19:09:52.706 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-06-22 19:09:52.706339 | orchestrator | 19:09:52.706 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-06-22 19:09:52.706430 | orchestrator | 19:09:52.706 STDOUT terraform:  + content_md5 = (known after apply) 2025-06-22 19:09:52.706472 | orchestrator | 19:09:52.706 STDOUT terraform:  + content_sha1 = (known after apply) 2025-06-22 19:09:52.706509 | orchestrator | 19:09:52.706 STDOUT terraform:  + content_sha256 = (known after apply) 2025-06-22 19:09:52.706547 | orchestrator | 19:09:52.706 STDOUT terraform:  + content_sha512 = (known after apply) 2025-06-22 19:09:52.706572 | orchestrator | 19:09:52.706 STDOUT terraform:  + directory_permission = "0777" 2025-06-22 19:09:52.706609 | orchestrator | 19:09:52.706 STDOUT terraform:  + file_permission = "0644" 2025-06-22 19:09:52.706632 | orchestrator | 19:09:52.706 STDOUT terraform:  + filename = "inventory.ci" 2025-06-22 19:09:52.706671 | orchestrator | 19:09:52.706 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.706679 | orchestrator | 19:09:52.706 STDOUT terraform:  } 2025-06-22 19:09:52.706742 | orchestrator | 19:09:52.706 STDOUT terraform:  # local_sensitive_file.id_rsa will be created 2025-06-22 19:09:52.706781 | orchestrator | 19:09:52.706 STDOUT terraform:  + resource "local_sensitive_file" "id_rsa" { 2025-06-22 19:09:52.706810 | orchestrator | 19:09:52.706 STDOUT terraform:  + content = (sensitive value) 2025-06-22 
19:09:52.706845 | orchestrator | 19:09:52.706 STDOUT terraform:  + content_base64sha256 = (known after apply) 2025-06-22 19:09:52.706882 | orchestrator | 19:09:52.706 STDOUT terraform:  + content_base64sha512 = (known after apply) 2025-06-22 19:09:52.706921 | orchestrator | 19:09:52.706 STDOUT terraform:  + content_md5 = (known after apply) 2025-06-22 19:09:52.706960 | orchestrator | 19:09:52.706 STDOUT terraform:  + content_sha1 = (known after apply) 2025-06-22 19:09:52.706994 | orchestrator | 19:09:52.706 STDOUT terraform:  + content_sha256 = (known after apply) 2025-06-22 19:09:52.707040 | orchestrator | 19:09:52.706 STDOUT terraform:  + content_sha512 = (known after apply) 2025-06-22 19:09:52.707066 | orchestrator | 19:09:52.707 STDOUT terraform:  + directory_permission = "0700" 2025-06-22 19:09:52.707093 | orchestrator | 19:09:52.707 STDOUT terraform:  + file_permission = "0600" 2025-06-22 19:09:52.707123 | orchestrator | 19:09:52.707 STDOUT terraform:  + filename = ".id_rsa.ci" 2025-06-22 19:09:52.707214 | orchestrator | 19:09:52.707 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.707226 | orchestrator | 19:09:52.707 STDOUT terraform:  } 2025-06-22 19:09:52.707265 | orchestrator | 19:09:52.707 STDOUT terraform:  # null_resource.node_semaphore will be created 2025-06-22 19:09:52.707287 | orchestrator | 19:09:52.707 STDOUT terraform:  + resource "null_resource" "node_semaphore" { 2025-06-22 19:09:52.707317 | orchestrator | 19:09:52.707 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.707325 | orchestrator | 19:09:52.707 STDOUT terraform:  } 2025-06-22 19:09:52.707390 | orchestrator | 19:09:52.707 STDOUT terraform:  # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created 2025-06-22 19:09:52.707455 | orchestrator | 19:09:52.707 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "manager_base_volume" { 2025-06-22 19:09:52.707530 | orchestrator | 19:09:52.707 STDOUT terraform:  + attachment = (known after apply) 2025-06-22 19:09:52.707547 | orchestrator | 19:09:52.707 STDOUT terraform:  + availability_zone = "nova" 2025-06-22 19:09:52.707590 | orchestrator | 19:09:52.707 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.707658 | orchestrator | 19:09:52.707 STDOUT terraform:  + image_id = (known after apply) 2025-06-22 19:09:52.707693 | orchestrator | 19:09:52.707 STDOUT terraform:  + metadata = (known after apply) 2025-06-22 19:09:52.707749 | orchestrator | 19:09:52.707 STDOUT terraform:  + name = "testbed-volume-manager-base" 2025-06-22 19:09:52.707781 | orchestrator | 19:09:52.707 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:09:52.707807 | orchestrator | 19:09:52.707 STDOUT terraform:  + size = 80 2025-06-22 19:09:52.707842 | orchestrator | 19:09:52.707 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-22 19:09:52.707850 | orchestrator | 19:09:52.707 STDOUT terraform:  + volume_type = "ssd" 2025-06-22 19:09:52.707871 | orchestrator | 19:09:52.707 STDOUT terraform:  } 2025-06-22 19:09:52.707918 | orchestrator | 19:09:52.707 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[0] will be created 2025-06-22 19:09:52.707967 | orchestrator | 19:09:52.707 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-06-22 19:09:52.708003 | orchestrator | 19:09:52.707 STDOUT terraform:  + attachment = (known after apply) 2025-06-22 19:09:52.708022 | orchestrator | 19:09:52.707 STDOUT terraform:  + availability_zone = "nova" 2025-06-22 
19:09:52.708059 | orchestrator | 19:09:52.708 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.708108 | orchestrator | 19:09:52.708 STDOUT terraform:  + image_id = (known after apply) 2025-06-22 19:09:52.708136 | orchestrator | 19:09:52.708 STDOUT terraform:  + metadata = (known after apply) 2025-06-22 19:09:52.708208 | orchestrator | 19:09:52.708 STDOUT terraform:  + name = "testbed-volume-0-node-base" 2025-06-22 19:09:52.708246 | orchestrator | 19:09:52.708 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:09:52.708270 | orchestrator | 19:09:52.708 STDOUT terraform:  + size = 80 2025-06-22 19:09:52.708309 | orchestrator | 19:09:52.708 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-22 19:09:52.708317 | orchestrator | 19:09:52.708 STDOUT terraform:  + volume_type = "ssd" 2025-06-22 19:09:52.708325 | orchestrator | 19:09:52.708 STDOUT terraform:  } 2025-06-22 19:09:52.708389 | orchestrator | 19:09:52.708 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[1] will be created 2025-06-22 19:09:52.708445 | orchestrator | 19:09:52.708 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-06-22 19:09:52.708474 | orchestrator | 19:09:52.708 STDOUT terraform:  + attachment = (known after apply) 2025-06-22 19:09:52.708488 | orchestrator | 19:09:52.708 STDOUT terraform:  + availability_zone = "nova" 2025-06-22 19:09:52.708530 | orchestrator | 19:09:52.708 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.708613 | orchestrator | 19:09:52.708 STDOUT terraform:  + image_id = (known after apply) 2025-06-22 19:09:52.708653 | orchestrator | 19:09:52.708 STDOUT terraform:  + metadata = (known after apply) 2025-06-22 19:09:52.708701 | orchestrator | 19:09:52.708 STDOUT terraform:  + name = "testbed-volume-1-node-base" 2025-06-22 19:09:52.708738 | orchestrator | 19:09:52.708 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:09:52.708761 | orchestrator | 19:09:52.708 STDOUT terraform:  + size = 80 2025-06-22 19:09:52.708788 | orchestrator | 19:09:52.708 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-22 19:09:52.708814 | orchestrator | 19:09:52.708 STDOUT terraform:  + volume_type = "ssd" 2025-06-22 19:09:52.708821 | orchestrator | 19:09:52.708 STDOUT terraform:  } 2025-06-22 19:09:52.708870 | orchestrator | 19:09:52.708 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[2] will be created 2025-06-22 19:09:52.708919 | orchestrator | 19:09:52.708 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-06-22 19:09:52.708958 | orchestrator | 19:09:52.708 STDOUT terraform:  + attachment = (known after apply) 2025-06-22 19:09:52.708986 | orchestrator | 19:09:52.708 STDOUT terraform:  + availability_zone = "nova" 2025-06-22 19:09:52.709019 | orchestrator | 19:09:52.708 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.709065 | orchestrator | 19:09:52.709 STDOUT terraform:  + image_id = (known after apply) 2025-06-22 19:09:52.709092 | orchestrator | 19:09:52.709 STDOUT terraform:  + metadata = (known after apply) 2025-06-22 19:09:52.709140 | orchestrator | 19:09:52.709 STDOUT terraform:  + name = "testbed-volume-2-node-base" 2025-06-22 19:09:52.709188 | orchestrator | 19:09:52.709 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:09:52.709209 | orchestrator | 19:09:52.709 STDOUT terraform:  + size = 80 2025-06-22 19:09:52.709234 | orchestrator | 19:09:52.709 STDOUT terraform:  + 
volume_retype_policy = "never" 2025-06-22 19:09:52.709261 | orchestrator | 19:09:52.709 STDOUT terraform:  + volume_type = "ssd" 2025-06-22 19:09:52.709268 | orchestrator | 19:09:52.709 STDOUT terraform:  } 2025-06-22 19:09:52.709317 | orchestrator | 19:09:52.709 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[3] will be created 2025-06-22 19:09:52.709362 | orchestrator | 19:09:52.709 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-06-22 19:09:52.709399 | orchestrator | 19:09:52.709 STDOUT terraform:  + attachment = (known after apply) 2025-06-22 19:09:52.709423 | orchestrator | 19:09:52.709 STDOUT terraform:  + availability_zone = "nova" 2025-06-22 19:09:52.709461 | orchestrator | 19:09:52.709 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.709496 | orchestrator | 19:09:52.709 STDOUT terraform:  + image_id = (known after apply) 2025-06-22 19:09:52.709532 | orchestrator | 19:09:52.709 STDOUT terraform:  + metadata = (known after apply) 2025-06-22 19:09:52.709576 | orchestrator | 19:09:52.709 STDOUT terraform:  + name = "testbed-volume-3-node-base" 2025-06-22 19:09:52.709615 | orchestrator | 19:09:52.709 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:09:52.709677 | orchestrator | 19:09:52.709 STDOUT terraform:  + size = 80 2025-06-22 19:09:52.709704 | orchestrator | 19:09:52.709 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-22 19:09:52.709729 | orchestrator | 19:09:52.709 STDOUT terraform:  + volume_type = "ssd" 2025-06-22 19:09:52.709737 | orchestrator | 19:09:52.709 STDOUT terraform:  } 2025-06-22 19:09:52.709790 | orchestrator | 19:09:52.709 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[4] will be created 2025-06-22 19:09:52.709838 | orchestrator | 19:09:52.709 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-06-22 19:09:52.709870 | orchestrator | 19:09:52.709 STDOUT terraform:  + attachment = (known after apply) 2025-06-22 19:09:52.709899 | orchestrator | 19:09:52.709 STDOUT terraform:  + availability_zone = "nova" 2025-06-22 19:09:52.709938 | orchestrator | 19:09:52.709 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.709976 | orchestrator | 19:09:52.709 STDOUT terraform:  + image_id = (known after apply) 2025-06-22 19:09:52.710034 | orchestrator | 19:09:52.709 STDOUT terraform:  + metadata = (known after apply) 2025-06-22 19:09:52.710075 | orchestrator | 19:09:52.710 STDOUT terraform:  + name = "testbed-volume-4-node-base" 2025-06-22 19:09:52.710111 | orchestrator | 19:09:52.710 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:09:52.710142 | orchestrator | 19:09:52.710 STDOUT terraform:  + size = 80 2025-06-22 19:09:52.710199 | orchestrator | 19:09:52.710 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-22 19:09:52.710211 | orchestrator | 19:09:52.710 STDOUT terraform:  + volume_type = "ssd" 2025-06-22 19:09:52.710231 | orchestrator | 19:09:52.710 STDOUT terraform:  } 2025-06-22 19:09:52.710279 | orchestrator | 19:09:52.710 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[5] will be created 2025-06-22 19:09:52.710325 | orchestrator | 19:09:52.710 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2025-06-22 19:09:52.710363 | orchestrator | 19:09:52.710 STDOUT terraform:  + attachment = (known after apply) 2025-06-22 19:09:52.710389 | orchestrator | 19:09:52.710 STDOUT terraform:  + availability_zone = "nova" 
2025-06-22 19:09:52.710426 | orchestrator | 19:09:52.710 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.710465 | orchestrator | 19:09:52.710 STDOUT terraform:  + image_id = (known after apply) 2025-06-22 19:09:52.710507 | orchestrator | 19:09:52.710 STDOUT terraform:  + metadata = (known after apply) 2025-06-22 19:09:52.710553 | orchestrator | 19:09:52.710 STDOUT terraform:  + name = "testbed-volume-5-node-base" 2025-06-22 19:09:52.710592 | orchestrator | 19:09:52.710 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:09:52.710616 | orchestrator | 19:09:52.710 STDOUT terraform:  + size = 80 2025-06-22 19:09:52.710641 | orchestrator | 19:09:52.710 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-22 19:09:52.710666 | orchestrator | 19:09:52.710 STDOUT terraform:  + volume_type = "ssd" 2025-06-22 19:09:52.710673 | orchestrator | 19:09:52.710 STDOUT terraform:  } 2025-06-22 19:09:52.717757 | orchestrator | 19:09:52.710 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[0] will be created 2025-06-22 19:09:52.717791 | orchestrator | 19:09:52.710 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-22 19:09:52.717802 | orchestrator | 19:09:52.710 STDOUT terraform:  + attachment = (known after apply) 2025-06-22 19:09:52.717807 | orchestrator | 19:09:52.710 STDOUT terraform:  + availability_zone = "nova" 2025-06-22 19:09:52.717811 | orchestrator | 19:09:52.710 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.717815 | orchestrator | 19:09:52.710 STDOUT terraform:  + metadata = (known after apply) 2025-06-22 19:09:52.717820 | orchestrator | 19:09:52.710 STDOUT terraform:  + name = "testbed-volume-0-node-3" 2025-06-22 19:09:52.717824 | orchestrator | 19:09:52.710 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:09:52.717828 | orchestrator | 19:09:52.711 STDOUT terraform:  + size = 20 2025-06-22 19:09:52.717832 | orchestrator | 19:09:52.711 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-22 19:09:52.717836 | orchestrator | 19:09:52.711 STDOUT terraform:  + volume_type = "ssd" 2025-06-22 19:09:52.717839 | orchestrator | 19:09:52.711 STDOUT terraform:  } 2025-06-22 19:09:52.717843 | orchestrator | 19:09:52.711 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created 2025-06-22 19:09:52.717847 | orchestrator | 19:09:52.711 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-22 19:09:52.717851 | orchestrator | 19:09:52.711 STDOUT terraform:  + attachment = (known after apply) 2025-06-22 19:09:52.717855 | orchestrator | 19:09:52.711 STDOUT terraform:  + availability_zone = "nova" 2025-06-22 19:09:52.717858 | orchestrator | 19:09:52.711 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.717862 | orchestrator | 19:09:52.711 STDOUT terraform:  + metadata = (known after apply) 2025-06-22 19:09:52.717866 | orchestrator | 19:09:52.711 STDOUT terraform:  + name = "testbed-volume-1-node-4" 2025-06-22 19:09:52.717869 | orchestrator | 19:09:52.711 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:09:52.717875 | orchestrator | 19:09:52.711 STDOUT terraform:  + size = 20 2025-06-22 19:09:52.717879 | orchestrator | 19:09:52.711 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-22 19:09:52.717882 | orchestrator | 19:09:52.711 STDOUT terraform:  + volume_type = "ssd" 2025-06-22 19:09:52.717886 | orchestrator | 19:09:52.711 STDOUT terraform:  } 2025-06-22 19:09:52.717890 | orchestrator 
| 19:09:52.711 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-06-22 19:09:52.717894 | orchestrator | 19:09:52.711 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-22 19:09:52.717898 | orchestrator | 19:09:52.711 STDOUT terraform:  + attachment = (known after apply) 2025-06-22 19:09:52.717910 | orchestrator | 19:09:52.711 STDOUT terraform:  + availability_zone = "nova" 2025-06-22 19:09:52.717913 | orchestrator | 19:09:52.711 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.717917 | orchestrator | 19:09:52.711 STDOUT terraform:  + metadata = (known after apply) 2025-06-22 19:09:52.717921 | orchestrator | 19:09:52.711 STDOUT terraform:  + name = "testbed-volume-2-node-5" 2025-06-22 19:09:52.717925 | orchestrator | 19:09:52.711 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:09:52.717928 | orchestrator | 19:09:52.711 STDOUT terraform:  + size = 20 2025-06-22 19:09:52.717935 | orchestrator | 19:09:52.711 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-22 19:09:52.717939 | orchestrator | 19:09:52.711 STDOUT terraform:  + volume_type = "ssd" 2025-06-22 19:09:52.717942 | orchestrator | 19:09:52.711 STDOUT terraform:  } 2025-06-22 19:09:52.717946 | orchestrator | 19:09:52.711 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-06-22 19:09:52.717958 | orchestrator | 19:09:52.711 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-22 19:09:52.717962 | orchestrator | 19:09:52.711 STDOUT terraform:  + attachment = (known after apply) 2025-06-22 19:09:52.717966 | orchestrator | 19:09:52.711 STDOUT terraform:  + availability_zone = "nova" 2025-06-22 19:09:52.717969 | orchestrator | 19:09:52.711 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.717973 | orchestrator | 19:09:52.711 STDOUT terraform:  + metadata = (known after apply) 2025-06-22 19:09:52.717976 | orchestrator | 19:09:52.712 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-06-22 19:09:52.717980 | orchestrator | 19:09:52.712 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:09:52.717984 | orchestrator | 19:09:52.712 STDOUT terraform:  + size = 20 2025-06-22 19:09:52.717988 | orchestrator | 19:09:52.712 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-22 19:09:52.717991 | orchestrator | 19:09:52.712 STDOUT terraform:  + volume_type = "ssd" 2025-06-22 19:09:52.717995 | orchestrator | 19:09:52.712 STDOUT terraform:  } 2025-06-22 19:09:52.717999 | orchestrator | 19:09:52.712 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-06-22 19:09:52.718002 | orchestrator | 19:09:52.712 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-22 19:09:52.718006 | orchestrator | 19:09:52.712 STDOUT terraform:  + attachment = (known after apply) 2025-06-22 19:09:52.718010 | orchestrator | 19:09:52.712 STDOUT terraform:  + availability_zone = "nova" 2025-06-22 19:09:52.718029 | orchestrator | 19:09:52.712 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.718033 | orchestrator | 19:09:52.712 STDOUT terraform:  + metadata = (known after apply) 2025-06-22 19:09:52.718037 | orchestrator | 19:09:52.712 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-06-22 19:09:52.718040 | orchestrator | 19:09:52.712 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:09:52.718047 | orchestrator | 19:09:52.712 STDOUT 
terraform:  + size = 20 2025-06-22 19:09:52.718051 | orchestrator | 19:09:52.712 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-22 19:09:52.718055 | orchestrator | 19:09:52.712 STDOUT terraform:  + volume_type = "ssd" 2025-06-22 19:09:52.718059 | orchestrator | 19:09:52.712 STDOUT terraform:  } 2025-06-22 19:09:52.718062 | orchestrator | 19:09:52.712 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-06-22 19:09:52.718066 | orchestrator | 19:09:52.712 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-22 19:09:52.718070 | orchestrator | 19:09:52.712 STDOUT terraform:  + attachment = (known after apply) 2025-06-22 19:09:52.718073 | orchestrator | 19:09:52.712 STDOUT terraform:  + availability_zone = "nova" 2025-06-22 19:09:52.718077 | orchestrator | 19:09:52.712 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.718081 | orchestrator | 19:09:52.712 STDOUT terraform:  + metadata = (known after apply) 2025-06-22 19:09:52.718084 | orchestrator | 19:09:52.713 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-06-22 19:09:52.718088 | orchestrator | 19:09:52.714 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:09:52.718095 | orchestrator | 19:09:52.714 STDOUT terraform:  + size = 20 2025-06-22 19:09:52.718099 | orchestrator | 19:09:52.714 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-22 19:09:52.718103 | orchestrator | 19:09:52.714 STDOUT terraform:  + volume_type = "ssd" 2025-06-22 19:09:52.718107 | orchestrator | 19:09:52.714 STDOUT terraform:  } 2025-06-22 19:09:52.718110 | orchestrator | 19:09:52.714 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-06-22 19:09:52.718114 | orchestrator | 19:09:52.714 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-22 19:09:52.718122 | orchestrator | 19:09:52.714 STDOUT terraform:  + attachment = (known after apply) 2025-06-22 19:09:52.718126 | orchestrator | 19:09:52.714 STDOUT terraform:  + availability_zone = "nova" 2025-06-22 19:09:52.718129 | orchestrator | 19:09:52.714 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.718133 | orchestrator | 19:09:52.714 STDOUT terraform:  + metadata = (known after apply) 2025-06-22 19:09:52.718137 | orchestrator | 19:09:52.714 STDOUT terraform:  + name = "testbed-volume-6-node-3" 2025-06-22 19:09:52.718141 | orchestrator | 19:09:52.714 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:09:52.718144 | orchestrator | 19:09:52.714 STDOUT terraform:  + size = 20 2025-06-22 19:09:52.718148 | orchestrator | 19:09:52.714 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-22 19:09:52.718152 | orchestrator | 19:09:52.714 STDOUT terraform:  + volume_type = "ssd" 2025-06-22 19:09:52.718155 | orchestrator | 19:09:52.714 STDOUT terraform:  } 2025-06-22 19:09:52.718159 | orchestrator | 19:09:52.714 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-06-22 19:09:52.718163 | orchestrator | 19:09:52.714 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-22 19:09:52.718198 | orchestrator | 19:09:52.714 STDOUT terraform:  + attachment = (known after apply) 2025-06-22 19:09:52.718202 | orchestrator | 19:09:52.714 STDOUT terraform:  + availability_zone = "nova" 2025-06-22 19:09:52.718206 | orchestrator | 19:09:52.714 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.718210 | orchestrator | 
19:09:52.714 STDOUT terraform:  + metadata = (known after apply) 2025-06-22 19:09:52.718213 | orchestrator | 19:09:52.714 STDOUT terraform:  + name = "testbed-volume-7-node-4" 2025-06-22 19:09:52.718217 | orchestrator | 19:09:52.714 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:09:52.718221 | orchestrator | 19:09:52.714 STDOUT terraform:  + size = 20 2025-06-22 19:09:52.718224 | orchestrator | 19:09:52.714 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-22 19:09:52.718228 | orchestrator | 19:09:52.714 STDOUT terraform:  + volume_type = "ssd" 2025-06-22 19:09:52.718232 | orchestrator | 19:09:52.714 STDOUT terraform:  } 2025-06-22 19:09:52.718235 | orchestrator | 19:09:52.714 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-06-22 19:09:52.718239 | orchestrator | 19:09:52.714 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-06-22 19:09:52.718243 | orchestrator | 19:09:52.714 STDOUT terraform:  + attachment = (known after apply) 2025-06-22 19:09:52.718246 | orchestrator | 19:09:52.715 STDOUT terraform:  + availability_zone = "nova" 2025-06-22 19:09:52.718250 | orchestrator | 19:09:52.715 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.718254 | orchestrator | 19:09:52.715 STDOUT terraform:  + metadata = (known after apply) 2025-06-22 19:09:52.718257 | orchestrator | 19:09:52.715 STDOUT terraform:  + name = "testbed-volume-8-node-5" 2025-06-22 19:09:52.718261 | orchestrator | 19:09:52.715 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:09:52.718265 | orchestrator | 19:09:52.715 STDOUT terraform:  + size = 20 2025-06-22 19:09:52.718268 | orchestrator | 19:09:52.715 STDOUT terraform:  + volume_retype_policy = "never" 2025-06-22 19:09:52.718272 | orchestrator | 19:09:52.715 STDOUT terraform:  + volume_type = "ssd" 2025-06-22 19:09:52.718276 | orchestrator | 19:09:52.715 STDOUT terraform:  } 2025-06-22 19:09:52.718283 | orchestrator | 19:09:52.715 STDOUT terraform:  # openstack_compute_instance_v2.manager_server will be created 2025-06-22 19:09:52.718286 | orchestrator | 19:09:52.715 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" { 2025-06-22 19:09:52.718290 | orchestrator | 19:09:52.715 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-06-22 19:09:52.718298 | orchestrator | 19:09:52.715 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-06-22 19:09:52.718302 | orchestrator | 19:09:52.715 STDOUT terraform:  + all_metadata = (known after apply) 2025-06-22 19:09:52.718306 | orchestrator | 19:09:52.715 STDOUT terraform:  + all_tags = (known after apply) 2025-06-22 19:09:52.718309 | orchestrator | 19:09:52.715 STDOUT terraform:  + availability_zone = "nova" 2025-06-22 19:09:52.718316 | orchestrator | 19:09:52.715 STDOUT terraform:  + config_drive = true 2025-06-22 19:09:52.718320 | orchestrator | 19:09:52.715 STDOUT terraform:  + created = (known after apply) 2025-06-22 19:09:52.718324 | orchestrator | 19:09:52.715 STDOUT terraform:  + flavor_id = (known after apply) 2025-06-22 19:09:52.718328 | orchestrator | 19:09:52.715 STDOUT terraform:  + flavor_name = "OSISM-4V-16" 2025-06-22 19:09:52.718331 | orchestrator | 19:09:52.715 STDOUT terraform:  + force_delete = false 2025-06-22 19:09:52.718335 | orchestrator | 19:09:52.715 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-06-22 19:09:52.718338 | orchestrator | 19:09:52.715 STDOUT terraform:  + id = (known after apply) 2025-06-22 
19:09:52.718342 | orchestrator | 19:09:52.715 STDOUT terraform:  + image_id = (known after apply) 2025-06-22 19:09:52.718346 | orchestrator | 19:09:52.715 STDOUT terraform:  + image_name = (known after apply) 2025-06-22 19:09:52.718349 | orchestrator | 19:09:52.715 STDOUT terraform:  + key_pair = "testbed" 2025-06-22 19:09:52.718353 | orchestrator | 19:09:52.715 STDOUT terraform:  + name = "testbed-manager" 2025-06-22 19:09:52.718357 | orchestrator | 19:09:52.715 STDOUT terraform:  + power_state = "active" 2025-06-22 19:09:52.718360 | orchestrator | 19:09:52.715 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:09:52.718364 | orchestrator | 19:09:52.715 STDOUT terraform:  + security_groups = (known after apply) 2025-06-22 19:09:52.718368 | orchestrator | 19:09:52.715 STDOUT terraform:  + stop_before_destroy = false 2025-06-22 19:09:52.718371 | orchestrator | 19:09:52.715 STDOUT terraform:  + updated = (known after apply) 2025-06-22 19:09:52.718375 | orchestrator | 19:09:52.715 STDOUT terraform:  + user_data = (known after apply) 2025-06-22 19:09:52.718379 | orchestrator | 19:09:52.716 STDOUT terraform:  + block_device { 2025-06-22 19:09:52.718383 | orchestrator | 19:09:52.716 STDOUT terraform:  + boot_index = 0 2025-06-22 19:09:52.718386 | orchestrator | 19:09:52.716 STDOUT terraform:  + delete_on_termination = false 2025-06-22 19:09:52.718390 | orchestrator | 19:09:52.716 STDOUT terraform:  + destination_type = "volume" 2025-06-22 19:09:52.718394 | orchestrator | 19:09:52.716 STDOUT terraform:  + multiattach = false 2025-06-22 19:09:52.718397 | orchestrator | 19:09:52.716 STDOUT terraform:  + source_type = "volume" 2025-06-22 19:09:52.718401 | orchestrator | 19:09:52.716 STDOUT terraform:  + uuid = (known after apply) 2025-06-22 19:09:52.718405 | orchestrator | 19:09:52.716 STDOUT terraform:  } 2025-06-22 19:09:52.718409 | orchestrator | 19:09:52.716 STDOUT terraform:  + network { 2025-06-22 19:09:52.718412 | orchestrator | 19:09:52.716 STDOUT terraform:  + access_network = false 2025-06-22 19:09:52.718416 | orchestrator | 19:09:52.716 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-06-22 19:09:52.718420 | orchestrator | 19:09:52.716 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-06-22 19:09:52.718423 | orchestrator | 19:09:52.716 STDOUT terraform:  + mac = (known after apply) 2025-06-22 19:09:52.718431 | orchestrator | 19:09:52.716 STDOUT terraform:  + name = (known after apply) 2025-06-22 19:09:52.718435 | orchestrator | 19:09:52.716 STDOUT terraform:  + port = (known after apply) 2025-06-22 19:09:52.718438 | orchestrator | 19:09:52.716 STDOUT terraform:  + uuid = (known after apply) 2025-06-22 19:09:52.718442 | orchestrator | 19:09:52.716 STDOUT terraform:  } 2025-06-22 19:09:52.718446 | orchestrator | 19:09:52.716 STDOUT terraform:  } 2025-06-22 19:09:52.718457 | orchestrator | 19:09:52.716 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be created 2025-06-22 19:09:52.718461 | orchestrator | 19:09:52.716 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-06-22 19:09:52.718464 | orchestrator | 19:09:52.716 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-06-22 19:09:52.718471 | orchestrator | 19:09:52.716 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-06-22 19:09:52.718475 | orchestrator | 19:09:52.716 STDOUT terraform:  + all_metadata = (known after apply) 2025-06-22 19:09:52.718478 | orchestrator | 19:09:52.716 STDOUT terraform:  + all_tags = (known after apply) 
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-0"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[1] through node_server[5] will be created
  # with identical attributes, differing only in name ("testbed-node-1" ... "testbed-node-5").

  # openstack_compute_keypair_v2.key will be created
  + resource "openstack_compute_keypair_v2" "key" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + name        = "testbed"
      + private_key = (sensitive value)
      + public_key  = (known after apply)
      + region      = (known after apply)
      + user_id     = (known after apply)
    }
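The node_server and keypair plan entries above map almost one-to-one onto HCL. A minimal sketch of what the underlying Terraform for these instances could look like, assuming a count of 6 and assumed names for the boot-volume resource and the user-data file (only the values printed in the plan are taken from this log):

resource "openstack_compute_instance_v2" "node_server" {
  count             = 6                                   # testbed-node-0 .. testbed-node-5
  name              = "testbed-node-${count.index}"
  availability_zone = "nova"
  flavor_name       = "OSISM-8V-32"
  key_pair          = openstack_compute_keypair_v2.key.name
  config_drive      = true
  power_state       = "active"
  user_data         = file("user_data.yml")               # assumed file; the plan only shows a hash

  block_device {                                          # boot from a pre-created volume
    boot_index            = 0
    source_type           = "volume"
    destination_type      = "volume"
    delete_on_termination = false
    uuid                  = openstack_blockstorage_volume_v3.node_volume[count.index].id  # assumed resource name
  }

  network {                                               # attach the pre-created management port
    port = openstack_networking_port_v2.node_port_management[count.index].id
  }
}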
  # openstack_compute_volume_attach_v2.node_volume_attachment[0] through
  # node_volume_attachment[8] will be created (nine identical attachments)
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }
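Each of the nine volume attachments above only references IDs that are known after apply. As a rough sketch (the extra-volume resource name and the volume-to-node mapping are assumptions, not taken from this log):

resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count       = 9
  instance_id = element(openstack_compute_instance_v2.node_server[*].id, count.index)   # assumed mapping
  volume_id   = openstack_blockstorage_volume_v3.node_extra_volume[count.index].id      # assumed resource name
}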
"openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" { 2025-06-22 19:09:52.726734 | orchestrator | 19:09:52.726 STDOUT terraform:  + fixed_ip = (known after apply) 2025-06-22 19:09:52.726737 | orchestrator | 19:09:52.726 STDOUT terraform:  + floating_ip = (known after apply) 2025-06-22 19:09:52.726741 | orchestrator | 19:09:52.726 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.726745 | orchestrator | 19:09:52.726 STDOUT terraform:  + port_id = (known after apply) 2025-06-22 19:09:52.726757 | orchestrator | 19:09:52.726 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:09:52.726761 | orchestrator | 19:09:52.726 STDOUT terraform:  } 2025-06-22 19:09:52.726768 | orchestrator | 19:09:52.726 STDOUT terraform:  # openstack_networking_floatingip_v2.manager_floating_ip will be created 2025-06-22 19:09:52.726772 | orchestrator | 19:09:52.726 STDOUT terraform:  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" { 2025-06-22 19:09:52.726775 | orchestrator | 19:09:52.726 STDOUT terraform:  + address = (known after apply) 2025-06-22 19:09:52.726779 | orchestrator | 19:09:52.726 STDOUT terraform:  + all_tags = (known after apply) 2025-06-22 19:09:52.726783 | orchestrator | 19:09:52.726 STDOUT terraform:  + dns_domain = (known after apply) 2025-06-22 19:09:52.726788 | orchestrator | 19:09:52.726 STDOUT terraform:  + dns_name = (known after apply) 2025-06-22 19:09:52.726821 | orchestrator | 19:09:52.726 STDOUT terraform:  + fixed_ip = (known after apply) 2025-06-22 19:09:52.726855 | orchestrator | 19:09:52.726 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.726864 | orchestrator | 19:09:52.726 STDOUT terraform:  + pool = "public" 2025-06-22 19:09:52.726896 | orchestrator | 19:09:52.726 STDOUT terraform:  + port_id = (known after apply) 2025-06-22 19:09:52.726922 | orchestrator | 19:09:52.726 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:09:52.726950 | orchestrator | 19:09:52.726 STDOUT terraform:  + subnet_id = (known after apply) 2025-06-22 19:09:52.726984 | orchestrator | 19:09:52.726 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-22 19:09:52.726992 | orchestrator | 19:09:52.726 STDOUT terraform:  } 2025-06-22 19:09:52.727034 | orchestrator | 19:09:52.726 STDOUT terraform:  # openstack_networking_network_v2.net_management will be created 2025-06-22 19:09:52.727083 | orchestrator | 19:09:52.727 STDOUT terraform:  + resource "openstack_networking_network_v2" "net_management" { 2025-06-22 19:09:52.727127 | orchestrator | 19:09:52.727 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-22 19:09:52.727245 | orchestrator | 19:09:52.727 STDOUT terraform:  + all_tags = (known after apply) 2025-06-22 19:09:52.727255 | orchestrator | 19:09:52.727 STDOUT terraform:  + availability_zone_hints = [ 2025-06-22 19:09:52.727278 | orchestrator | 19:09:52.727 STDOUT terraform:  + "nova", 2025-06-22 19:09:52.727286 | orchestrator | 19:09:52.727 STDOUT terraform:  ] 2025-06-22 19:09:52.727330 | orchestrator | 19:09:52.727 STDOUT terraform:  + dns_domain = (known after apply) 2025-06-22 19:09:52.727374 | orchestrator | 19:09:52.727 STDOUT terraform:  + external = (known after apply) 2025-06-22 19:09:52.727410 | orchestrator | 19:09:52.727 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.727454 | orchestrator | 19:09:52.727 STDOUT terraform:  + mtu = (known after apply) 2025-06-22 19:09:52.727495 | orchestrator | 19:09:52.727 STDOUT terraform:  + name = 
"net-testbed-management" 2025-06-22 19:09:52.727536 | orchestrator | 19:09:52.727 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-06-22 19:09:52.727579 | orchestrator | 19:09:52.727 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-06-22 19:09:52.727625 | orchestrator | 19:09:52.727 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:09:52.727661 | orchestrator | 19:09:52.727 STDOUT terraform:  + shared = (known after apply) 2025-06-22 19:09:52.727704 | orchestrator | 19:09:52.727 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-22 19:09:52.727741 | orchestrator | 19:09:52.727 STDOUT terraform:  + transparent_vlan = (known after apply) 2025-06-22 19:09:52.727766 | orchestrator | 19:09:52.727 STDOUT terraform:  + segments (known after apply) 2025-06-22 19:09:52.727774 | orchestrator | 19:09:52.727 STDOUT terraform:  } 2025-06-22 19:09:52.727868 | orchestrator | 19:09:52.727 STDOUT terraform:  # openstack_networking_port_v2.manager_port_management will be created 2025-06-22 19:09:52.727923 | orchestrator | 19:09:52.727 STDOUT terraform:  + resource "openstack_networking_port_v2" "manager_port_management" { 2025-06-22 19:09:52.727957 | orchestrator | 19:09:52.727 STDOUT terraform:  + admin_state_up = (known after apply) 2025-06-22 19:09:52.727996 | orchestrator | 19:09:52.727 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-06-22 19:09:52.728033 | orchestrator | 19:09:52.727 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-06-22 19:09:52.728074 | orchestrator | 19:09:52.728 STDOUT terraform:  + all_tags = (known after apply) 2025-06-22 19:09:52.728115 | orchestrator | 19:09:52.728 STDOUT terraform:  + device_id = (known after apply) 2025-06-22 19:09:52.728152 | orchestrator | 19:09:52.728 STDOUT terraform:  + device_owner = (known after apply) 2025-06-22 19:09:52.728941 | orchestrator | 19:09:52.728 STDOUT terraform:  + dns_assignment = (known after apply) 2025-06-22 19:09:52.728948 | orchestrator | 19:09:52.728 STDOUT terraform:  + dns_name = (known after apply) 2025-06-22 19:09:52.728952 | orchestrator | 19:09:52.728 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.728956 | orchestrator | 19:09:52.728 STDOUT terraform:  + mac_address = (known after apply) 2025-06-22 19:09:52.728960 | orchestrator | 19:09:52.728 STDOUT terraform:  + network_id = (known after apply) 2025-06-22 19:09:52.728963 | orchestrator | 19:09:52.728 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-06-22 19:09:52.728967 | orchestrator | 19:09:52.728 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-06-22 19:09:52.728971 | orchestrator | 19:09:52.728 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:09:52.728974 | orchestrator | 19:09:52.728 STDOUT terraform:  + security_group_ids = (known after apply) 2025-06-22 19:09:52.728978 | orchestrator | 19:09:52.728 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-22 19:09:52.728982 | orchestrator | 19:09:52.728 STDOUT terraform:  + allowed_address_pairs { 2025-06-22 19:09:52.728986 | orchestrator | 19:09:52.728 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-06-22 19:09:52.728989 | orchestrator | 19:09:52.728 STDOUT terraform:  } 2025-06-22 19:09:52.728993 | orchestrator | 19:09:52.728 STDOUT terraform:  + allowed_address_pairs { 2025-06-22 19:09:52.728997 | orchestrator | 19:09:52.728 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-06-22 19:09:52.729000 | orchestrator | 19:09:52.728 STDOUT 
  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] through node_port_management[5]
  # will be created; the six ports share the attributes below and differ only in their
  # fixed IP address (192.168.16.10 for [0] up to 192.168.16.15 for [5]).
  + resource "openstack_networking_port_v2" "node_port_management" {
      # admin_state_up, all_fixed_ips, all_security_group_ids, all_tags, device_id,
      # device_owner, dns_assignment, dns_name, id, mac_address, network_id,
      # port_security_enabled, qos_policy_id, region, security_group_ids, tenant_id
      # and binding are all (known after apply), as for manager_port_management

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }

      + fixed_ip {
          + ip_address = "192.168.16.10" (one address per port, up to "192.168.16.15")
          + subnet_id  = (known after apply)
        }
    }
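The ports above pre-assign fixed addresses in 192.168.16.0/20 and whitelist additional (presumably virtual) addresses via allowed_address_pairs. A sketch of one way to express the node ports in HCL, assuming a subnet resource named subnet_management (the subnet name is not visible in this part of the log):

resource "openstack_networking_port_v2" "node_port_management" {
  count      = 6
  network_id = openstack_networking_network_v2.net_management.id

  fixed_ip {
    subnet_id  = openstack_networking_subnet_v2.subnet_management.id   # assumed name
    ip_address = "192.168.16.${10 + count.index}"                      # 192.168.16.10 .. 192.168.16.15
  }

  allowed_address_pairs {
    ip_address = "192.168.112.0/20"
  }
  allowed_address_pairs {
    ip_address = "192.168.16.254/20"
  }
  allowed_address_pairs {
    ip_address = "192.168.16.8/20"
  }
  allowed_address_pairs {
    ip_address = "192.168.16.9/20"
  }
}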
  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id  = (known after apply)
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)
      + external_fixed_ip         (known after apply)
    }
terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-06-22 19:09:52.736862 | orchestrator | 19:09:52.736 STDOUT terraform:  + description = "wireguard" 2025-06-22 19:09:52.736866 | orchestrator | 19:09:52.736 STDOUT terraform:  + direction = "ingress" 2025-06-22 19:09:52.736870 | orchestrator | 19:09:52.736 STDOUT terraform:  + ethertype = "IPv4" 2025-06-22 19:09:52.736873 | orchestrator | 19:09:52.736 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.736877 | orchestrator | 19:09:52.736 STDOUT terraform:  + port_range_max = 51820 2025-06-22 19:09:52.736881 | orchestrator | 19:09:52.736 STDOUT terraform:  + port_range_min = 51820 2025-06-22 19:09:52.736884 | orchestrator | 19:09:52.736 STDOUT terraform:  + protocol = "udp" 2025-06-22 19:09:52.736888 | orchestrator | 19:09:52.736 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:09:52.736892 | orchestrator | 19:09:52.736 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-06-22 19:09:52.736896 | orchestrator | 19:09:52.736 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-22 19:09:52.736899 | orchestrator | 19:09:52.736 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-22 19:09:52.736906 | orchestrator | 19:09:52.736 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-22 19:09:52.736910 | orchestrator | 19:09:52.736 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-22 19:09:52.736913 | orchestrator | 19:09:52.736 STDOUT terraform:  } 2025-06-22 19:09:52.736917 | orchestrator | 19:09:52.736 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-06-22 19:09:52.736921 | orchestrator | 19:09:52.736 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-06-22 19:09:52.736925 | orchestrator | 19:09:52.736 STDOUT terraform:  + direction = "ingress" 2025-06-22 19:09:52.736928 | orchestrator | 19:09:52.736 STDOUT terraform:  + ethertype = "IPv4" 2025-06-22 19:09:52.736935 | orchestrator | 19:09:52.736 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.736939 | orchestrator | 19:09:52.736 STDOUT terraform:  + protocol = "tcp" 2025-06-22 19:09:52.736943 | orchestrator | 19:09:52.736 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:09:52.736947 | orchestrator | 19:09:52.736 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-06-22 19:09:52.736950 | orchestrator | 19:09:52.736 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-22 19:09:52.736956 | orchestrator | 19:09:52.736 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-06-22 19:09:52.736960 | orchestrator | 19:09:52.736 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-22 19:09:52.739151 | orchestrator | 19:09:52.736 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-22 19:09:52.739162 | orchestrator | 19:09:52.736 STDOUT terraform:  } 2025-06-22 19:09:52.739175 | orchestrator | 19:09:52.737 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-06-22 19:09:52.739179 | orchestrator | 19:09:52.737 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-06-22 19:09:52.739183 | orchestrator | 19:09:52.737 STDOUT terraform:  + direction = "ingress" 2025-06-22 19:09:52.739187 | orchestrator | 19:09:52.737 STDOUT terraform:  
+ ethertype = "IPv4" 2025-06-22 19:09:52.739190 | orchestrator | 19:09:52.737 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.739194 | orchestrator | 19:09:52.737 STDOUT terraform:  + protocol = "udp" 2025-06-22 19:09:52.739198 | orchestrator | 19:09:52.737 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:09:52.739201 | orchestrator | 19:09:52.737 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-06-22 19:09:52.739205 | orchestrator | 19:09:52.737 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-22 19:09:52.739209 | orchestrator | 19:09:52.737 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-06-22 19:09:52.739212 | orchestrator | 19:09:52.737 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-22 19:09:52.739216 | orchestrator | 19:09:52.737 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-22 19:09:52.739220 | orchestrator | 19:09:52.737 STDOUT terraform:  } 2025-06-22 19:09:52.739224 | orchestrator | 19:09:52.737 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-06-22 19:09:52.739227 | orchestrator | 19:09:52.737 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-06-22 19:09:52.739231 | orchestrator | 19:09:52.737 STDOUT terraform:  + direction = "ingress" 2025-06-22 19:09:52.739235 | orchestrator | 19:09:52.737 STDOUT terraform:  + ethertype = "IPv4" 2025-06-22 19:09:52.739238 | orchestrator | 19:09:52.737 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.739242 | orchestrator | 19:09:52.737 STDOUT terraform:  + protocol = "icmp" 2025-06-22 19:09:52.739255 | orchestrator | 19:09:52.737 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:09:52.739259 | orchestrator | 19:09:52.737 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-06-22 19:09:52.739263 | orchestrator | 19:09:52.737 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-22 19:09:52.739266 | orchestrator | 19:09:52.737 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-22 19:09:52.739270 | orchestrator | 19:09:52.737 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-22 19:09:52.739274 | orchestrator | 19:09:52.737 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-22 19:09:52.739278 | orchestrator | 19:09:52.737 STDOUT terraform:  } 2025-06-22 19:09:52.739281 | orchestrator | 19:09:52.737 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2025-06-22 19:09:52.739285 | orchestrator | 19:09:52.737 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" { 2025-06-22 19:09:52.739289 | orchestrator | 19:09:52.737 STDOUT terraform:  + direction = "ingress" 2025-06-22 19:09:52.739293 | orchestrator | 19:09:52.737 STDOUT terraform:  + ethertype = "IPv4" 2025-06-22 19:09:52.739296 | orchestrator | 19:09:52.737 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.742037 | orchestrator | 19:09:52.737 STDOUT terraform:  + protocol = "tcp" 2025-06-22 19:09:52.742050 | orchestrator | 19:09:52.740 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:09:52.742054 | orchestrator | 19:09:52.740 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-06-22 19:09:52.742058 | orchestrator | 19:09:52.740 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-22 
19:09:52.742062 | orchestrator | 19:09:52.740 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-22 19:09:52.742066 | orchestrator | 19:09:52.740 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-22 19:09:52.742069 | orchestrator | 19:09:52.740 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-22 19:09:52.742073 | orchestrator | 19:09:52.740 STDOUT terraform:  } 2025-06-22 19:09:52.742077 | orchestrator | 19:09:52.740 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created 2025-06-22 19:09:52.742081 | orchestrator | 19:09:52.740 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" { 2025-06-22 19:09:52.742085 | orchestrator | 19:09:52.740 STDOUT terraform:  + direction = "ingress" 2025-06-22 19:09:52.742088 | orchestrator | 19:09:52.740 STDOUT terraform:  + ethertype = "IPv4" 2025-06-22 19:09:52.742092 | orchestrator | 19:09:52.740 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.742096 | orchestrator | 19:09:52.740 STDOUT terraform:  + protocol = "udp" 2025-06-22 19:09:52.742099 | orchestrator | 19:09:52.740 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:09:52.742103 | orchestrator | 19:09:52.740 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-06-22 19:09:52.742107 | orchestrator | 19:09:52.740 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-22 19:09:52.742117 | orchestrator | 19:09:52.740 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-22 19:09:52.742121 | orchestrator | 19:09:52.740 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-22 19:09:52.742125 | orchestrator | 19:09:52.740 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-22 19:09:52.742128 | orchestrator | 19:09:52.740 STDOUT terraform:  } 2025-06-22 19:09:52.742132 | orchestrator | 19:09:52.740 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created 2025-06-22 19:09:52.742136 | orchestrator | 19:09:52.740 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" { 2025-06-22 19:09:52.742140 | orchestrator | 19:09:52.741 STDOUT terraform:  + direction = "ingress" 2025-06-22 19:09:52.742144 | orchestrator | 19:09:52.741 STDOUT terraform:  + ethertype = "IPv4" 2025-06-22 19:09:52.742148 | orchestrator | 19:09:52.741 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.742151 | orchestrator | 19:09:52.741 STDOUT terraform:  + protocol = "icmp" 2025-06-22 19:09:52.742155 | orchestrator | 19:09:52.741 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:09:52.742159 | orchestrator | 19:09:52.741 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-06-22 19:09:52.742162 | orchestrator | 19:09:52.741 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-22 19:09:52.742188 | orchestrator | 19:09:52.741 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-22 19:09:52.742192 | orchestrator | 19:09:52.741 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-22 19:09:52.745058 | orchestrator | 19:09:52.741 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-22 19:09:52.745067 | orchestrator | 19:09:52.742 STDOUT terraform:  } 2025-06-22 19:09:52.745071 | orchestrator | 19:09:52.742 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created 2025-06-22 19:09:52.745075 | orchestrator | 
19:09:52.742 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" { 2025-06-22 19:09:52.745079 | orchestrator | 19:09:52.742 STDOUT terraform:  + description = "vrrp" 2025-06-22 19:09:52.745083 | orchestrator | 19:09:52.742 STDOUT terraform:  + direction = "ingress" 2025-06-22 19:09:52.745087 | orchestrator | 19:09:52.742 STDOUT terraform:  + ethertype = "IPv4" 2025-06-22 19:09:52.745114 | orchestrator | 19:09:52.742 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.745118 | orchestrator | 19:09:52.742 STDOUT terraform:  + protocol = "112" 2025-06-22 19:09:52.745122 | orchestrator | 19:09:52.742 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:09:52.745126 | orchestrator | 19:09:52.742 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-06-22 19:09:52.745129 | orchestrator | 19:09:52.742 STDOUT terraform:  + remote_group_id = (known after apply) 2025-06-22 19:09:52.745133 | orchestrator | 19:09:52.742 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-06-22 19:09:52.745143 | orchestrator | 19:09:52.742 STDOUT terraform:  + security_group_id = (known after apply) 2025-06-22 19:09:52.745147 | orchestrator | 19:09:52.742 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-22 19:09:52.745151 | orchestrator | 19:09:52.742 STDOUT terraform:  } 2025-06-22 19:09:52.745155 | orchestrator | 19:09:52.742 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created 2025-06-22 19:09:52.745158 | orchestrator | 19:09:52.742 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" { 2025-06-22 19:09:52.745162 | orchestrator | 19:09:52.743 STDOUT terraform:  + all_tags = (known after apply) 2025-06-22 19:09:52.745189 | orchestrator | 19:09:52.743 STDOUT terraform:  + description = "management security group" 2025-06-22 19:09:52.745193 | orchestrator | 19:09:52.743 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.745197 | orchestrator | 19:09:52.743 STDOUT terraform:  + name = "testbed-management" 2025-06-22 19:09:52.745201 | orchestrator | 19:09:52.743 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:09:52.745204 | orchestrator | 19:09:52.743 STDOUT terraform:  + stateful = (known after apply) 2025-06-22 19:09:52.745208 | orchestrator | 19:09:52.743 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-22 19:09:52.745212 | orchestrator | 19:09:52.743 STDOUT terraform:  } 2025-06-22 19:09:52.745215 | orchestrator | 19:09:52.743 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created 2025-06-22 19:09:52.745219 | orchestrator | 19:09:52.743 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" { 2025-06-22 19:09:52.745226 | orchestrator | 19:09:52.743 STDOUT terraform:  + all_tags = (known after apply) 2025-06-22 19:09:52.745229 | orchestrator | 19:09:52.743 STDOUT terraform:  + description = "node security group" 2025-06-22 19:09:52.745233 | orchestrator | 19:09:52.743 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.745237 | orchestrator | 19:09:52.743 STDOUT terraform:  + name = "testbed-node" 2025-06-22 19:09:52.745241 | orchestrator | 19:09:52.743 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:09:52.745244 | orchestrator | 19:09:52.743 STDOUT terraform:  + stateful = (known after apply) 2025-06-22 19:09:52.745248 | orchestrator | 19:09:52.743 STDOUT terraform:  + tenant_id = (known after 
apply) 2025-06-22 19:09:52.745252 | orchestrator | 19:09:52.743 STDOUT terraform:  } 2025-06-22 19:09:52.745261 | orchestrator | 19:09:52.743 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created 2025-06-22 19:09:52.745265 | orchestrator | 19:09:52.743 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" { 2025-06-22 19:09:52.745268 | orchestrator | 19:09:52.743 STDOUT terraform:  + all_tags = (known after apply) 2025-06-22 19:09:52.745272 | orchestrator | 19:09:52.743 STDOUT terraform:  + cidr = "192.168.16.0/20" 2025-06-22 19:09:52.745276 | orchestrator | 19:09:52.743 STDOUT terraform:  + dns_nameservers = [ 2025-06-22 19:09:52.745280 | orchestrator | 19:09:52.743 STDOUT terraform:  + "8.8.8.8", 2025-06-22 19:09:52.745283 | orchestrator | 19:09:52.743 STDOUT terraform:  + "9.9.9.9", 2025-06-22 19:09:52.745294 | orchestrator | 19:09:52.743 STDOUT terraform:  ] 2025-06-22 19:09:52.745298 | orchestrator | 19:09:52.743 STDOUT terraform:  + enable_dhcp = true 2025-06-22 19:09:52.745301 | orchestrator | 19:09:52.743 STDOUT terraform:  + gateway_ip = (known after apply) 2025-06-22 19:09:52.745305 | orchestrator | 19:09:52.743 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.745309 | orchestrator | 19:09:52.743 STDOUT terraform:  + ip_version = 4 2025-06-22 19:09:52.745313 | orchestrator | 19:09:52.743 STDOUT terraform:  + ipv6_address_mode = (known after apply) 2025-06-22 19:09:52.745316 | orchestrator | 19:09:52.743 STDOUT terraform:  + ipv6_ra_mode = (known after apply) 2025-06-22 19:09:52.745320 | orchestrator | 19:09:52.743 STDOUT terraform:  + name = "subnet-testbed-management" 2025-06-22 19:09:52.745324 | orchestrator | 19:09:52.744 STDOUT terraform:  + network_id = (known after apply) 2025-06-22 19:09:52.745327 | orchestrator | 19:09:52.744 STDOUT terraform:  + no_gateway = false 2025-06-22 19:09:52.745331 | orchestrator | 19:09:52.744 STDOUT terraform:  + region = (known after apply) 2025-06-22 19:09:52.745335 | orchestrator | 19:09:52.744 STDOUT terraform:  + service_types = (known after apply) 2025-06-22 19:09:52.745338 | orchestrator | 19:09:52.744 STDOUT terraform:  + tenant_id = (known after apply) 2025-06-22 19:09:52.745342 | orchestrator | 19:09:52.744 STDOUT terraform:  + allocation_pool { 2025-06-22 19:09:52.745346 | orchestrator | 19:09:52.744 STDOUT terraform:  + end = "192.168.31.250" 2025-06-22 19:09:52.745349 | orchestrator | 19:09:52.744 STDOUT terraform:  + start = "192.168.31.200" 2025-06-22 19:09:52.745353 | orchestrator | 19:09:52.744 STDOUT terraform:  } 2025-06-22 19:09:52.745357 | orchestrator | 19:09:52.744 STDOUT terraform:  } 2025-06-22 19:09:52.745361 | orchestrator | 19:09:52.744 STDOUT terraform:  # terraform_data.image will be created 2025-06-22 19:09:52.745365 | orchestrator | 19:09:52.744 STDOUT terraform:  + resource "terraform_data" "image" { 2025-06-22 19:09:52.745368 | orchestrator | 19:09:52.744 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.745372 | orchestrator | 19:09:52.744 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-06-22 19:09:52.745376 | orchestrator | 19:09:52.744 STDOUT terraform:  + output = (known after apply) 2025-06-22 19:09:52.745379 | orchestrator | 19:09:52.744 STDOUT terraform:  } 2025-06-22 19:09:52.745383 | orchestrator | 19:09:52.744 STDOUT terraform:  # terraform_data.image_node will be created 2025-06-22 19:09:52.745389 | orchestrator | 19:09:52.744 STDOUT terraform:  + resource "terraform_data" "image_node" { 2025-06-22 
19:09:52.745393 | orchestrator | 19:09:52.744 STDOUT terraform:  + id = (known after apply) 2025-06-22 19:09:52.745397 | orchestrator | 19:09:52.744 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-06-22 19:09:52.745400 | orchestrator | 19:09:52.744 STDOUT terraform:  + output = (known after apply) 2025-06-22 19:09:52.745404 | orchestrator | 19:09:52.744 STDOUT terraform:  } 2025-06-22 19:09:52.745408 | orchestrator | 19:09:52.744 STDOUT terraform: Plan: 64 to add, 0 to change, 0 to destroy. 2025-06-22 19:09:52.745411 | orchestrator | 19:09:52.744 STDOUT terraform: Changes to Outputs: 2025-06-22 19:09:52.745420 | orchestrator | 19:09:52.744 STDOUT terraform:  + manager_address = (sensitive value) 2025-06-22 19:09:52.745423 | orchestrator | 19:09:52.744 STDOUT terraform:  + private_key = (sensitive value) 2025-06-22 19:09:52.933183 | orchestrator | 19:09:52.931 STDOUT terraform: terraform_data.image: Creating... 2025-06-22 19:09:52.933257 | orchestrator | 19:09:52.932 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=4b86a093-b678-87c9-3c49-e804d3ac5ea4] 2025-06-22 19:09:52.934137 | orchestrator | 19:09:52.934 STDOUT terraform: terraform_data.image_node: Creating... 2025-06-22 19:09:52.935249 | orchestrator | 19:09:52.935 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=7c965f91-321c-1047-9b45-7aef752e9a2d] 2025-06-22 19:09:52.961714 | orchestrator | 19:09:52.961 STDOUT terraform: data.openstack_images_image_v2.image: Reading... 2025-06-22 19:09:52.961794 | orchestrator | 19:09:52.961 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading... 2025-06-22 19:09:52.966269 | orchestrator | 19:09:52.966 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2025-06-22 19:09:52.970036 | orchestrator | 19:09:52.969 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2025-06-22 19:09:52.994106 | orchestrator | 19:09:52.980 STDOUT terraform: openstack_networking_network_v2.net_management: Creating... 2025-06-22 19:09:52.994185 | orchestrator | 19:09:52.980 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating... 2025-06-22 19:09:52.994193 | orchestrator | 19:09:52.980 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2025-06-22 19:09:53.013008 | orchestrator | 19:09:53.002 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2025-06-22 19:09:53.013077 | orchestrator | 19:09:53.003 STDOUT terraform: openstack_compute_keypair_v2.key: Creating... 2025-06-22 19:09:53.013084 | orchestrator | 19:09:53.003 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2025-06-22 19:09:53.445523 | orchestrator | 19:09:53.445 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 0s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990] 2025-06-22 19:09:53.456310 | orchestrator | 19:09:53.455 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2025-06-22 19:09:53.516950 | orchestrator | 19:09:53.516 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed] 2025-06-22 19:09:53.526494 | orchestrator | 19:09:53.526 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating... 
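For orientation, the node_port_management entries in the plan above (four allowed-address-pairs covering 192.168.112.0/20 and the VIP addresses 192.168.16.254/.8/.9, plus one fixed management IP per node) correspond roughly to HCL of the following shape. This is a reconstruction from the plan output, not the testbed repository's actual source; the count, the resource references, and the address formula are assumptions.

resource "openstack_networking_port_v2" "node_port_management" {
  count      = 6                                            # node_port_management[0]..[5] in the plan above
  network_id = openstack_networking_network_v2.net_management.id

  # Fixed management address per node; [4] got 192.168.16.14 and [5] 192.168.16.15,
  # so an offset of 10 + count.index is assumed here.
  fixed_ip {
    subnet_id  = openstack_networking_subnet_v2.subnet_management.id
    ip_address = cidrhost("192.168.16.0/20", 10 + count.index)
  }

  # The same four allowed-address-pairs appear on every port in the plan.
  dynamic "allowed_address_pairs" {
    for_each = ["192.168.112.0/20", "192.168.16.254/20", "192.168.16.8/20", "192.168.16.9/20"]
    content {
      ip_address = allowed_address_pairs.value
    }
  }
}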
2025-06-22 19:09:53.787530 | orchestrator | 19:09:53.787 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 1s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990] 2025-06-22 19:09:53.795610 | orchestrator | 19:09:53.795 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2025-06-22 19:09:59.085527 | orchestrator | 19:09:59.085 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 6s [id=f8bc7ceb-e6e7-44d9-94ca-8dc2693fede0] 2025-06-22 19:09:59.097700 | orchestrator | 19:09:59.097 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 2025-06-22 19:10:02.965441 | orchestrator | 19:10:02.965 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Still creating... [10s elapsed] 2025-06-22 19:10:02.970538 | orchestrator | 19:10:02.970 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Still creating... [10s elapsed] 2025-06-22 19:10:02.980904 | orchestrator | 19:10:02.980 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Still creating... [10s elapsed] 2025-06-22 19:10:02.981992 | orchestrator | 19:10:02.981 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Still creating... [10s elapsed] 2025-06-22 19:10:03.004365 | orchestrator | 19:10:03.004 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Still creating... [10s elapsed] 2025-06-22 19:10:03.004496 | orchestrator | 19:10:03.004 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Still creating... [10s elapsed] 2025-06-22 19:10:03.457398 | orchestrator | 19:10:03.457 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Still creating... [10s elapsed] 2025-06-22 19:10:03.527809 | orchestrator | 19:10:03.527 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Still creating... [10s elapsed] 2025-06-22 19:10:03.614675 | orchestrator | 19:10:03.613 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 11s [id=0d04e2ba-3abe-44e6-a0ea-4a597e46ae81] 2025-06-22 19:10:03.629825 | orchestrator | 19:10:03.629 STDOUT terraform: local_sensitive_file.id_rsa: Creating... 2025-06-22 19:10:03.634070 | orchestrator | 19:10:03.633 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=50450a8e6418a2c5114503681859860b8f6488f6] 2025-06-22 19:10:03.647522 | orchestrator | 19:10:03.644 STDOUT terraform: local_file.id_rsa_pub: Creating... 2025-06-22 19:10:03.647579 | orchestrator | 19:10:03.645 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 11s [id=1702d6d9-f6d5-467e-9c44-3c93c3ac891d] 2025-06-22 19:10:03.662077 | orchestrator | 19:10:03.653 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=cf2bee928dfdfb9bd5767f61eba72d48cd6c1e9d] 2025-06-22 19:10:03.666613 | orchestrator | 19:10:03.666 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 11s [id=b25991b3-37fd-407a-b13b-d136271ca727] 2025-06-22 19:10:03.676599 | orchestrator | 19:10:03.676 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 11s [id=78e15a4e-0b6b-4de0-bd2a-417fc55af8a3] 2025-06-22 19:10:03.678932 | orchestrator | 19:10:03.678 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 2025-06-22 19:10:03.679478 | orchestrator | 19:10:03.679 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 
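The terraform_data.image / image_node resources and the image data lookups above (input "Ubuntu 24.04", both reads resolving to cd9ae1ce-…) suggest a pattern like the sketch below: the image name is funnelled through terraform_data so that a name change forces replacement of dependents, and the actual UUID is resolved via a data source. A sketch under assumptions — the variable name and the most_recent flag are not visible in the log.

variable "image" {
  type    = string
  default = "Ubuntu 24.04"          # the input value shown in the plan and apply output
}

resource "terraform_data" "image" {
  input = var.image
}

data "openstack_images_image_v2" "image" {
  # Referencing the terraform_data output defers the read until after its creation,
  # matching the ordering seen in the log above.
  name        = terraform_data.image.output
  most_recent = true                # assumption
}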
2025-06-22 19:10:03.680039 | orchestrator | 19:10:03.679 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating... 2025-06-22 19:10:03.683315 | orchestrator | 19:10:03.683 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 11s [id=71e43d47-057b-4609-853f-9ccf72c5a295] 2025-06-22 19:10:03.693829 | orchestrator | 19:10:03.693 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 2025-06-22 19:10:03.693961 | orchestrator | 19:10:03.693 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating... 2025-06-22 19:10:03.698293 | orchestrator | 19:10:03.697 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 11s [id=61868cbd-84da-463e-9017-284301fda41a] 2025-06-22 19:10:03.711834 | orchestrator | 19:10:03.709 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 2025-06-22 19:10:03.728086 | orchestrator | 19:10:03.727 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 11s [id=bbdef6ad-891d-4656-ac9b-bc24d19b561e] 2025-06-22 19:10:03.735032 | orchestrator | 19:10:03.734 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 2025-06-22 19:10:03.797894 | orchestrator | 19:10:03.797 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Still creating... [10s elapsed] 2025-06-22 19:10:04.096979 | orchestrator | 19:10:04.096 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 10s [id=a49b6e77-acd0-4f36-887b-4e4ec75cdfa4] 2025-06-22 19:10:04.148609 | orchestrator | 19:10:04.148 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 10s [id=67ec265c-9b93-46b0-85f4-348a71cc884e] 2025-06-22 19:10:09.099121 | orchestrator | 19:10:09.098 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Still creating... [10s elapsed] 2025-06-22 19:10:09.471348 | orchestrator | 19:10:09.397 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 10s [id=53c0873c-2742-4dc0-b64a-13866304fad2] 2025-06-22 19:10:09.758689 | orchestrator | 19:10:09.758 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 6s [id=987edc8c-68fd-441a-aa74-3f6244e5a45b] 2025-06-22 19:10:09.766939 | orchestrator | 19:10:09.766 STDOUT terraform: openstack_networking_router_v2.router: Creating... 2025-06-22 19:10:13.680614 | orchestrator | 19:10:13.680 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Still creating... [10s elapsed] 2025-06-22 19:10:13.681907 | orchestrator | 19:10:13.681 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Still creating... [10s elapsed] 2025-06-22 19:10:13.684023 | orchestrator | 19:10:13.683 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Still creating... [10s elapsed] 2025-06-22 19:10:13.694498 | orchestrator | 19:10:13.694 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Still creating... [10s elapsed] 2025-06-22 19:10:13.707778 | orchestrator | 19:10:13.707 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Still creating... [10s elapsed] 2025-06-22 19:10:13.736441 | orchestrator | 19:10:13.736 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Still creating... 
[10s elapsed] 2025-06-22 19:10:14.113621 | orchestrator | 19:10:14.113 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 10s [id=1a9b05ce-765c-474d-953e-4ab57c149179] 2025-06-22 19:10:14.159038 | orchestrator | 19:10:14.158 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 10s [id=807aa02f-11b0-4381-a55e-c1f77ace1900] 2025-06-22 19:10:14.186063 | orchestrator | 19:10:14.185 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 10s [id=81f2e499-4268-4bd5-a5ff-46d49ba2fab9] 2025-06-22 19:10:14.210592 | orchestrator | 19:10:14.210 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 10s [id=abc8cf8e-645f-44ba-8ef9-2fedd7dd22d1] 2025-06-22 19:10:14.216951 | orchestrator | 19:10:14.216 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 10s [id=7b905425-1e63-4e6b-b5e5-61336c9bc17c] 2025-06-22 19:10:14.235221 | orchestrator | 19:10:14.234 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 10s [id=2156dda8-7e6f-4624-a0c0-e6117c9c49b9] 2025-06-22 19:10:17.785206 | orchestrator | 19:10:17.784 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 8s [id=d4b2b44c-4de4-4d1b-b6d5-5fff0b1f748e] 2025-06-22 19:10:17.792384 | orchestrator | 19:10:17.792 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating... 2025-06-22 19:10:17.793139 | orchestrator | 19:10:17.792 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating... 2025-06-22 19:10:17.794369 | orchestrator | 19:10:17.794 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating... 2025-06-22 19:10:18.034692 | orchestrator | 19:10:18.033 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=c7a418ab-55fc-46f5-b7c1-8e1bce29ee02] 2025-06-22 19:10:18.055252 | orchestrator | 19:10:18.054 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 2025-06-22 19:10:18.055300 | orchestrator | 19:10:18.055 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 2025-06-22 19:10:18.055677 | orchestrator | 19:10:18.055 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 2025-06-22 19:10:18.058175 | orchestrator | 19:10:18.057 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 2025-06-22 19:10:18.058228 | orchestrator | 19:10:18.058 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 2025-06-22 19:10:18.062790 | orchestrator | 19:10:18.061 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating... 2025-06-22 19:10:18.218488 | orchestrator | 19:10:18.218 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=8f2ad403-6472-4513-ab98-2c7169c1a04a] 2025-06-22 19:10:18.238078 | orchestrator | 19:10:18.237 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=9ac82095-c6f4-45e2-8554-eb5807ec5433] 2025-06-22 19:10:18.247836 | orchestrator | 19:10:18.247 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 
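The router, management subnet, and router interface created above are fully described by the plan earlier in the log; as HCL they look roughly like this (the external_network_id is hard-coded here exactly as logged, although the real source presumably derives it from a variable or data source):

resource "openstack_networking_router_v2" "router" {
  name                    = "testbed"
  external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
  availability_zone_hints = ["nova"]
}

resource "openstack_networking_subnet_v2" "subnet_management" {
  name            = "subnet-testbed-management"
  network_id      = openstack_networking_network_v2.net_management.id
  cidr            = "192.168.16.0/20"
  ip_version      = 4
  enable_dhcp     = true
  dns_nameservers = ["8.8.8.8", "9.9.9.9"]

  # DHCP hands out addresses well above the statically assigned node and VIP range.
  allocation_pool {
    start = "192.168.31.200"
    end   = "192.168.31.250"
  }
}

resource "openstack_networking_router_interface_v2" "router_interface" {
  router_id = openstack_networking_router_v2.router.id
  subnet_id = openstack_networking_subnet_v2.subnet_management.id
}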
2025-06-22 19:10:18.250926 | orchestrator | 19:10:18.250 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 2025-06-22 19:10:18.251033 | orchestrator | 19:10:18.250 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 2025-06-22 19:10:18.253666 | orchestrator | 19:10:18.253 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating... 2025-06-22 19:10:18.405133 | orchestrator | 19:10:18.404 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=3e42596b-57fb-40e1-a869-9401602dd63c] 2025-06-22 19:10:18.417932 | orchestrator | 19:10:18.417 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 2025-06-22 19:10:18.578003 | orchestrator | 19:10:18.577 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=5aae30d1-ab21-4927-a237-96a60a40ada3] 2025-06-22 19:10:18.591986 | orchestrator | 19:10:18.591 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating... 2025-06-22 19:10:18.700625 | orchestrator | 19:10:18.700 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=d0c2961e-4290-45d2-a9b5-0313e93a29ba] 2025-06-22 19:10:18.723858 | orchestrator | 19:10:18.723 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating... 2025-06-22 19:10:18.937751 | orchestrator | 19:10:18.937 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=61ae14fa-e296-4742-8ba0-b0f2e81fd8c2] 2025-06-22 19:10:18.947753 | orchestrator | 19:10:18.947 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=28fd629e-92a9-4da3-9127-a5c2d2aeb6bc] 2025-06-22 19:10:18.950788 | orchestrator | 19:10:18.950 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating... 2025-06-22 19:10:18.965947 | orchestrator | 19:10:18.965 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating... 2025-06-22 19:10:19.122235 | orchestrator | 19:10:19.121 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=b930d2bf-f29f-4490-84a4-11cbce8946c9] 2025-06-22 19:10:19.137271 | orchestrator | 19:10:19.137 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating... 
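The security groups and rules being created here match the plan output one-to-one; two representative rules are sketched below (which group the VRRP rule is attached to is not visible in the plan, so the reference to security_group_node is an assumption):

resource "openstack_networking_secgroup_v2" "security_group_management" {
  name        = "testbed-management"
  description = "management security group"
}

resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
  description       = "ssh"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  port_range_min    = 22
  port_range_max    = 22
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_management.id
}

resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
  description       = "vrrp"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "112"          # IP protocol number 112 = VRRP
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_node.id   # assumption
}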
2025-06-22 19:10:19.300386 | orchestrator | 19:10:19.300 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=03c82df5-eade-4a01-838f-dce3bb271372] 2025-06-22 19:10:19.306069 | orchestrator | 19:10:19.305 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=6c5436d9-febc-44ff-a73b-854596deeff1] 2025-06-22 19:10:23.726079 | orchestrator | 19:10:23.725 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 6s [id=fd5b3dfc-5023-49fc-a168-6716d582af0e] 2025-06-22 19:10:24.042521 | orchestrator | 19:10:24.041 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 6s [id=ddf2ed67-5fe1-40ac-81f5-8dc04b8b1393] 2025-06-22 19:10:24.247231 | orchestrator | 19:10:24.246 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 5s [id=80267aca-d2ca-43c7-ad44-c77ebbd5584a] 2025-06-22 19:10:24.514608 | orchestrator | 19:10:24.514 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 6s [id=ef6eae2a-e410-4aaf-a343-ed44e1149d7d] 2025-06-22 19:10:24.601423 | orchestrator | 19:10:24.601 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 6s [id=29a8cc1c-77dc-450b-a102-48e06d5c126b] 2025-06-22 19:10:24.678085 | orchestrator | 19:10:24.677 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 6s [id=979d5280-fab9-4b07-bc53-541f45b8120e] 2025-06-22 19:10:25.395286 | orchestrator | 19:10:25.394 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 7s [id=e015b595-df24-430b-803d-f31c1e74771b] 2025-06-22 19:10:25.404898 | orchestrator | 19:10:25.404 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating... 2025-06-22 19:10:25.544779 | orchestrator | 19:10:25.544 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 7s [id=845fc830-b130-45f8-90f4-2dad714b512f] 2025-06-22 19:10:25.569699 | orchestrator | 19:10:25.569 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating... 2025-06-22 19:10:25.571396 | orchestrator | 19:10:25.571 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating... 2025-06-22 19:10:25.586220 | orchestrator | 19:10:25.586 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating... 2025-06-22 19:10:25.586320 | orchestrator | 19:10:25.586 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating... 2025-06-22 19:10:25.595560 | orchestrator | 19:10:25.595 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating... 2025-06-22 19:10:25.600649 | orchestrator | 19:10:25.600 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating... 2025-06-22 19:10:31.695121 | orchestrator | 19:10:31.694 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 7s [id=f37b8ef6-9e73-4f02-a28c-02a5398e369e] 2025-06-22 19:10:31.709487 | orchestrator | 19:10:31.709 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 2025-06-22 19:10:31.712787 | orchestrator | 19:10:31.712 STDOUT terraform: local_file.inventory: Creating... 2025-06-22 19:10:31.718709 | orchestrator | 19:10:31.718 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating... 
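The manager floating IP and its association created above are not visible in the plan excerpt shown earlier, so the following is only a rough sketch; in particular the pool name is an assumption:

resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
  pool = "public"                    # assumption: the external network / pool name is not in this log
}

resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
  floating_ip = openstack_networking_floatingip_v2.manager_floating_ip.address
  port_id     = openstack_networking_port_v2.manager_port_management.id
}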
2025-06-22 19:10:31.720269 | orchestrator | 19:10:31.719 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=33c76e0d0d49332af33766aa01a2612b7853d7c7] 2025-06-22 19:10:31.723724 | orchestrator | 19:10:31.723 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=6bc617df006f0adead20eb86364ec2602cd35d6d] 2025-06-22 19:10:32.455616 | orchestrator | 19:10:32.455 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 0s [id=f37b8ef6-9e73-4f02-a28c-02a5398e369e] 2025-06-22 19:10:35.570911 | orchestrator | 19:10:35.570 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed] 2025-06-22 19:10:35.575039 | orchestrator | 19:10:35.574 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed] 2025-06-22 19:10:35.587526 | orchestrator | 19:10:35.587 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed] 2025-06-22 19:10:35.587755 | orchestrator | 19:10:35.587 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed] 2025-06-22 19:10:35.599614 | orchestrator | 19:10:35.599 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed] 2025-06-22 19:10:35.601858 | orchestrator | 19:10:35.601 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed] 2025-06-22 19:10:45.571427 | orchestrator | 19:10:45.571 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed] 2025-06-22 19:10:45.575443 | orchestrator | 19:10:45.575 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed] 2025-06-22 19:10:45.587729 | orchestrator | 19:10:45.587 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed] 2025-06-22 19:10:45.587899 | orchestrator | 19:10:45.587 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed] 2025-06-22 19:10:45.600186 | orchestrator | 19:10:45.599 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2025-06-22 19:10:45.602579 | orchestrator | 19:10:45.602 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed] 2025-06-22 19:10:46.007395 | orchestrator | 19:10:46.006 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 20s [id=2694c716-bcee-4f6c-865d-ec45b7f08f7b] 2025-06-22 19:10:46.037748 | orchestrator | 19:10:46.037 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 20s [id=313f76dd-db22-4a01-9312-faf3c0eab63e] 2025-06-22 19:10:46.094343 | orchestrator | 19:10:46.093 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 20s [id=bd2f1fcd-b9e2-425d-828e-0ed32ae9d2d0] 2025-06-22 19:10:46.181660 | orchestrator | 19:10:46.181 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 20s [id=a9ea7f19-ddd4-4f59-bd18-9c5987a97e17] 2025-06-22 19:10:55.588233 | orchestrator | 19:10:55.587 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed] 2025-06-22 19:10:55.601012 | orchestrator | 19:10:55.600 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... 
[30s elapsed] 2025-06-22 19:10:56.101769 | orchestrator | 19:10:56.101 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 30s [id=5b028046-3f55-499f-8ae8-f156da6d474a] 2025-06-22 19:10:56.381299 | orchestrator | 19:10:56.380 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 30s [id=0f9216d4-3f33-4e92-b982-ccd97070f375] 2025-06-22 19:10:56.408254 | orchestrator | 19:10:56.408 STDOUT terraform: null_resource.node_semaphore: Creating... 2025-06-22 19:10:56.410360 | orchestrator | 19:10:56.410 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=8090923731307904945] 2025-06-22 19:10:56.411067 | orchestrator | 19:10:56.410 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 2025-06-22 19:10:56.419764 | orchestrator | 19:10:56.419 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 2025-06-22 19:10:56.423584 | orchestrator | 19:10:56.423 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2025-06-22 19:10:56.439686 | orchestrator | 19:10:56.439 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 2025-06-22 19:10:56.439738 | orchestrator | 19:10:56.439 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 2025-06-22 19:10:56.439939 | orchestrator | 19:10:56.439 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 2025-06-22 19:10:56.445093 | orchestrator | 19:10:56.444 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 2025-06-22 19:10:56.446852 | orchestrator | 19:10:56.446 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 2025-06-22 19:10:56.452469 | orchestrator | 19:10:56.452 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 2025-06-22 19:10:56.462045 | orchestrator | 19:10:56.461 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating... 
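The nine node_volume_attachment resources land on node_server[3], [4], and [5] (the attachment ids below pair each volume with one of those three instances), so the underlying HCL is presumably something like the sketch below; the index arithmetic is inferred from the logged ids and is an assumption:

resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count       = 9                    # node_volume_attachment[0]..[8]
  volume_id   = openstack_blockstorage_volume_v3.node_volume[count.index].id
  # Volumes are spread over the three resource nodes: [0,3,6] -> node_server[3],
  # [1,4,7] -> node_server[4], [2,5,8] -> node_server[5] (assumed mapping).
  instance_id = openstack_compute_instance_v2.node_server[3 + count.index % 3].id
}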
2025-06-22 19:11:01.756652 | orchestrator | 19:11:01.756 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 6s [id=313f76dd-db22-4a01-9312-faf3c0eab63e/bbdef6ad-891d-4656-ac9b-bc24d19b561e] 2025-06-22 19:11:01.788585 | orchestrator | 19:11:01.788 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 6s [id=0f9216d4-3f33-4e92-b982-ccd97070f375/61868cbd-84da-463e-9017-284301fda41a] 2025-06-22 19:11:01.806185 | orchestrator | 19:11:01.805 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 6s [id=313f76dd-db22-4a01-9312-faf3c0eab63e/a49b6e77-acd0-4f36-887b-4e4ec75cdfa4] 2025-06-22 19:11:01.823816 | orchestrator | 19:11:01.823 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 6s [id=0f9216d4-3f33-4e92-b982-ccd97070f375/71e43d47-057b-4609-853f-9ccf72c5a295] 2025-06-22 19:11:01.830771 | orchestrator | 19:11:01.830 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 6s [id=a9ea7f19-ddd4-4f59-bd18-9c5987a97e17/67ec265c-9b93-46b0-85f4-348a71cc884e] 2025-06-22 19:11:01.858461 | orchestrator | 19:11:01.857 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 6s [id=a9ea7f19-ddd4-4f59-bd18-9c5987a97e17/0d04e2ba-3abe-44e6-a0ea-4a597e46ae81] 2025-06-22 19:11:01.886782 | orchestrator | 19:11:01.886 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 6s [id=0f9216d4-3f33-4e92-b982-ccd97070f375/b25991b3-37fd-407a-b13b-d136271ca727] 2025-06-22 19:11:01.915019 | orchestrator | 19:11:01.914 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 6s [id=a9ea7f19-ddd4-4f59-bd18-9c5987a97e17/78e15a4e-0b6b-4de0-bd2a-417fc55af8a3] 2025-06-22 19:11:03.192248 | orchestrator | 19:11:03.191 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 7s [id=313f76dd-db22-4a01-9312-faf3c0eab63e/1702d6d9-f6d5-467e-9c44-3c93c3ac891d] 2025-06-22 19:11:06.463660 | orchestrator | 19:11:06.463 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2025-06-22 19:11:16.464150 | orchestrator | 19:11:16.463 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2025-06-22 19:11:16.932881 | orchestrator | 19:11:16.932 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=092313f2-0951-41fc-9939-843298d08d5e] 2025-06-22 19:11:16.965252 | orchestrator | 19:11:16.964 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed. 
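The two outputs reported as sensitive in the plan and suppressed in the apply output below are presumably declared along these lines; the value expressions are assumptions, grounded only in the resources visible in this log:

output "manager_address" {
  value     = openstack_networking_floatingip_v2.manager_floating_ip.address
  sensitive = true
}

output "private_key" {
  value     = openstack_compute_keypair_v2.key.private_key
  sensitive = true
}

The subsequent "Fetch manager address" task presumably reads the first of these via terraform output.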
2025-06-22 19:11:16.965368 | orchestrator | 19:11:16.965 STDOUT terraform: Outputs: 2025-06-22 19:11:16.965398 | orchestrator | 19:11:16.965 STDOUT terraform: manager_address = 2025-06-22 19:11:16.965410 | orchestrator | 19:11:16.965 STDOUT terraform: private_key = 2025-06-22 19:11:17.068779 | orchestrator | ok: Runtime: 0:01:34.043851 2025-06-22 19:11:17.098229 | 2025-06-22 19:11:17.098371 | TASK [Fetch manager address] 2025-06-22 19:11:17.595440 | orchestrator | ok 2025-06-22 19:11:17.607522 | 2025-06-22 19:11:17.607799 | TASK [Set manager_host address] 2025-06-22 19:11:17.688396 | orchestrator | ok 2025-06-22 19:11:17.699208 | 2025-06-22 19:11:17.699352 | LOOP [Update ansible collections] 2025-06-22 19:11:24.580127 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-06-22 19:11:24.580573 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-06-22 19:11:24.580627 | orchestrator | Starting galaxy collection install process 2025-06-22 19:11:24.580652 | orchestrator | Process install dependency map 2025-06-22 19:11:24.580674 | orchestrator | Starting collection install process 2025-06-22 19:11:24.580695 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed04/.ansible/collections/ansible_collections/osism/commons' 2025-06-22 19:11:24.580719 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed04/.ansible/collections/ansible_collections/osism/commons 2025-06-22 19:11:24.580743 | orchestrator | osism.commons:999.0.0 was installed successfully 2025-06-22 19:11:24.580793 | orchestrator | ok: Item: commons Runtime: 0:00:06.542718 2025-06-22 19:11:28.814656 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-06-22 19:11:28.814772 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-06-22 19:11:28.814908 | orchestrator | Starting galaxy collection install process 2025-06-22 19:11:28.814951 | orchestrator | Process install dependency map 2025-06-22 19:11:28.814982 | orchestrator | Starting collection install process 2025-06-22 19:11:28.815011 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed04/.ansible/collections/ansible_collections/osism/services' 2025-06-22 19:11:28.815038 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed04/.ansible/collections/ansible_collections/osism/services 2025-06-22 19:11:28.815066 | orchestrator | osism.services:999.0.0 was installed successfully 2025-06-22 19:11:28.815127 | orchestrator | ok: Item: services Runtime: 0:00:03.920311 2025-06-22 19:11:28.839410 | 2025-06-22 19:11:28.839552 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-06-22 19:11:39.392794 | orchestrator | ok 2025-06-22 19:11:39.401476 | 2025-06-22 19:11:39.401585 | TASK [Wait a little longer for the manager so that everything is ready] 2025-06-22 19:12:39.449847 | orchestrator | ok 2025-06-22 19:12:39.465496 | 2025-06-22 19:12:39.465642 | TASK [Fetch manager ssh hostkey] 2025-06-22 19:12:41.067594 | orchestrator | Output suppressed because no_log was given 2025-06-22 19:12:41.083513 | 2025-06-22 19:12:41.083689 | TASK [Get ssh keypair from terraform environment] 2025-06-22 19:12:41.621041 | orchestrator | ok: Runtime: 0:00:00.009169 2025-06-22 19:12:41.632071 | 2025-06-22 19:12:41.632229 | TASK [Point out that the following task takes some time and does not give any output] 
2025-06-22 19:12:41.676689 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-06-22 19:12:41.685293 | 2025-06-22 19:12:41.685421 | TASK [Run manager part 0] 2025-06-22 19:12:43.035659 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-06-22 19:12:43.182256 | orchestrator | 2025-06-22 19:12:43.182309 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-06-22 19:12:43.182317 | orchestrator | 2025-06-22 19:12:43.182331 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-06-22 19:12:44.994153 | orchestrator | ok: [testbed-manager] 2025-06-22 19:12:44.994222 | orchestrator | 2025-06-22 19:12:44.994285 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-06-22 19:12:44.994305 | orchestrator | 2025-06-22 19:12:44.994319 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-22 19:12:48.395575 | orchestrator | ok: [testbed-manager] 2025-06-22 19:12:48.395742 | orchestrator | 2025-06-22 19:12:48.395765 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-06-22 19:12:49.070413 | orchestrator | ok: [testbed-manager] 2025-06-22 19:12:49.070501 | orchestrator | 2025-06-22 19:12:49.070519 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-06-22 19:12:49.230233 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:12:49.230362 | orchestrator | 2025-06-22 19:12:49.230386 | orchestrator | TASK [Update package cache] **************************************************** 2025-06-22 19:12:49.265220 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:12:49.265326 | orchestrator | 2025-06-22 19:12:49.265337 | orchestrator | TASK [Install required packages] *********************************************** 2025-06-22 19:12:49.292927 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:12:49.292996 | orchestrator | 2025-06-22 19:12:49.293006 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-06-22 19:12:49.323569 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:12:49.323615 | orchestrator | 2025-06-22 19:12:49.323624 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-06-22 19:12:49.382679 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:12:49.382768 | orchestrator | 2025-06-22 19:12:49.382787 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ****************************** 2025-06-22 19:12:49.433024 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:12:49.433072 | orchestrator | 2025-06-22 19:12:49.433080 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-06-22 19:12:49.458619 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:12:49.458661 | orchestrator | 2025-06-22 19:12:49.458668 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-06-22 19:12:50.264807 | orchestrator | changed: [testbed-manager] 2025-06-22 19:12:50.264864 | orchestrator | 2025-06-22 19:12:50.264873 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 
2025-06-22 19:16:06.544462 | orchestrator | changed: [testbed-manager] 2025-06-22 19:16:06.544562 | orchestrator | 2025-06-22 19:16:06.544581 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-06-22 19:17:30.824348 | orchestrator | changed: [testbed-manager] 2025-06-22 19:17:30.824473 | orchestrator | 2025-06-22 19:17:30.824491 | orchestrator | TASK [Install required packages] *********************************************** 2025-06-22 19:17:53.528198 | orchestrator | changed: [testbed-manager] 2025-06-22 19:17:53.528275 | orchestrator | 2025-06-22 19:17:53.528286 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-06-22 19:18:02.768163 | orchestrator | changed: [testbed-manager] 2025-06-22 19:18:02.768251 | orchestrator | 2025-06-22 19:18:02.768268 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-06-22 19:18:02.820049 | orchestrator | ok: [testbed-manager] 2025-06-22 19:18:02.820091 | orchestrator | 2025-06-22 19:18:02.820100 | orchestrator | TASK [Get current user] ******************************************************** 2025-06-22 19:18:03.681355 | orchestrator | ok: [testbed-manager] 2025-06-22 19:18:03.681456 | orchestrator | 2025-06-22 19:18:03.681475 | orchestrator | TASK [Create venv directory] *************************************************** 2025-06-22 19:18:04.510877 | orchestrator | changed: [testbed-manager] 2025-06-22 19:18:04.510917 | orchestrator | 2025-06-22 19:18:04.510926 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-06-22 19:18:11.163087 | orchestrator | changed: [testbed-manager] 2025-06-22 19:18:11.163162 | orchestrator | 2025-06-22 19:18:11.163191 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2025-06-22 19:18:17.508292 | orchestrator | changed: [testbed-manager] 2025-06-22 19:18:17.508350 | orchestrator | 2025-06-22 19:18:17.508367 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-06-22 19:18:20.123307 | orchestrator | changed: [testbed-manager] 2025-06-22 19:18:20.123420 | orchestrator | 2025-06-22 19:18:20.123437 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-06-22 19:18:21.919604 | orchestrator | changed: [testbed-manager] 2025-06-22 19:18:21.919696 | orchestrator | 2025-06-22 19:18:21.919713 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-06-22 19:18:23.055715 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-06-22 19:18:23.055806 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-06-22 19:18:23.055823 | orchestrator | 2025-06-22 19:18:23.055838 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-06-22 19:18:23.099906 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-06-22 19:18:23.099988 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-06-22 19:18:23.100004 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-06-22 19:18:23.100087 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2025-06-22 19:18:32.715278 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-06-22 19:18:32.715397 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-06-22 19:18:32.715414 | orchestrator | 2025-06-22 19:18:32.715427 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-06-22 19:18:33.364080 | orchestrator | changed: [testbed-manager] 2025-06-22 19:18:33.364163 | orchestrator | 2025-06-22 19:18:33.364178 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-06-22 19:19:56.090102 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-06-22 19:19:56.090208 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-06-22 19:19:56.090228 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-06-22 19:19:56.090241 | orchestrator | 2025-06-22 19:19:56.090254 | orchestrator | TASK [Install local collections] *********************************************** 2025-06-22 19:19:58.468993 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2025-06-22 19:19:58.469082 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-06-22 19:19:58.469097 | orchestrator | 2025-06-22 19:19:58.469110 | orchestrator | PLAY [Create operator user] **************************************************** 2025-06-22 19:19:58.469123 | orchestrator | 2025-06-22 19:19:58.469135 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-22 19:20:00.000662 | orchestrator | ok: [testbed-manager] 2025-06-22 19:20:00.000770 | orchestrator | 2025-06-22 19:20:00.000791 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-06-22 19:20:00.039219 | orchestrator | ok: [testbed-manager] 2025-06-22 19:20:00.039336 | orchestrator | 2025-06-22 19:20:00.039379 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-06-22 19:20:00.108466 | orchestrator | ok: [testbed-manager] 2025-06-22 19:20:00.108580 | orchestrator | 2025-06-22 19:20:00.108598 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-06-22 19:20:00.932504 | orchestrator | changed: [testbed-manager] 2025-06-22 19:20:00.933055 | orchestrator | 2025-06-22 19:20:00.933084 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-06-22 19:20:01.696751 | orchestrator | changed: [testbed-manager] 2025-06-22 19:20:01.696799 | orchestrator | 2025-06-22 19:20:01.696809 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-06-22 19:20:03.097679 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-06-22 19:20:03.097717 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-06-22 19:20:03.097722 | orchestrator | 2025-06-22 19:20:03.097734 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-06-22 19:20:04.511895 | orchestrator | changed: [testbed-manager] 2025-06-22 19:20:04.511977 | orchestrator | 2025-06-22 19:20:04.511989 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-06-22 19:20:06.363143 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-06-22 
19:20:06.363222 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-06-22 19:20:06.363235 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-06-22 19:20:06.363245 | orchestrator | 2025-06-22 19:20:06.363257 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2025-06-22 19:20:06.409365 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:20:06.409464 | orchestrator | 2025-06-22 19:20:06.409479 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-06-22 19:20:06.994469 | orchestrator | changed: [testbed-manager] 2025-06-22 19:20:06.994541 | orchestrator | 2025-06-22 19:20:06.994554 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-06-22 19:20:07.063821 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:20:07.063871 | orchestrator | 2025-06-22 19:20:07.063877 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-06-22 19:20:07.981169 | orchestrator | changed: [testbed-manager] => (item=None) 2025-06-22 19:20:07.981259 | orchestrator | changed: [testbed-manager] 2025-06-22 19:20:07.981275 | orchestrator | 2025-06-22 19:20:07.981288 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-06-22 19:20:08.018587 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:20:08.018626 | orchestrator | 2025-06-22 19:20:08.018634 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-06-22 19:20:08.058767 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:20:08.058810 | orchestrator | 2025-06-22 19:20:08.058819 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-06-22 19:20:08.100842 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:20:08.100881 | orchestrator | 2025-06-22 19:20:08.100889 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-06-22 19:20:08.146186 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:20:08.146227 | orchestrator | 2025-06-22 19:20:08.146235 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-06-22 19:20:09.013013 | orchestrator | ok: [testbed-manager] 2025-06-22 19:20:09.013086 | orchestrator | 2025-06-22 19:20:09.013099 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-06-22 19:20:09.013108 | orchestrator | 2025-06-22 19:20:09.013116 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-22 19:20:10.574693 | orchestrator | ok: [testbed-manager] 2025-06-22 19:20:10.574775 | orchestrator | 2025-06-22 19:20:10.574792 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-06-22 19:20:11.593378 | orchestrator | changed: [testbed-manager] 2025-06-22 19:20:11.593500 | orchestrator | 2025-06-22 19:20:11.593517 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:20:11.593531 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0 2025-06-22 19:20:11.593543 | orchestrator | 2025-06-22 19:20:11.981322 | orchestrator | ok: Runtime: 0:07:29.699891 2025-06-22 19:20:11.999787 | 
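The collection installs during 'Run manager part 0' above ('Install collections from Ansible galaxy', 'Install local collections') correspond roughly to ansible-galaxy calls like the ones below. The target path follows the /usr/share/ansible directory created beforehand and is an assumption, as is installing the local collections straight from their /opt/src checkouts:

/opt/venv/bin/ansible-galaxy collection install \
    ansible.netcommon ansible.posix 'community.docker>=3.10.2' \
    -p /usr/share/ansible/collections              # assumed install path
/opt/venv/bin/ansible-galaxy collection install \
    /opt/src/osism/ansible-collection-commons \
    /opt/src/osism/ansible-collection-services \
    -p /usr/share/ansible/collections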
2025-06-22 19:20:11.999931 | TASK [Point out that the log in on the manager is now possible] 2025-06-22 19:20:12.047969 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2025-06-22 19:20:12.059452 | 2025-06-22 19:20:12.059601 | TASK [Point out that the following task takes some time and does not give any output] 2025-06-22 19:20:12.096368 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-06-22 19:20:12.105189 | 2025-06-22 19:20:12.105335 | TASK [Run manager part 1 + 2] 2025-06-22 19:20:13.106593 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-06-22 19:20:13.166929 | orchestrator | 2025-06-22 19:20:13.167014 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-06-22 19:20:13.167032 | orchestrator | 2025-06-22 19:20:13.167062 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-22 19:20:16.199751 | orchestrator | ok: [testbed-manager] 2025-06-22 19:20:16.199843 | orchestrator | 2025-06-22 19:20:16.199895 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-06-22 19:20:16.251857 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:20:16.251928 | orchestrator | 2025-06-22 19:20:16.251944 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-06-22 19:20:16.298381 | orchestrator | ok: [testbed-manager] 2025-06-22 19:20:16.298452 | orchestrator | 2025-06-22 19:20:16.298461 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-06-22 19:20:16.333912 | orchestrator | ok: [testbed-manager] 2025-06-22 19:20:16.333997 | orchestrator | 2025-06-22 19:20:16.334062 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-06-22 19:20:16.405761 | orchestrator | ok: [testbed-manager] 2025-06-22 19:20:16.405815 | orchestrator | 2025-06-22 19:20:16.405823 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-06-22 19:20:16.459047 | orchestrator | ok: [testbed-manager] 2025-06-22 19:20:16.459099 | orchestrator | 2025-06-22 19:20:16.459106 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-06-22 19:20:16.502904 | orchestrator | included: /home/zuul-testbed04/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-06-22 19:20:16.502963 | orchestrator | 2025-06-22 19:20:16.502974 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-06-22 19:20:17.220776 | orchestrator | ok: [testbed-manager] 2025-06-22 19:20:17.220860 | orchestrator | 2025-06-22 19:20:17.220880 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-06-22 19:20:17.265216 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:20:17.265271 | orchestrator | 2025-06-22 19:20:17.265278 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-06-22 19:20:18.649985 | orchestrator | changed: [testbed-manager] 2025-06-22 19:20:18.650117 | orchestrator | 2025-06-22 19:20:18.650140 | orchestrator | TASK 
[osism.commons.repository : Remove sources.list file] ********************* 2025-06-22 19:20:19.243407 | orchestrator | ok: [testbed-manager] 2025-06-22 19:20:19.243510 | orchestrator | 2025-06-22 19:20:19.243527 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-06-22 19:20:20.407965 | orchestrator | changed: [testbed-manager] 2025-06-22 19:20:20.408048 | orchestrator | 2025-06-22 19:20:20.408066 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-06-22 19:20:33.979954 | orchestrator | changed: [testbed-manager] 2025-06-22 19:20:33.980032 | orchestrator | 2025-06-22 19:20:33.980049 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-06-22 19:20:34.671167 | orchestrator | ok: [testbed-manager] 2025-06-22 19:20:34.671252 | orchestrator | 2025-06-22 19:20:34.671269 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-06-22 19:20:34.724770 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:20:34.724853 | orchestrator | 2025-06-22 19:20:34.724878 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-06-22 19:20:35.735041 | orchestrator | changed: [testbed-manager] 2025-06-22 19:20:35.735129 | orchestrator | 2025-06-22 19:20:35.735145 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-06-22 19:20:36.723681 | orchestrator | changed: [testbed-manager] 2025-06-22 19:20:36.723726 | orchestrator | 2025-06-22 19:20:36.723735 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-06-22 19:20:37.319734 | orchestrator | changed: [testbed-manager] 2025-06-22 19:20:37.319818 | orchestrator | 2025-06-22 19:20:37.319835 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-06-22 19:20:37.361043 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-06-22 19:20:37.361105 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-06-22 19:20:37.361112 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-06-22 19:20:37.361117 | orchestrator | deprecation_warnings=False in ansible.cfg. 
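The repository role above removes the classic sources.list and copies a deb822-style ubuntu.sources file before refreshing the package cache. An illustrative stand-in for that step is shown below; the mirror URL, suites, and keyring path are assumptions, not values read from this deployment:

cat > /etc/apt/sources.list.d/ubuntu.sources <<'EOF'
Types: deb
URIs: http://archive.ubuntu.com/ubuntu
Suites: noble noble-updates noble-security
Components: main restricted universe multiverse
Signed-By: /usr/share/keyrings/ubuntu-archive-keyring.gpg
EOF
rm -f /etc/apt/sources.list
apt-get update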
2025-06-22 19:20:45.138935 | orchestrator | changed: [testbed-manager] 2025-06-22 19:20:45.139041 | orchestrator | 2025-06-22 19:20:45.139059 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-06-22 19:20:54.319681 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-06-22 19:20:54.319792 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-06-22 19:20:54.319812 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-06-22 19:20:54.319824 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-06-22 19:20:54.319843 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-06-22 19:20:54.319854 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-06-22 19:20:54.319866 | orchestrator | 2025-06-22 19:20:54.319878 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-06-22 19:20:55.393296 | orchestrator | changed: [testbed-manager] 2025-06-22 19:20:55.393352 | orchestrator | 2025-06-22 19:20:55.393360 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-06-22 19:20:55.436539 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:20:55.436591 | orchestrator | 2025-06-22 19:20:55.436600 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-06-22 19:20:58.677287 | orchestrator | changed: [testbed-manager] 2025-06-22 19:20:58.677350 | orchestrator | 2025-06-22 19:20:58.677365 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-06-22 19:20:58.723846 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:20:58.724039 | orchestrator | 2025-06-22 19:20:58.724053 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-06-22 19:22:40.400228 | orchestrator | changed: [testbed-manager] 2025-06-22 19:22:40.400341 | orchestrator | 2025-06-22 19:22:40.400371 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-06-22 19:22:41.544819 | orchestrator | ok: [testbed-manager] 2025-06-22 19:22:41.544854 | orchestrator | 2025-06-22 19:22:41.544861 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:22:41.544867 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-06-22 19:22:41.544873 | orchestrator | 2025-06-22 19:22:41.724818 | orchestrator | ok: Runtime: 0:02:29.224033 2025-06-22 19:22:41.739969 | 2025-06-22 19:22:41.740109 | TASK [Reboot manager] 2025-06-22 19:22:43.276525 | orchestrator | ok: Runtime: 0:00:01.017409 2025-06-22 19:22:43.296275 | 2025-06-22 19:22:43.296418 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-06-22 19:22:58.299283 | orchestrator | ok 2025-06-22 19:22:58.309984 | 2025-06-22 19:22:58.310128 | TASK [Wait a little longer for the manager so that everything is ready] 2025-06-22 19:23:58.358640 | orchestrator | ok 2025-06-22 19:23:58.369070 | 2025-06-22 19:23:58.369181 | TASK [Deploy manager + bootstrap nodes] 2025-06-22 19:24:00.952229 | orchestrator | 2025-06-22 19:24:00.952417 | orchestrator | # DEPLOY MANAGER 2025-06-22 19:24:00.952443 | orchestrator | 2025-06-22 19:24:00.952458 | orchestrator | + set -e 2025-06-22 19:24:00.952509 | orchestrator | + echo 2025-06-22 19:24:00.952526 | orchestrator | + echo '# DEPLOY 
MANAGER' 2025-06-22 19:24:00.952544 | orchestrator | + echo 2025-06-22 19:24:00.952596 | orchestrator | + cat /opt/manager-vars.sh 2025-06-22 19:24:00.957166 | orchestrator | export NUMBER_OF_NODES=6 2025-06-22 19:24:00.957199 | orchestrator | 2025-06-22 19:24:00.957212 | orchestrator | export CEPH_VERSION=reef 2025-06-22 19:24:00.957225 | orchestrator | export CONFIGURATION_VERSION=main 2025-06-22 19:24:00.957242 | orchestrator | export MANAGER_VERSION=9.1.0 2025-06-22 19:24:00.957275 | orchestrator | export OPENSTACK_VERSION=2024.2 2025-06-22 19:24:00.957332 | orchestrator | 2025-06-22 19:24:00.957353 | orchestrator | export ARA=false 2025-06-22 19:24:00.957364 | orchestrator | export DEPLOY_MODE=manager 2025-06-22 19:24:00.957382 | orchestrator | export TEMPEST=false 2025-06-22 19:24:00.957393 | orchestrator | export IS_ZUUL=true 2025-06-22 19:24:00.957404 | orchestrator | 2025-06-22 19:24:00.957422 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.14 2025-06-22 19:24:00.957434 | orchestrator | export EXTERNAL_API=false 2025-06-22 19:24:00.957444 | orchestrator | 2025-06-22 19:24:00.957455 | orchestrator | export IMAGE_USER=ubuntu 2025-06-22 19:24:00.957469 | orchestrator | export IMAGE_NODE_USER=ubuntu 2025-06-22 19:24:00.957535 | orchestrator | 2025-06-22 19:24:00.957546 | orchestrator | export CEPH_STACK=ceph-ansible 2025-06-22 19:24:00.957564 | orchestrator | 2025-06-22 19:24:00.957575 | orchestrator | + echo 2025-06-22 19:24:00.957588 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-06-22 19:24:00.958619 | orchestrator | ++ export INTERACTIVE=false 2025-06-22 19:24:00.958674 | orchestrator | ++ INTERACTIVE=false 2025-06-22 19:24:00.958688 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-06-22 19:24:00.958714 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-06-22 19:24:00.958743 | orchestrator | + source /opt/manager-vars.sh 2025-06-22 19:24:00.958763 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-06-22 19:24:00.958817 | orchestrator | ++ NUMBER_OF_NODES=6 2025-06-22 19:24:00.958838 | orchestrator | ++ export CEPH_VERSION=reef 2025-06-22 19:24:00.958849 | orchestrator | ++ CEPH_VERSION=reef 2025-06-22 19:24:00.958860 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-06-22 19:24:00.958872 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-06-22 19:24:00.958882 | orchestrator | ++ export MANAGER_VERSION=9.1.0 2025-06-22 19:24:00.958893 | orchestrator | ++ MANAGER_VERSION=9.1.0 2025-06-22 19:24:00.958904 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-06-22 19:24:00.958924 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-06-22 19:24:00.958935 | orchestrator | ++ export ARA=false 2025-06-22 19:24:00.958955 | orchestrator | ++ ARA=false 2025-06-22 19:24:00.958974 | orchestrator | ++ export DEPLOY_MODE=manager 2025-06-22 19:24:00.958987 | orchestrator | ++ DEPLOY_MODE=manager 2025-06-22 19:24:00.958998 | orchestrator | ++ export TEMPEST=false 2025-06-22 19:24:00.959013 | orchestrator | ++ TEMPEST=false 2025-06-22 19:24:00.959024 | orchestrator | ++ export IS_ZUUL=true 2025-06-22 19:24:00.959035 | orchestrator | ++ IS_ZUUL=true 2025-06-22 19:24:00.959046 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.14 2025-06-22 19:24:00.959057 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.14 2025-06-22 19:24:00.959068 | orchestrator | ++ export EXTERNAL_API=false 2025-06-22 19:24:00.959079 | orchestrator | ++ EXTERNAL_API=false 2025-06-22 19:24:00.959089 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-06-22 
19:24:00.959100 | orchestrator | ++ IMAGE_USER=ubuntu 2025-06-22 19:24:00.959111 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-06-22 19:24:00.959122 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-06-22 19:24:00.959133 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-06-22 19:24:00.959143 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-06-22 19:24:00.959154 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2025-06-22 19:24:01.020130 | orchestrator | + docker version 2025-06-22 19:24:01.309713 | orchestrator | Client: Docker Engine - Community 2025-06-22 19:24:01.309812 | orchestrator | Version: 27.5.1 2025-06-22 19:24:01.309830 | orchestrator | API version: 1.47 2025-06-22 19:24:01.309842 | orchestrator | Go version: go1.22.11 2025-06-22 19:24:01.309852 | orchestrator | Git commit: 9f9e405 2025-06-22 19:24:01.309864 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2025-06-22 19:24:01.309876 | orchestrator | OS/Arch: linux/amd64 2025-06-22 19:24:01.309887 | orchestrator | Context: default 2025-06-22 19:24:01.309897 | orchestrator | 2025-06-22 19:24:01.309909 | orchestrator | Server: Docker Engine - Community 2025-06-22 19:24:01.309919 | orchestrator | Engine: 2025-06-22 19:24:01.309931 | orchestrator | Version: 27.5.1 2025-06-22 19:24:01.309942 | orchestrator | API version: 1.47 (minimum version 1.24) 2025-06-22 19:24:01.309982 | orchestrator | Go version: go1.22.11 2025-06-22 19:24:01.309994 | orchestrator | Git commit: 4c9b3b0 2025-06-22 19:24:01.310004 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2025-06-22 19:24:01.310072 | orchestrator | OS/Arch: linux/amd64 2025-06-22 19:24:01.310086 | orchestrator | Experimental: false 2025-06-22 19:24:01.310097 | orchestrator | containerd: 2025-06-22 19:24:01.310108 | orchestrator | Version: 1.7.27 2025-06-22 19:24:01.310120 | orchestrator | GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da 2025-06-22 19:24:01.310131 | orchestrator | runc: 2025-06-22 19:24:01.310142 | orchestrator | Version: 1.2.5 2025-06-22 19:24:01.310153 | orchestrator | GitCommit: v1.2.5-0-g59923ef 2025-06-22 19:24:01.310164 | orchestrator | docker-init: 2025-06-22 19:24:01.310175 | orchestrator | Version: 0.19.0 2025-06-22 19:24:01.310187 | orchestrator | GitCommit: de40ad0 2025-06-22 19:24:01.314947 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2025-06-22 19:24:01.326073 | orchestrator | + set -e 2025-06-22 19:24:01.326147 | orchestrator | + source /opt/manager-vars.sh 2025-06-22 19:24:01.326162 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-06-22 19:24:01.326173 | orchestrator | ++ NUMBER_OF_NODES=6 2025-06-22 19:24:01.326184 | orchestrator | ++ export CEPH_VERSION=reef 2025-06-22 19:24:01.326195 | orchestrator | ++ CEPH_VERSION=reef 2025-06-22 19:24:01.326206 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-06-22 19:24:01.326218 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-06-22 19:24:01.326229 | orchestrator | ++ export MANAGER_VERSION=9.1.0 2025-06-22 19:24:01.326240 | orchestrator | ++ MANAGER_VERSION=9.1.0 2025-06-22 19:24:01.326251 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-06-22 19:24:01.326262 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-06-22 19:24:01.326272 | orchestrator | ++ export ARA=false 2025-06-22 19:24:01.326284 | orchestrator | ++ ARA=false 2025-06-22 19:24:01.326294 | orchestrator | ++ export DEPLOY_MODE=manager 2025-06-22 19:24:01.326305 | orchestrator | ++ DEPLOY_MODE=manager 2025-06-22 19:24:01.326316 | orchestrator | ++ 
export TEMPEST=false 2025-06-22 19:24:01.326326 | orchestrator | ++ TEMPEST=false 2025-06-22 19:24:01.326337 | orchestrator | ++ export IS_ZUUL=true 2025-06-22 19:24:01.326348 | orchestrator | ++ IS_ZUUL=true 2025-06-22 19:24:01.326358 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.14 2025-06-22 19:24:01.326385 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.14 2025-06-22 19:24:01.326408 | orchestrator | ++ export EXTERNAL_API=false 2025-06-22 19:24:01.326418 | orchestrator | ++ EXTERNAL_API=false 2025-06-22 19:24:01.326429 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-06-22 19:24:01.326439 | orchestrator | ++ IMAGE_USER=ubuntu 2025-06-22 19:24:01.326450 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-06-22 19:24:01.326461 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-06-22 19:24:01.326496 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-06-22 19:24:01.326516 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-06-22 19:24:01.326535 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-06-22 19:24:01.326554 | orchestrator | ++ export INTERACTIVE=false 2025-06-22 19:24:01.326573 | orchestrator | ++ INTERACTIVE=false 2025-06-22 19:24:01.326585 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-06-22 19:24:01.326600 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-06-22 19:24:01.326622 | orchestrator | + [[ 9.1.0 != \l\a\t\e\s\t ]] 2025-06-22 19:24:01.326633 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 9.1.0 2025-06-22 19:24:01.335869 | orchestrator | + set -e 2025-06-22 19:24:01.335922 | orchestrator | + VERSION=9.1.0 2025-06-22 19:24:01.335941 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 9.1.0/g' /opt/configuration/environments/manager/configuration.yml 2025-06-22 19:24:01.346131 | orchestrator | + [[ 9.1.0 != \l\a\t\e\s\t ]] 2025-06-22 19:24:01.346164 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml 2025-06-22 19:24:01.349612 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml 2025-06-22 19:24:01.352763 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2025-06-22 19:24:01.363088 | orchestrator | /opt/configuration ~ 2025-06-22 19:24:01.363136 | orchestrator | + set -e 2025-06-22 19:24:01.363148 | orchestrator | + pushd /opt/configuration 2025-06-22 19:24:01.363159 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-06-22 19:24:01.365890 | orchestrator | + source /opt/venv/bin/activate 2025-06-22 19:24:01.368377 | orchestrator | ++ deactivate nondestructive 2025-06-22 19:24:01.368401 | orchestrator | ++ '[' -n '' ']' 2025-06-22 19:24:01.368415 | orchestrator | ++ '[' -n '' ']' 2025-06-22 19:24:01.368454 | orchestrator | ++ hash -r 2025-06-22 19:24:01.368466 | orchestrator | ++ '[' -n '' ']' 2025-06-22 19:24:01.368523 | orchestrator | ++ unset VIRTUAL_ENV 2025-06-22 19:24:01.368542 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-06-22 19:24:01.368560 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2025-06-22 19:24:01.368572 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-06-22 19:24:01.368583 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-06-22 19:24:01.368593 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-06-22 19:24:01.368604 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-06-22 19:24:01.368616 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-06-22 19:24:01.368628 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-06-22 19:24:01.368638 | orchestrator | ++ export PATH 2025-06-22 19:24:01.368650 | orchestrator | ++ '[' -n '' ']' 2025-06-22 19:24:01.368660 | orchestrator | ++ '[' -z '' ']' 2025-06-22 19:24:01.368671 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-06-22 19:24:01.368681 | orchestrator | ++ PS1='(venv) ' 2025-06-22 19:24:01.368692 | orchestrator | ++ export PS1 2025-06-22 19:24:01.368703 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-06-22 19:24:01.368714 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-06-22 19:24:01.368724 | orchestrator | ++ hash -r 2025-06-22 19:24:01.368735 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging 2025-06-22 19:24:02.524584 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3) 2025-06-22 19:24:02.526192 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.4) 2025-06-22 19:24:02.527804 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6) 2025-06-22 19:24:02.529516 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.2) 2025-06-22 19:24:02.531184 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (25.0) 2025-06-22 19:24:02.559784 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.2.1) 2025-06-22 19:24:02.564779 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6) 2025-06-22 19:24:02.568072 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.19) 2025-06-22 19:24:02.572034 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2) 2025-06-22 19:24:02.636254 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.2) 2025-06-22 19:24:02.638957 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.10) 2025-06-22 19:24:02.641807 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.5.0) 2025-06-22 19:24:02.643669 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2025.6.15) 2025-06-22 19:24:02.649390 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.2) 2025-06-22 19:24:02.875358 | orchestrator | ++ which gilt 2025-06-22 19:24:02.880943 | 
orchestrator | + GILT=/opt/venv/bin/gilt 2025-06-22 19:24:02.945798 | orchestrator | + /opt/venv/bin/gilt overlay 2025-06-22 19:24:03.179277 | orchestrator | osism.cfg-generics: 2025-06-22 19:24:03.371431 | orchestrator | - copied (v0.20250530.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/ 2025-06-22 19:24:03.371589 | orchestrator | - copied (v0.20250530.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/ 2025-06-22 19:24:03.372247 | orchestrator | - copied (v0.20250530.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/ 2025-06-22 19:24:03.372341 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/ 2025-06-22 19:24:03.998453 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/ 2025-06-22 19:24:04.012208 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/ 2025-06-22 19:24:04.395089 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/ 2025-06-22 19:24:04.451169 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-06-22 19:24:04.451254 | orchestrator | + deactivate 2025-06-22 19:24:04.451270 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-06-22 19:24:04.451284 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-06-22 19:24:04.451296 | orchestrator | + export PATH 2025-06-22 19:24:04.451307 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-06-22 19:24:04.451319 | orchestrator | + '[' -n '' ']' 2025-06-22 19:24:04.451332 | orchestrator | + hash -r 2025-06-22 19:24:04.451343 | orchestrator | + '[' -n '' ']' 2025-06-22 19:24:04.451354 | orchestrator | + unset VIRTUAL_ENV 2025-06-22 19:24:04.451365 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-06-22 19:24:04.451376 | orchestrator | + '[' '!' '' = nondestructive ']' 2025-06-22 19:24:04.451387 | orchestrator | + unset -f deactivate 2025-06-22 19:24:04.451398 | orchestrator | + popd 2025-06-22 19:24:04.451410 | orchestrator | ~ 2025-06-22 19:24:04.453157 | orchestrator | + [[ 9.1.0 == \l\a\t\e\s\t ]] 2025-06-22 19:24:04.453200 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2025-06-22 19:24:04.454283 | orchestrator | ++ semver 9.1.0 7.0.0 2025-06-22 19:24:04.524707 | orchestrator | + [[ 1 -ge 0 ]] 2025-06-22 19:24:04.524806 | orchestrator | + echo 'enable_osism_kubernetes: true' 2025-06-22 19:24:04.524823 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2025-06-22 19:24:04.630272 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-06-22 19:24:04.630381 | orchestrator | + source /opt/venv/bin/activate 2025-06-22 19:24:04.630395 | orchestrator | ++ deactivate nondestructive 2025-06-22 19:24:04.630407 | orchestrator | ++ '[' -n '' ']' 2025-06-22 19:24:04.630418 | orchestrator | ++ '[' -n '' ']' 2025-06-22 19:24:04.630429 | orchestrator | ++ hash -r 2025-06-22 19:24:04.630441 | orchestrator | ++ '[' -n '' ']' 2025-06-22 19:24:04.630452 | orchestrator | ++ unset VIRTUAL_ENV 2025-06-22 19:24:04.630462 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-06-22 19:24:04.630507 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2025-06-22 19:24:04.630521 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-06-22 19:24:04.630531 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-06-22 19:24:04.630542 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-06-22 19:24:04.630553 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-06-22 19:24:04.630565 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-06-22 19:24:04.630577 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-06-22 19:24:04.630613 | orchestrator | ++ export PATH 2025-06-22 19:24:04.630625 | orchestrator | ++ '[' -n '' ']' 2025-06-22 19:24:04.630636 | orchestrator | ++ '[' -z '' ']' 2025-06-22 19:24:04.630646 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-06-22 19:24:04.630657 | orchestrator | ++ PS1='(venv) ' 2025-06-22 19:24:04.630668 | orchestrator | ++ export PS1 2025-06-22 19:24:04.630679 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-06-22 19:24:04.630689 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-06-22 19:24:04.630700 | orchestrator | ++ hash -r 2025-06-22 19:24:04.630722 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2025-06-22 19:24:05.869758 | orchestrator | 2025-06-22 19:24:05.869863 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2025-06-22 19:24:05.869881 | orchestrator | 2025-06-22 19:24:05.869893 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-06-22 19:24:06.473676 | orchestrator | ok: [testbed-manager] 2025-06-22 19:24:06.473767 | orchestrator | 2025-06-22 19:24:06.473780 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-06-22 19:24:07.572525 | orchestrator | changed: [testbed-manager] 2025-06-22 19:24:07.572615 | orchestrator | 2025-06-22 19:24:07.572625 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2025-06-22 19:24:07.572635 | orchestrator | 2025-06-22 19:24:07.572643 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-22 19:24:10.083122 | orchestrator | ok: [testbed-manager] 2025-06-22 19:24:10.083238 | orchestrator | 2025-06-22 19:24:10.083255 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2025-06-22 19:24:10.142708 | orchestrator | ok: [testbed-manager] 2025-06-22 19:24:10.142810 | orchestrator | 2025-06-22 19:24:10.142826 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2025-06-22 19:24:10.633455 | orchestrator | changed: [testbed-manager] 2025-06-22 19:24:10.633580 | orchestrator | 2025-06-22 19:24:10.633600 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2025-06-22 19:24:10.675977 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:24:10.676075 | orchestrator | 2025-06-22 19:24:10.676090 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-06-22 19:24:11.072337 | orchestrator | changed: [testbed-manager] 2025-06-22 19:24:11.072435 | orchestrator | 2025-06-22 19:24:11.072451 | orchestrator | TASK [Use insecure glance configuration] 
*************************************** 2025-06-22 19:24:11.130357 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:24:11.130452 | orchestrator | 2025-06-22 19:24:11.130468 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2025-06-22 19:24:11.458741 | orchestrator | ok: [testbed-manager] 2025-06-22 19:24:11.458833 | orchestrator | 2025-06-22 19:24:11.458849 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2025-06-22 19:24:11.579860 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:24:11.579964 | orchestrator | 2025-06-22 19:24:11.579980 | orchestrator | PLAY [Apply role traefik] ****************************************************** 2025-06-22 19:24:11.579993 | orchestrator | 2025-06-22 19:24:11.580004 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-22 19:24:13.527644 | orchestrator | ok: [testbed-manager] 2025-06-22 19:24:13.527717 | orchestrator | 2025-06-22 19:24:13.527724 | orchestrator | TASK [Apply traefik role] ****************************************************** 2025-06-22 19:24:13.625588 | orchestrator | included: osism.services.traefik for testbed-manager 2025-06-22 19:24:13.625689 | orchestrator | 2025-06-22 19:24:13.625705 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2025-06-22 19:24:13.682982 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2025-06-22 19:24:13.683086 | orchestrator | 2025-06-22 19:24:13.683102 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2025-06-22 19:24:14.875565 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2025-06-22 19:24:14.875665 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2025-06-22 19:24:14.875682 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2025-06-22 19:24:14.875694 | orchestrator | 2025-06-22 19:24:14.875707 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2025-06-22 19:24:16.762881 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2025-06-22 19:24:16.762966 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2025-06-22 19:24:16.762973 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2025-06-22 19:24:16.762979 | orchestrator | 2025-06-22 19:24:16.762984 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2025-06-22 19:24:17.458597 | orchestrator | changed: [testbed-manager] => (item=None) 2025-06-22 19:24:17.458712 | orchestrator | changed: [testbed-manager] 2025-06-22 19:24:17.458738 | orchestrator | 2025-06-22 19:24:17.458759 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2025-06-22 19:24:18.138404 | orchestrator | changed: [testbed-manager] => (item=None) 2025-06-22 19:24:18.138536 | orchestrator | changed: [testbed-manager] 2025-06-22 19:24:18.138554 | orchestrator | 2025-06-22 19:24:18.138567 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2025-06-22 19:24:18.202971 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:24:18.203083 | orchestrator | 2025-06-22 19:24:18.203101 | orchestrator | TASK [osism.services.traefik : Remove 
dynamic configuration] ******************* 2025-06-22 19:24:18.567179 | orchestrator | ok: [testbed-manager] 2025-06-22 19:24:18.567278 | orchestrator | 2025-06-22 19:24:18.567295 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2025-06-22 19:24:18.632870 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2025-06-22 19:24:18.632988 | orchestrator | 2025-06-22 19:24:18.633006 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2025-06-22 19:24:19.770104 | orchestrator | changed: [testbed-manager] 2025-06-22 19:24:19.770195 | orchestrator | 2025-06-22 19:24:19.770211 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2025-06-22 19:24:20.603177 | orchestrator | changed: [testbed-manager] 2025-06-22 19:24:20.603274 | orchestrator | 2025-06-22 19:24:20.603290 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2025-06-22 19:24:32.244624 | orchestrator | changed: [testbed-manager] 2025-06-22 19:24:32.244750 | orchestrator | 2025-06-22 19:24:32.244816 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2025-06-22 19:24:32.295919 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:24:32.296000 | orchestrator | 2025-06-22 19:24:32.296014 | orchestrator | PLAY [Deploy manager service] ************************************************** 2025-06-22 19:24:32.296027 | orchestrator | 2025-06-22 19:24:32.296039 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-22 19:24:34.239408 | orchestrator | ok: [testbed-manager] 2025-06-22 19:24:34.239532 | orchestrator | 2025-06-22 19:24:34.239549 | orchestrator | TASK [Apply manager role] ****************************************************** 2025-06-22 19:24:34.354763 | orchestrator | included: osism.services.manager for testbed-manager 2025-06-22 19:24:34.354852 | orchestrator | 2025-06-22 19:24:34.354868 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2025-06-22 19:24:34.430720 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2025-06-22 19:24:34.430817 | orchestrator | 2025-06-22 19:24:34.430833 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2025-06-22 19:24:37.091456 | orchestrator | ok: [testbed-manager] 2025-06-22 19:24:37.091615 | orchestrator | 2025-06-22 19:24:37.091634 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2025-06-22 19:24:37.149740 | orchestrator | ok: [testbed-manager] 2025-06-22 19:24:37.149830 | orchestrator | 2025-06-22 19:24:37.149844 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2025-06-22 19:24:37.277429 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2025-06-22 19:24:37.277584 | orchestrator | 2025-06-22 19:24:37.277601 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2025-06-22 19:24:40.124685 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2025-06-22 19:24:40.124773 | orchestrator | 
changed: [testbed-manager] => (item=/opt/archive) 2025-06-22 19:24:40.124783 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2025-06-22 19:24:40.124791 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2025-06-22 19:24:40.124799 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2025-06-22 19:24:40.124807 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2025-06-22 19:24:40.124814 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2025-06-22 19:24:40.124821 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2025-06-22 19:24:40.124829 | orchestrator | 2025-06-22 19:24:40.124842 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2025-06-22 19:24:40.773593 | orchestrator | changed: [testbed-manager] 2025-06-22 19:24:40.773692 | orchestrator | 2025-06-22 19:24:40.773708 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2025-06-22 19:24:41.427279 | orchestrator | changed: [testbed-manager] 2025-06-22 19:24:41.427378 | orchestrator | 2025-06-22 19:24:41.427393 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2025-06-22 19:24:41.516504 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2025-06-22 19:24:41.516588 | orchestrator | 2025-06-22 19:24:41.516602 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2025-06-22 19:24:42.767018 | orchestrator | changed: [testbed-manager] => (item=ara) 2025-06-22 19:24:42.767125 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2025-06-22 19:24:42.767141 | orchestrator | 2025-06-22 19:24:42.767154 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2025-06-22 19:24:43.399578 | orchestrator | changed: [testbed-manager] 2025-06-22 19:24:43.399677 | orchestrator | 2025-06-22 19:24:43.399692 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2025-06-22 19:24:43.466763 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:24:43.466850 | orchestrator | 2025-06-22 19:24:43.466865 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2025-06-22 19:24:43.532130 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2025-06-22 19:24:43.532226 | orchestrator | 2025-06-22 19:24:43.532241 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2025-06-22 19:24:44.938260 | orchestrator | changed: [testbed-manager] => (item=None) 2025-06-22 19:24:44.938369 | orchestrator | changed: [testbed-manager] => (item=None) 2025-06-22 19:24:44.938383 | orchestrator | changed: [testbed-manager] 2025-06-22 19:24:44.938398 | orchestrator | 2025-06-22 19:24:44.938410 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2025-06-22 19:24:45.593266 | orchestrator | changed: [testbed-manager] 2025-06-22 19:24:45.593369 | orchestrator | 2025-06-22 19:24:45.593387 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2025-06-22 19:24:45.657122 | orchestrator | skipping: [testbed-manager] 2025-06-22 
19:24:45.657244 | orchestrator | 2025-06-22 19:24:45.657270 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2025-06-22 19:24:45.764424 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2025-06-22 19:24:45.764551 | orchestrator | 2025-06-22 19:24:45.764567 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2025-06-22 19:24:46.315783 | orchestrator | changed: [testbed-manager] 2025-06-22 19:24:46.315850 | orchestrator | 2025-06-22 19:24:46.315858 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2025-06-22 19:24:46.711780 | orchestrator | changed: [testbed-manager] 2025-06-22 19:24:46.711859 | orchestrator | 2025-06-22 19:24:46.711870 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2025-06-22 19:24:47.977107 | orchestrator | changed: [testbed-manager] => (item=conductor) 2025-06-22 19:24:47.977257 | orchestrator | changed: [testbed-manager] => (item=openstack) 2025-06-22 19:24:47.977274 | orchestrator | 2025-06-22 19:24:47.977289 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2025-06-22 19:24:48.635594 | orchestrator | changed: [testbed-manager] 2025-06-22 19:24:48.635698 | orchestrator | 2025-06-22 19:24:48.635714 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2025-06-22 19:24:49.055413 | orchestrator | ok: [testbed-manager] 2025-06-22 19:24:49.055554 | orchestrator | 2025-06-22 19:24:49.055571 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2025-06-22 19:24:49.429826 | orchestrator | changed: [testbed-manager] 2025-06-22 19:24:49.429921 | orchestrator | 2025-06-22 19:24:49.429935 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2025-06-22 19:24:49.483775 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:24:49.483862 | orchestrator | 2025-06-22 19:24:49.483876 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2025-06-22 19:24:49.577672 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2025-06-22 19:24:49.577765 | orchestrator | 2025-06-22 19:24:49.577780 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2025-06-22 19:24:49.631819 | orchestrator | ok: [testbed-manager] 2025-06-22 19:24:49.631902 | orchestrator | 2025-06-22 19:24:49.631916 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2025-06-22 19:24:51.692243 | orchestrator | changed: [testbed-manager] => (item=osism) 2025-06-22 19:24:51.692336 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2025-06-22 19:24:51.692345 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2025-06-22 19:24:51.692350 | orchestrator | 2025-06-22 19:24:51.692356 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2025-06-22 19:24:52.441586 | orchestrator | changed: [testbed-manager] 2025-06-22 19:24:52.441674 | orchestrator | 2025-06-22 19:24:52.441687 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] 
********************* 2025-06-22 19:24:53.180473 | orchestrator | changed: [testbed-manager] 2025-06-22 19:24:53.180683 | orchestrator | 2025-06-22 19:24:53.180699 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2025-06-22 19:24:53.906419 | orchestrator | changed: [testbed-manager] 2025-06-22 19:24:53.906597 | orchestrator | 2025-06-22 19:24:53.906618 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2025-06-22 19:24:53.983895 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2025-06-22 19:24:53.983989 | orchestrator | 2025-06-22 19:24:53.984004 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2025-06-22 19:24:54.040565 | orchestrator | ok: [testbed-manager] 2025-06-22 19:24:54.040654 | orchestrator | 2025-06-22 19:24:54.040669 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2025-06-22 19:24:54.787247 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2025-06-22 19:24:54.787350 | orchestrator | 2025-06-22 19:24:54.787367 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2025-06-22 19:24:54.883738 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2025-06-22 19:24:54.883823 | orchestrator | 2025-06-22 19:24:54.883838 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2025-06-22 19:24:55.636849 | orchestrator | changed: [testbed-manager] 2025-06-22 19:24:55.636980 | orchestrator | 2025-06-22 19:24:55.637008 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2025-06-22 19:24:56.271387 | orchestrator | ok: [testbed-manager] 2025-06-22 19:24:56.271484 | orchestrator | 2025-06-22 19:24:56.271542 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2025-06-22 19:24:56.327842 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:24:56.327917 | orchestrator | 2025-06-22 19:24:56.327933 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2025-06-22 19:24:56.391801 | orchestrator | ok: [testbed-manager] 2025-06-22 19:24:56.391887 | orchestrator | 2025-06-22 19:24:56.391900 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2025-06-22 19:24:57.213983 | orchestrator | changed: [testbed-manager] 2025-06-22 19:24:57.214131 | orchestrator | 2025-06-22 19:24:57.214148 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2025-06-22 19:26:03.338077 | orchestrator | changed: [testbed-manager] 2025-06-22 19:26:03.338195 | orchestrator | 2025-06-22 19:26:03.338217 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2025-06-22 19:26:04.378284 | orchestrator | ok: [testbed-manager] 2025-06-22 19:26:04.378392 | orchestrator | 2025-06-22 19:26:04.378408 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2025-06-22 19:26:04.437482 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:26:04.437665 | orchestrator | 2025-06-22 19:26:04.437683 | orchestrator | TASK [osism.services.manager : 
Manage manager service] ************************* 2025-06-22 19:26:07.291190 | orchestrator | changed: [testbed-manager] 2025-06-22 19:26:07.291301 | orchestrator | 2025-06-22 19:26:07.291319 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2025-06-22 19:26:07.363432 | orchestrator | ok: [testbed-manager] 2025-06-22 19:26:07.363598 | orchestrator | 2025-06-22 19:26:07.363624 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-06-22 19:26:07.363645 | orchestrator | 2025-06-22 19:26:07.363664 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2025-06-22 19:26:07.427954 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:26:07.428058 | orchestrator | 2025-06-22 19:26:07.428103 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2025-06-22 19:27:07.483666 | orchestrator | Pausing for 60 seconds 2025-06-22 19:27:07.483785 | orchestrator | changed: [testbed-manager] 2025-06-22 19:27:07.483801 | orchestrator | 2025-06-22 19:27:07.483815 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2025-06-22 19:27:11.718480 | orchestrator | changed: [testbed-manager] 2025-06-22 19:27:11.718612 | orchestrator | 2025-06-22 19:27:11.718627 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2025-06-22 19:27:53.469390 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2025-06-22 19:27:53.469449 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 2025-06-22 19:27:53.469455 | orchestrator | changed: [testbed-manager] 2025-06-22 19:27:53.469460 | orchestrator | 2025-06-22 19:27:53.469465 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2025-06-22 19:28:02.139428 | orchestrator | changed: [testbed-manager] 2025-06-22 19:28:02.139597 | orchestrator | 2025-06-22 19:28:02.139637 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2025-06-22 19:28:02.230651 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2025-06-22 19:28:02.230718 | orchestrator | 2025-06-22 19:28:02.230733 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-06-22 19:28:02.230745 | orchestrator | 2025-06-22 19:28:02.230756 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2025-06-22 19:28:02.297764 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:28:02.297850 | orchestrator | 2025-06-22 19:28:02.297865 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:28:02.297877 | orchestrator | testbed-manager : ok=64 changed=35 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2025-06-22 19:28:02.297889 | orchestrator | 2025-06-22 19:28:02.397122 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-06-22 19:28:02.397219 | orchestrator | + deactivate 2025-06-22 19:28:02.397234 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-06-22 19:28:02.397247 | orchestrator | + 
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-06-22 19:28:02.397259 | orchestrator | + export PATH 2025-06-22 19:28:02.397274 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-06-22 19:28:02.397287 | orchestrator | + '[' -n '' ']' 2025-06-22 19:28:02.397299 | orchestrator | + hash -r 2025-06-22 19:28:02.397310 | orchestrator | + '[' -n '' ']' 2025-06-22 19:28:02.397321 | orchestrator | + unset VIRTUAL_ENV 2025-06-22 19:28:02.397332 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-06-22 19:28:02.397343 | orchestrator | + '[' '!' '' = nondestructive ']' 2025-06-22 19:28:02.397354 | orchestrator | + unset -f deactivate 2025-06-22 19:28:02.397366 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2025-06-22 19:28:02.405320 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-06-22 19:28:02.405365 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-06-22 19:28:02.405377 | orchestrator | + local max_attempts=60 2025-06-22 19:28:02.405388 | orchestrator | + local name=ceph-ansible 2025-06-22 19:28:02.405399 | orchestrator | + local attempt_num=1 2025-06-22 19:28:02.405914 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-22 19:28:02.438339 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-06-22 19:28:02.438385 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-06-22 19:28:02.438398 | orchestrator | + local max_attempts=60 2025-06-22 19:28:02.438409 | orchestrator | + local name=kolla-ansible 2025-06-22 19:28:02.438421 | orchestrator | + local attempt_num=1 2025-06-22 19:28:02.439585 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-06-22 19:28:02.484089 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-06-22 19:28:02.484145 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-06-22 19:28:02.484158 | orchestrator | + local max_attempts=60 2025-06-22 19:28:02.484170 | orchestrator | + local name=osism-ansible 2025-06-22 19:28:02.484182 | orchestrator | + local attempt_num=1 2025-06-22 19:28:02.485012 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-06-22 19:28:02.516100 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-06-22 19:28:02.516166 | orchestrator | + [[ true == \t\r\u\e ]] 2025-06-22 19:28:02.516178 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-06-22 19:28:03.270675 | orchestrator | + docker compose --project-directory /opt/manager ps 2025-06-22 19:28:03.453406 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-06-22 19:28:03.453496 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20250530.0 "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy) 2025-06-22 19:28:03.453507 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20250530.0 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy) 2025-06-22 19:28:03.453515 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp 2025-06-22 19:28:03.453599 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp 2025-06-22 19:28:03.453607 | orchestrator | 
manager-beat-1 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy) 2025-06-22 19:28:03.453615 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy) 2025-06-22 19:28:03.453622 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20250530.0 "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 52 seconds (healthy) 2025-06-22 19:28:03.453629 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" listener About a minute ago Up About a minute (healthy) 2025-06-22 19:28:03.453636 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.7.2 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp 2025-06-22 19:28:03.453815 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" openstack About a minute ago Up About a minute (healthy) 2025-06-22 19:28:03.453823 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.4-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp 2025-06-22 19:28:03.453830 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20250531.0 "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy) 2025-06-22 19:28:03.453837 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20250530.0 "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy) 2025-06-22 19:28:03.453844 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy) 2025-06-22 19:28:03.461780 | orchestrator | ++ semver 9.1.0 7.0.0 2025-06-22 19:28:03.497344 | orchestrator | + [[ 1 -ge 0 ]] 2025-06-22 19:28:03.497417 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2025-06-22 19:28:03.500377 | orchestrator | + osism apply resolvconf -l testbed-manager 2025-06-22 19:28:05.242738 | orchestrator | Registering Redlock._acquired_script 2025-06-22 19:28:05.242862 | orchestrator | Registering Redlock._extend_script 2025-06-22 19:28:05.242877 | orchestrator | Registering Redlock._release_script 2025-06-22 19:28:05.452439 | orchestrator | 2025-06-22 19:28:05 | INFO  | Task d057fe15-a2b0-42c1-a840-80e8960dac80 (resolvconf) was prepared for execution. 2025-06-22 19:28:05.452535 | orchestrator | 2025-06-22 19:28:05 | INFO  | It takes a moment until task d057fe15-a2b0-42c1-a840-80e8960dac80 (resolvconf) has been started and output is visible here. 
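The wait_for_container_healthy calls traced above poll Docker's health status for each tool container (ceph-ansible, kolla-ansible, osism-ansible) before the deployment continues. A minimal sketch of such a helper, assuming a simple poll loop: only the docker inspect probe and the variable names come from the trace, while the sleep interval and the failure message are assumptions.

#!/usr/bin/env bash
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    # Poll the health status reported by the container's HEALTHCHECK.
    until [[ "$(/usr/bin/docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
        if (( attempt_num >= max_attempts )); then
            echo "Container ${name} did not become healthy in time" >&2
            return 1
        fi
        attempt_num=$(( attempt_num + 1 ))
        sleep 5  # assumed poll interval
    done
}

wait_for_container_healthy 60 ceph-ansible   # as invoked in the trace above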
2025-06-22 19:28:09.129198 | orchestrator | 2025-06-22 19:28:09.129744 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2025-06-22 19:28:09.130870 | orchestrator | 2025-06-22 19:28:09.133176 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-22 19:28:09.134087 | orchestrator | Sunday 22 June 2025 19:28:09 +0000 (0:00:00.131) 0:00:00.131 *********** 2025-06-22 19:28:12.564746 | orchestrator | ok: [testbed-manager] 2025-06-22 19:28:12.565710 | orchestrator | 2025-06-22 19:28:12.566876 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-06-22 19:28:12.567706 | orchestrator | Sunday 22 June 2025 19:28:12 +0000 (0:00:03.438) 0:00:03.569 *********** 2025-06-22 19:28:12.626916 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:28:12.627217 | orchestrator | 2025-06-22 19:28:12.628130 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-06-22 19:28:12.629091 | orchestrator | Sunday 22 June 2025 19:28:12 +0000 (0:00:00.061) 0:00:03.631 *********** 2025-06-22 19:28:12.707534 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2025-06-22 19:28:12.707705 | orchestrator | 2025-06-22 19:28:12.708228 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-06-22 19:28:12.708867 | orchestrator | Sunday 22 June 2025 19:28:12 +0000 (0:00:00.079) 0:00:03.710 *********** 2025-06-22 19:28:12.772899 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2025-06-22 19:28:12.772988 | orchestrator | 2025-06-22 19:28:12.773283 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-06-22 19:28:12.774157 | orchestrator | Sunday 22 June 2025 19:28:12 +0000 (0:00:00.066) 0:00:03.777 *********** 2025-06-22 19:28:13.863981 | orchestrator | ok: [testbed-manager] 2025-06-22 19:28:13.865075 | orchestrator | 2025-06-22 19:28:13.866136 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-06-22 19:28:13.869390 | orchestrator | Sunday 22 June 2025 19:28:13 +0000 (0:00:01.090) 0:00:04.867 *********** 2025-06-22 19:28:13.928741 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:28:13.929114 | orchestrator | 2025-06-22 19:28:13.929410 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-06-22 19:28:13.930528 | orchestrator | Sunday 22 June 2025 19:28:13 +0000 (0:00:00.065) 0:00:04.933 *********** 2025-06-22 19:28:14.392063 | orchestrator | ok: [testbed-manager] 2025-06-22 19:28:14.392158 | orchestrator | 2025-06-22 19:28:14.392480 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-06-22 19:28:14.393825 | orchestrator | Sunday 22 June 2025 19:28:14 +0000 (0:00:00.462) 0:00:05.395 *********** 2025-06-22 19:28:14.467681 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:28:14.468001 | orchestrator | 2025-06-22 19:28:14.468521 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-06-22 19:28:14.469320 | orchestrator | Sunday 22 June 2025 19:28:14 +0000 (0:00:00.075) 0:00:05.470 
*********** 2025-06-22 19:28:14.979114 | orchestrator | changed: [testbed-manager] 2025-06-22 19:28:14.979634 | orchestrator | 2025-06-22 19:28:14.979667 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-06-22 19:28:14.980269 | orchestrator | Sunday 22 June 2025 19:28:14 +0000 (0:00:00.511) 0:00:05.982 *********** 2025-06-22 19:28:15.974250 | orchestrator | changed: [testbed-manager] 2025-06-22 19:28:15.974607 | orchestrator | 2025-06-22 19:28:15.975853 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-06-22 19:28:15.976629 | orchestrator | Sunday 22 June 2025 19:28:15 +0000 (0:00:00.995) 0:00:06.978 *********** 2025-06-22 19:28:16.835184 | orchestrator | ok: [testbed-manager] 2025-06-22 19:28:16.836135 | orchestrator | 2025-06-22 19:28:16.837172 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-06-22 19:28:16.838290 | orchestrator | Sunday 22 June 2025 19:28:16 +0000 (0:00:00.860) 0:00:07.838 *********** 2025-06-22 19:28:16.915230 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2025-06-22 19:28:16.916194 | orchestrator | 2025-06-22 19:28:16.917296 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-06-22 19:28:16.918275 | orchestrator | Sunday 22 June 2025 19:28:16 +0000 (0:00:00.081) 0:00:07.920 *********** 2025-06-22 19:28:18.059868 | orchestrator | changed: [testbed-manager] 2025-06-22 19:28:18.060849 | orchestrator | 2025-06-22 19:28:18.060887 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:28:18.060918 | orchestrator | 2025-06-22 19:28:18 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-22 19:28:18.060932 | orchestrator | 2025-06-22 19:28:18 | INFO  | Please wait and do not abort execution. 
2025-06-22 19:28:18.060944 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-22 19:28:18.061210 | orchestrator | 2025-06-22 19:28:18.062956 | orchestrator | 2025-06-22 19:28:18.063649 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:28:18.064387 | orchestrator | Sunday 22 June 2025 19:28:18 +0000 (0:00:01.143) 0:00:09.063 *********** 2025-06-22 19:28:18.066159 | orchestrator | =============================================================================== 2025-06-22 19:28:18.066218 | orchestrator | Gathering Facts --------------------------------------------------------- 3.44s 2025-06-22 19:28:18.066831 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.14s 2025-06-22 19:28:18.067414 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.09s 2025-06-22 19:28:18.068137 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.00s 2025-06-22 19:28:18.068705 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.86s 2025-06-22 19:28:18.069301 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.51s 2025-06-22 19:28:18.069898 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.46s 2025-06-22 19:28:18.070277 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.08s 2025-06-22 19:28:18.070612 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.08s 2025-06-22 19:28:18.070985 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s 2025-06-22 19:28:18.071312 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.07s 2025-06-22 19:28:18.071682 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.07s 2025-06-22 19:28:18.072466 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.06s 2025-06-22 19:28:18.490606 | orchestrator | + osism apply sshconfig 2025-06-22 19:28:20.123386 | orchestrator | Registering Redlock._acquired_script 2025-06-22 19:28:20.123490 | orchestrator | Registering Redlock._extend_script 2025-06-22 19:28:20.123513 | orchestrator | Registering Redlock._release_script 2025-06-22 19:28:20.182510 | orchestrator | 2025-06-22 19:28:20 | INFO  | Task ebcbb953-a18e-46f5-9907-69a0cd937a8b (sshconfig) was prepared for execution. 2025-06-22 19:28:20.182622 | orchestrator | 2025-06-22 19:28:20 | INFO  | It takes a moment until task ebcbb953-a18e-46f5-9907-69a0cd937a8b (sshconfig) has been started and output is visible here. 
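For reference, the resolvconf play recapped a few entries above hands /etc/resolv.conf over to systemd-resolved. A hedged sketch of the equivalent manual steps, derived only from the task names; the removed package name and the exact configuration contents are assumptions.

apt-get remove --yes resolvconf                                # "Remove packages configuring /etc/resolv.conf" (assumed package)
ln -sf /run/systemd/resolve/stub-resolv.conf /etc/resolv.conf  # "Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf"
systemctl enable --now systemd-resolved                        # "Start/enable systemd-resolved service"
systemctl restart systemd-resolved                             # handler "Restart systemd-resolved service"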
2025-06-22 19:28:24.147941 | orchestrator | 2025-06-22 19:28:24.149383 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2025-06-22 19:28:24.150680 | orchestrator | 2025-06-22 19:28:24.152060 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2025-06-22 19:28:24.152959 | orchestrator | Sunday 22 June 2025 19:28:24 +0000 (0:00:00.167) 0:00:00.167 *********** 2025-06-22 19:28:24.711957 | orchestrator | ok: [testbed-manager] 2025-06-22 19:28:24.712688 | orchestrator | 2025-06-22 19:28:24.713781 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2025-06-22 19:28:24.714564 | orchestrator | Sunday 22 June 2025 19:28:24 +0000 (0:00:00.566) 0:00:00.733 *********** 2025-06-22 19:28:25.219337 | orchestrator | changed: [testbed-manager] 2025-06-22 19:28:25.219894 | orchestrator | 2025-06-22 19:28:25.220610 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2025-06-22 19:28:25.221248 | orchestrator | Sunday 22 June 2025 19:28:25 +0000 (0:00:00.506) 0:00:01.239 *********** 2025-06-22 19:28:31.005231 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2025-06-22 19:28:31.005342 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2025-06-22 19:28:31.005503 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2025-06-22 19:28:31.006076 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2025-06-22 19:28:31.006352 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2025-06-22 19:28:31.007018 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2025-06-22 19:28:31.007311 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5) 2025-06-22 19:28:31.009496 | orchestrator | 2025-06-22 19:28:31.009525 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2025-06-22 19:28:31.009586 | orchestrator | Sunday 22 June 2025 19:28:30 +0000 (0:00:05.784) 0:00:07.024 *********** 2025-06-22 19:28:31.081476 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:28:31.081546 | orchestrator | 2025-06-22 19:28:31.082853 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2025-06-22 19:28:31.083656 | orchestrator | Sunday 22 June 2025 19:28:31 +0000 (0:00:00.077) 0:00:07.102 *********** 2025-06-22 19:28:31.691193 | orchestrator | changed: [testbed-manager] 2025-06-22 19:28:31.691290 | orchestrator | 2025-06-22 19:28:31.692380 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:28:31.693803 | orchestrator | 2025-06-22 19:28:31 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-22 19:28:31.693846 | orchestrator | 2025-06-22 19:28:31 | INFO  | Please wait and do not abort execution. 
2025-06-22 19:28:31.694736 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-22 19:28:31.695958 | orchestrator | 2025-06-22 19:28:31.697068 | orchestrator | 2025-06-22 19:28:31.698436 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:28:31.699768 | orchestrator | Sunday 22 June 2025 19:28:31 +0000 (0:00:00.609) 0:00:07.711 *********** 2025-06-22 19:28:31.700971 | orchestrator | =============================================================================== 2025-06-22 19:28:31.702216 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.78s 2025-06-22 19:28:31.702992 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.61s 2025-06-22 19:28:31.704114 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.57s 2025-06-22 19:28:31.705128 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.51s 2025-06-22 19:28:31.706281 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.08s 2025-06-22 19:28:32.130260 | orchestrator | + osism apply known-hosts 2025-06-22 19:28:33.781612 | orchestrator | Registering Redlock._acquired_script 2025-06-22 19:28:33.781748 | orchestrator | Registering Redlock._extend_script 2025-06-22 19:28:33.781763 | orchestrator | Registering Redlock._release_script 2025-06-22 19:28:33.836383 | orchestrator | 2025-06-22 19:28:33 | INFO  | Task 86270431-4127-4317-abf7-f286a99627cd (known-hosts) was prepared for execution. 2025-06-22 19:28:33.836475 | orchestrator | 2025-06-22 19:28:33 | INFO  | It takes a moment until task 86270431-4127-4317-abf7-f286a99627cd (known-hosts) has been started and output is visible here. 
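The sshconfig play recapped above follows a snippet-and-assemble pattern: one configuration fragment per host under ~/.ssh/config.d, concatenated into ~/.ssh/config. A hedged illustration of that pattern; the Host options shown are examples, not the role's actual template.

mkdir -p ~/.ssh/config.d
cat > ~/.ssh/config.d/testbed-node-0 <<'EOF'
Host testbed-node-0
    HostName 192.168.16.10
    User dragon
EOF
cat ~/.ssh/config.d/* > ~/.ssh/config   # "Assemble ssh config"
chmod 0600 ~/.ssh/config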
2025-06-22 19:28:37.924184 | orchestrator | 2025-06-22 19:28:37.925347 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2025-06-22 19:28:37.925768 | orchestrator | 2025-06-22 19:28:37.928505 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2025-06-22 19:28:37.929143 | orchestrator | Sunday 22 June 2025 19:28:37 +0000 (0:00:00.165) 0:00:00.165 *********** 2025-06-22 19:28:44.068290 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-06-22 19:28:44.068521 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-06-22 19:28:44.069755 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-06-22 19:28:44.070734 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-06-22 19:28:44.071761 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-06-22 19:28:44.072515 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-06-22 19:28:44.073477 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-06-22 19:28:44.074216 | orchestrator | 2025-06-22 19:28:44.074852 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2025-06-22 19:28:44.075619 | orchestrator | Sunday 22 June 2025 19:28:44 +0000 (0:00:06.144) 0:00:06.310 *********** 2025-06-22 19:28:44.245453 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-06-22 19:28:44.246737 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-06-22 19:28:44.246780 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-06-22 19:28:44.247807 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-06-22 19:28:44.249372 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-06-22 19:28:44.252007 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-06-22 19:28:44.253048 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-06-22 19:28:44.254433 | orchestrator | 2025-06-22 19:28:44.255477 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-22 19:28:44.256392 | orchestrator | Sunday 22 June 2025 19:28:44 +0000 (0:00:00.178) 0:00:06.488 *********** 2025-06-22 19:28:45.477741 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMrp8qIrGYq/IsDyaLjIlh7KAl1H3HB4v7L/aKSUZy2k) 2025-06-22 19:28:45.479119 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCgrBgXd3/cFBbXIo2wqTXxBjMeyw6qLkkDmWsE/DOzWPK/xndhvYB6gG4P5zrzavhMzP7C98BTNrD7T8+M6g4pOXhzsDOvFQzl21qLUeaH9+rVHYuWVbo4sVfsL98nBi7srpwoKskhUAmMT7hwIAgKPtRClmXrgls7Tap2Wu1RZ8ydZt7crtqNLrJzamGL/9fOalvMLWlP1DWBKOZ7yYCkxpcNbRl87GG+zt2zl8YhpORyeAB+jFAZOS7B2pQcMTHKXVN+dK+x8uGuTG7mlecbF1lYbXfMkki8C41jF1b9Qk5kS/I5x/UVta0/QmmaYYOvvytwTi1c16g2SFmwzHdrgjVpVEF2LvdEHTg4JqKIRVxrzI5vv8V4Mx+AgbPVz9/KdZ1DgGOgJgiFLUjjV1KXyTW/fcydXv+bgsqylinSNuxRwo3TFnZhaRcE0SZYepOwPwtnsXLu+KKw0eLEQQXBUH0E3eTRtptvBA19SUtEiUomKs6HxC+ythpPPUAGnt0=) 2025-06-22 19:28:45.480242 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJfqQpdt9uGhEsOunxEqrNlw509YIUj+3yl2GcXsVwkOy7Fof0eECFEZH2PBEQJnclSpy7evV+eLpjQUscwhlKQ=) 2025-06-22 19:28:45.481738 | orchestrator | 2025-06-22 19:28:45.483001 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-22 19:28:45.483761 | orchestrator | Sunday 22 June 2025 19:28:45 +0000 (0:00:01.232) 0:00:07.721 *********** 2025-06-22 19:28:46.547813 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOeEDyldXsk5Mq+hopVkVocSyOLlZsXpLOwNMGOyT8XOKd/TAepXdGpQyHrNv8BL3CNwETl0HQ+BPLDFq6iYqZ8=) 2025-06-22 19:28:46.548057 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINKpp9kwoKDAOrN3p03uVWi6O3ZUuj5XX2a8XiQSN88c) 2025-06-22 19:28:46.549417 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDgQXBqjiuuar5eyP8GcuyyL1Qp4aSNnqkS3FhDEduxI/cC0od5yWzEX40SbUV4hg94NbQ11bxW6DtoUDhUCiZ+vXCNBkSLBVnRoX0k5tnSdjYXK5rM7ozWgGTFEHSrOQsIjgR+05ccpYExgD1eo2Tfv17PbQ0V8owVBb3uxpxspdcTdILCirjOzBDBIt+6OiGz5uVzzVL10budZ1r/Zx0hj97yeaSE10XsnIo9TUeFzYIboB1PzBQduRWQQVZLPXxvPWxqxXbFz+X+amvaOMZaPC4lL5/tUC9pAhh79A0gPzWuQVBtpw8bq7z7isp9i8LFWhEI9eWvaijyr0EuHXaA+Lcc/p3Iq5G9EdiAChHWHLuAS0pjm3HS0TNioUYkid08LSyHACY5VehIPxPU8yZ16fm+1r5a64zJcjuBJrHMhtmmyxbNs7a9CCSpNXFt2JQV6kWKX8U/hCPfP4/zTLTrIi9w2hlMosnUYvF1U1Z8IM6IwlLDzzYW93bS5KvmwxU=) 2025-06-22 19:28:46.550779 | orchestrator | 2025-06-22 19:28:46.551152 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-22 19:28:46.552299 | orchestrator | Sunday 22 June 2025 19:28:46 +0000 (0:00:01.069) 0:00:08.791 *********** 2025-06-22 19:28:47.595700 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCnyiQaT6uAaIJAHH4YlXRQrIaZJT27WK0lBPIQ0ORJ0F+wasqb6CYEcOGzTMnOXxZrD2ASMoj2EOJSRArIbLxmsovATP1k+DjfieLbhu+xebLcifSIKzCz3APqzTupzE90dJ9G6nykKY1LYm2V3MyuziYrYhRU/HDmozSOZJZ9WyHPAhAqA5fVlw5tkr7uLhqZRzsFBKESx3S0XkCjnArOzCQq15EkhKutgfquU9+ZfIpdLju8CEL4ZJo4EkR1QVUhGIPsIwimUfHGpnC010BYfNj+0bspNSfwipz5+ZIzpZFSlV0ixFvpxcjK3I9sEpBzlSNJabq+7qydYWCxJOi0pJXGI9O8Yom0ip0w+RQ2vFAcaURkCULCsbDD/1wIvNeSd3lNCHrXUKHAwc55DHHkZO0BnCgDGEGQaY8TrjFtQUmNTrfgKWhdBIcTWWcDBev8B0vRxVOoiN+VGOsHrWzPIRS2m+3hi6VLyv736U7YuaME2p50wZ66JjrQJT3q2gU=) 2025-06-22 19:28:47.596309 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINzJdHx7tMYuAwv8cbgnr2BUjNsg8ih5lWIskv4Zr1yT) 2025-06-22 19:28:47.596653 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDpt09cnexlvjbvg200QkHDBWIlUHZ3wFMSjj4S5UQT/qoFggUjqeakZ4QJ6Oh0nj51jnUkPA2lg7vqkVxeO7u0=) 2025-06-22 
19:28:47.597832 | orchestrator | 2025-06-22 19:28:47.598890 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-22 19:28:47.599499 | orchestrator | Sunday 22 June 2025 19:28:47 +0000 (0:00:01.048) 0:00:09.840 *********** 2025-06-22 19:28:48.688449 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC8QTsmFJpjkmZ6lVZUmK6DzRIgprIlOnEkzWS2+jeVBJxP8qTZX9x7MaWe+ZcjyLW92Cll5KAVhdVHWOobntgVG8NttyO3ySwCojIp8GNQlrBhTCsfTIcKj9hWHkAZYMQAShYT0RYQ1vio7CBBTP5vMfJ290BSFKWHKx5Q+dTMChsgdXOzoyNlBQdSt7/gqnuk0wCJN/xEKCbppRlSCGfd2eKtAK2KK+8jz34BpeDnmfLc1sWPtJYcmJL4+YDXrqTaJIGl7lsEjVZH3cWj9xAPHJjmAGMfLXbNc0aXc3XsLUK5UAIRlZE7ke62l0ALaeHV71td69ORyOntMftFic6Nl/bQDmY+GVPeSjlmS218ZxqLZSHefikyKtROFSfsVecw7jtvd7gP0McklHNJlxrs5oU8moWvGOAY7nTuPab2O/qY+O8ZOsZVzCjor7Xe6/Au2nSb6jZ74OVayS94y5GdaQ3gtK0bd3LS1GUpid27vx/rk9o67uMlxnwo4Tqdo4k=) 2025-06-22 19:28:48.688603 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGT6OWTlu77To5aQj+Kqqpvyn1dE0tC830qiB5ims7x0l9pinyKUDttKBsq1r71vYr6Lqb6BIvPH9F8DRPF2Id8=) 2025-06-22 19:28:48.689210 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIL0ZcepqjZXVN9Yu8udiOQ8r4mlgPN/TuBMaWRwK7spg) 2025-06-22 19:28:48.690372 | orchestrator | 2025-06-22 19:28:48.691360 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-22 19:28:48.692433 | orchestrator | Sunday 22 June 2025 19:28:48 +0000 (0:00:01.092) 0:00:10.932 *********** 2025-06-22 19:28:49.634780 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGYEhhsVmzyCDMg85QcmzyfP1AOgYBw1pDvyYSnIdXbE) 2025-06-22 19:28:49.635507 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDvTrS2E0Jg1ox53FPloc8w11G7PWtQR7KGfcAO7h4faZy17OvFaCd1nxwkTxNkvQ9BEI/amGWf8QZ9cFO/NperX6qQF9X/CfAZjACCzZMRBLEdntdEpte+qIZnKtD5UXzrLYavAsvCcvedyZIEvN9XxZS7RQcZYTlKSv3J/qcK+kH9RRfVy43qum8YEJ3d6YPys7pDuqr6ey3WCcMX6bFNzGv36h20lv+nWlWRcK0gTqxcSXTe7JheeSEJiT7MI1mYxDtzpR0GZUCGRbivYeOT+RnEIB08p2iI/cxyY59RAWCl50hMld3gBJTy6e8EEN5PhHcBYfJwsXuzh1xbhdNw+OAaognn/I3WNjAOpWcLuaMcxDyyFQVJfXvymvFo89ZjlU1glBj/NEFCIQTdfFE9g2N/Wbijm23gM7CbyKZ36NvciZxzVifKL47EA6A/DLi8H8gs+IYCtH/xmxz6DzQtTPVVmCzw7UI+6K7ciSAI0GBge3jEZkao99wr6IggHHM=) 2025-06-22 19:28:49.636285 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFCCHmrzyitX42fmvYCV3uG50JaaoYiQaBa2YvolpKrDtV1BUdyIQcFCEYQjK3YtWwvCV76ma3H+UYKQutKDHmw=) 2025-06-22 19:28:49.637106 | orchestrator | 2025-06-22 19:28:49.637848 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-22 19:28:49.638428 | orchestrator | Sunday 22 June 2025 19:28:49 +0000 (0:00:00.947) 0:00:11.879 *********** 2025-06-22 19:28:50.583921 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPGEnvyAVjtf9crKNLiFVZUwljhMJVPk+N0tmAv7WUsTdDXa1KZAyGASMPDKqUXGmjjFXFxTVgXn7Wmr01twCFA=) 2025-06-22 19:28:50.584246 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFREVjaLWS491QN1BtwnMdYn/mKyzp82LN4vLigRTPjc) 2025-06-22 19:28:50.585723 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDTt+ouyN9b8tRSoX7SfumEa+pD7HlxXMzhD7GWY3o4JgcFRTw6POLYB0bNoUGbTf/TLSAccnGKcGlB8GJMUEDMtGMNDTtj6tuZBxjyH919JopB7tJnEC+in2XZCIDhfqTlehm/iE5FlSQpZ11/1NYfnDXkCPuZnPS0CZ451se2VjWCXjFYwQE1r0P1yVQRsnd7OdLmoqYM0m+pi2ADl2wxlrALI1XZ5ZOCCMN5hJmwo3Sirrx3PntL12bUEuDaDaU7CPZ+HaPRKVT0LTKvDPy5xtay7lYDBuEsk8E3j+vSt5Wu7xQwvmLnWQMP4nVGIhTrp9UYUvoL5vJD0w/5d6mobqj2TFS4XxE2+Ta29FEFoMLDVTctsJgXOd8rNp+5t3mQwn3e4LyRPEDbdvESQ6XsNkvpXgKO6zFEH6AC3Me7a9O2DTty82/E0wdaZ8vwIxC97tSXSDwhxz5PuGWwQxW4emLbFnsa42NppVo9Cyx4MB7F9F6NYswd8Hd417WlqNs=) 2025-06-22 19:28:50.586553 | orchestrator | 2025-06-22 19:28:50.587412 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-22 19:28:50.588528 | orchestrator | Sunday 22 June 2025 19:28:50 +0000 (0:00:00.949) 0:00:12.829 *********** 2025-06-22 19:28:51.559278 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHiGBxhj4Fs10/yKmKVxzh89lUrRUGRLgRkJTKrbQDzBilnubqn9uyq+EJT/vEyWL/S6qt9r88AB+lDg/NmXVmQ=) 2025-06-22 19:28:51.560345 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC1SsPIant0DAtATjfy2xLBUEF2PydFVBQX9IM7kTbYEi1yKayaE+8pCQM2+P72mrjEmiJtLKOhPDF7EQLlJmgtduP6Rs2yJ9geT0rLgfgWT1o7vzh5eF9nRJy+SiTvCfK9RJsvvZ/cMHCJ6gO7AeZNankXlhqgr0sbvy+4AmnXlkGkbUk48BCZvSza0c8P8VAR7r3Ec8qVeB2uZY/w6kfqo6k+B3ektDSzRSTqLDhD91ynyRGacGn+/J/szfyX20l2+YTQkQjkr0kbbTBYqqNTbu/LdbQsKD4g7kSXQyJyxa9uDzObi+CwRuteGyaUD8RKkgnFit1DtY/IWMoZ2H1Jk6xJ4GM9E7KQ1zBekjy8t9uhnj6X12Zvbjh5hIVC/jOctuU4z2YIDm/X+oMLGUPyX15kLXbojILEPP2CQ8+eF0blu5MPocOBZ5qaO6tmNJ++Ume94Or1kHCs8a63XhLBPWcQavvyEpPpCkXY7aShrRNMnV/MKjWXUT8X5AQulSs=) 2025-06-22 19:28:51.561316 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMU2nhBBuo6UAUSBUPto9X0LitqNll04CntZf+m/1CKk) 2025-06-22 19:28:51.561884 | orchestrator | 2025-06-22 19:28:51.562695 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2025-06-22 19:28:51.563220 | orchestrator | Sunday 22 June 2025 19:28:51 +0000 (0:00:00.975) 0:00:13.804 *********** 2025-06-22 19:28:56.591793 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-06-22 19:28:56.592163 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-06-22 19:28:56.592958 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-06-22 19:28:56.594171 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-06-22 19:28:56.595590 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-06-22 19:28:56.596171 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-06-22 19:28:56.597123 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-06-22 19:28:56.597499 | orchestrator | 2025-06-22 19:28:56.598256 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2025-06-22 19:28:56.598805 | orchestrator | Sunday 22 June 2025 19:28:56 +0000 (0:00:05.032) 0:00:18.836 *********** 2025-06-22 19:28:56.747607 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-06-22 19:28:56.749063 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for 
testbed-manager => (item=Scanned entries of testbed-node-3) 2025-06-22 19:28:56.749091 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-06-22 19:28:56.749998 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-06-22 19:28:56.750940 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-06-22 19:28:56.751541 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-06-22 19:28:56.752335 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-06-22 19:28:56.752962 | orchestrator | 2025-06-22 19:28:56.753546 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-22 19:28:56.754189 | orchestrator | Sunday 22 June 2025 19:28:56 +0000 (0:00:00.157) 0:00:18.993 *********** 2025-06-22 19:28:57.777894 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMrp8qIrGYq/IsDyaLjIlh7KAl1H3HB4v7L/aKSUZy2k) 2025-06-22 19:28:57.778449 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCgrBgXd3/cFBbXIo2wqTXxBjMeyw6qLkkDmWsE/DOzWPK/xndhvYB6gG4P5zrzavhMzP7C98BTNrD7T8+M6g4pOXhzsDOvFQzl21qLUeaH9+rVHYuWVbo4sVfsL98nBi7srpwoKskhUAmMT7hwIAgKPtRClmXrgls7Tap2Wu1RZ8ydZt7crtqNLrJzamGL/9fOalvMLWlP1DWBKOZ7yYCkxpcNbRl87GG+zt2zl8YhpORyeAB+jFAZOS7B2pQcMTHKXVN+dK+x8uGuTG7mlecbF1lYbXfMkki8C41jF1b9Qk5kS/I5x/UVta0/QmmaYYOvvytwTi1c16g2SFmwzHdrgjVpVEF2LvdEHTg4JqKIRVxrzI5vv8V4Mx+AgbPVz9/KdZ1DgGOgJgiFLUjjV1KXyTW/fcydXv+bgsqylinSNuxRwo3TFnZhaRcE0SZYepOwPwtnsXLu+KKw0eLEQQXBUH0E3eTRtptvBA19SUtEiUomKs6HxC+ythpPPUAGnt0=) 2025-06-22 19:28:57.779444 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJfqQpdt9uGhEsOunxEqrNlw509YIUj+3yl2GcXsVwkOy7Fof0eECFEZH2PBEQJnclSpy7evV+eLpjQUscwhlKQ=) 2025-06-22 19:28:57.780105 | orchestrator | 2025-06-22 19:28:57.780703 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-22 19:28:57.781410 | orchestrator | Sunday 22 June 2025 19:28:57 +0000 (0:00:01.028) 0:00:20.022 *********** 2025-06-22 19:28:58.725682 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINKpp9kwoKDAOrN3p03uVWi6O3ZUuj5XX2a8XiQSN88c) 2025-06-22 19:28:58.725849 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDgQXBqjiuuar5eyP8GcuyyL1Qp4aSNnqkS3FhDEduxI/cC0od5yWzEX40SbUV4hg94NbQ11bxW6DtoUDhUCiZ+vXCNBkSLBVnRoX0k5tnSdjYXK5rM7ozWgGTFEHSrOQsIjgR+05ccpYExgD1eo2Tfv17PbQ0V8owVBb3uxpxspdcTdILCirjOzBDBIt+6OiGz5uVzzVL10budZ1r/Zx0hj97yeaSE10XsnIo9TUeFzYIboB1PzBQduRWQQVZLPXxvPWxqxXbFz+X+amvaOMZaPC4lL5/tUC9pAhh79A0gPzWuQVBtpw8bq7z7isp9i8LFWhEI9eWvaijyr0EuHXaA+Lcc/p3Iq5G9EdiAChHWHLuAS0pjm3HS0TNioUYkid08LSyHACY5VehIPxPU8yZ16fm+1r5a64zJcjuBJrHMhtmmyxbNs7a9CCSpNXFt2JQV6kWKX8U/hCPfP4/zTLTrIi9w2hlMosnUYvF1U1Z8IM6IwlLDzzYW93bS5KvmwxU=) 2025-06-22 19:28:58.726927 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOeEDyldXsk5Mq+hopVkVocSyOLlZsXpLOwNMGOyT8XOKd/TAepXdGpQyHrNv8BL3CNwETl0HQ+BPLDFq6iYqZ8=) 2025-06-22 19:28:58.728016 | orchestrator | 2025-06-22 19:28:58.729074 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-22 19:28:58.729718 | orchestrator | Sunday 22 June 2025 19:28:58 +0000 (0:00:00.947) 0:00:20.970 *********** 2025-06-22 19:28:59.704879 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDpt09cnexlvjbvg200QkHDBWIlUHZ3wFMSjj4S5UQT/qoFggUjqeakZ4QJ6Oh0nj51jnUkPA2lg7vqkVxeO7u0=) 2025-06-22 19:28:59.707922 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCnyiQaT6uAaIJAHH4YlXRQrIaZJT27WK0lBPIQ0ORJ0F+wasqb6CYEcOGzTMnOXxZrD2ASMoj2EOJSRArIbLxmsovATP1k+DjfieLbhu+xebLcifSIKzCz3APqzTupzE90dJ9G6nykKY1LYm2V3MyuziYrYhRU/HDmozSOZJZ9WyHPAhAqA5fVlw5tkr7uLhqZRzsFBKESx3S0XkCjnArOzCQq15EkhKutgfquU9+ZfIpdLju8CEL4ZJo4EkR1QVUhGIPsIwimUfHGpnC010BYfNj+0bspNSfwipz5+ZIzpZFSlV0ixFvpxcjK3I9sEpBzlSNJabq+7qydYWCxJOi0pJXGI9O8Yom0ip0w+RQ2vFAcaURkCULCsbDD/1wIvNeSd3lNCHrXUKHAwc55DHHkZO0BnCgDGEGQaY8TrjFtQUmNTrfgKWhdBIcTWWcDBev8B0vRxVOoiN+VGOsHrWzPIRS2m+3hi6VLyv736U7YuaME2p50wZ66JjrQJT3q2gU=) 2025-06-22 19:28:59.708603 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINzJdHx7tMYuAwv8cbgnr2BUjNsg8ih5lWIskv4Zr1yT) 2025-06-22 19:28:59.709006 | orchestrator | 2025-06-22 19:28:59.709369 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-22 19:28:59.709881 | orchestrator | Sunday 22 June 2025 19:28:59 +0000 (0:00:00.979) 0:00:21.949 *********** 2025-06-22 19:29:00.681744 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC8QTsmFJpjkmZ6lVZUmK6DzRIgprIlOnEkzWS2+jeVBJxP8qTZX9x7MaWe+ZcjyLW92Cll5KAVhdVHWOobntgVG8NttyO3ySwCojIp8GNQlrBhTCsfTIcKj9hWHkAZYMQAShYT0RYQ1vio7CBBTP5vMfJ290BSFKWHKx5Q+dTMChsgdXOzoyNlBQdSt7/gqnuk0wCJN/xEKCbppRlSCGfd2eKtAK2KK+8jz34BpeDnmfLc1sWPtJYcmJL4+YDXrqTaJIGl7lsEjVZH3cWj9xAPHJjmAGMfLXbNc0aXc3XsLUK5UAIRlZE7ke62l0ALaeHV71td69ORyOntMftFic6Nl/bQDmY+GVPeSjlmS218ZxqLZSHefikyKtROFSfsVecw7jtvd7gP0McklHNJlxrs5oU8moWvGOAY7nTuPab2O/qY+O8ZOsZVzCjor7Xe6/Au2nSb6jZ74OVayS94y5GdaQ3gtK0bd3LS1GUpid27vx/rk9o67uMlxnwo4Tqdo4k=) 2025-06-22 19:29:00.682341 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGT6OWTlu77To5aQj+Kqqpvyn1dE0tC830qiB5ims7x0l9pinyKUDttKBsq1r71vYr6Lqb6BIvPH9F8DRPF2Id8=) 2025-06-22 19:29:00.682722 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIL0ZcepqjZXVN9Yu8udiOQ8r4mlgPN/TuBMaWRwK7spg) 2025-06-22 
19:29:00.683338 | orchestrator | 2025-06-22 19:29:00.683973 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-22 19:29:00.684580 | orchestrator | Sunday 22 June 2025 19:29:00 +0000 (0:00:00.977) 0:00:22.927 *********** 2025-06-22 19:29:01.732128 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDvTrS2E0Jg1ox53FPloc8w11G7PWtQR7KGfcAO7h4faZy17OvFaCd1nxwkTxNkvQ9BEI/amGWf8QZ9cFO/NperX6qQF9X/CfAZjACCzZMRBLEdntdEpte+qIZnKtD5UXzrLYavAsvCcvedyZIEvN9XxZS7RQcZYTlKSv3J/qcK+kH9RRfVy43qum8YEJ3d6YPys7pDuqr6ey3WCcMX6bFNzGv36h20lv+nWlWRcK0gTqxcSXTe7JheeSEJiT7MI1mYxDtzpR0GZUCGRbivYeOT+RnEIB08p2iI/cxyY59RAWCl50hMld3gBJTy6e8EEN5PhHcBYfJwsXuzh1xbhdNw+OAaognn/I3WNjAOpWcLuaMcxDyyFQVJfXvymvFo89ZjlU1glBj/NEFCIQTdfFE9g2N/Wbijm23gM7CbyKZ36NvciZxzVifKL47EA6A/DLi8H8gs+IYCtH/xmxz6DzQtTPVVmCzw7UI+6K7ciSAI0GBge3jEZkao99wr6IggHHM=) 2025-06-22 19:29:01.732579 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFCCHmrzyitX42fmvYCV3uG50JaaoYiQaBa2YvolpKrDtV1BUdyIQcFCEYQjK3YtWwvCV76ma3H+UYKQutKDHmw=) 2025-06-22 19:29:01.733859 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGYEhhsVmzyCDMg85QcmzyfP1AOgYBw1pDvyYSnIdXbE) 2025-06-22 19:29:01.734401 | orchestrator | 2025-06-22 19:29:01.735414 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-22 19:29:01.736012 | orchestrator | Sunday 22 June 2025 19:29:01 +0000 (0:00:01.047) 0:00:23.975 *********** 2025-06-22 19:29:02.771712 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDTt+ouyN9b8tRSoX7SfumEa+pD7HlxXMzhD7GWY3o4JgcFRTw6POLYB0bNoUGbTf/TLSAccnGKcGlB8GJMUEDMtGMNDTtj6tuZBxjyH919JopB7tJnEC+in2XZCIDhfqTlehm/iE5FlSQpZ11/1NYfnDXkCPuZnPS0CZ451se2VjWCXjFYwQE1r0P1yVQRsnd7OdLmoqYM0m+pi2ADl2wxlrALI1XZ5ZOCCMN5hJmwo3Sirrx3PntL12bUEuDaDaU7CPZ+HaPRKVT0LTKvDPy5xtay7lYDBuEsk8E3j+vSt5Wu7xQwvmLnWQMP4nVGIhTrp9UYUvoL5vJD0w/5d6mobqj2TFS4XxE2+Ta29FEFoMLDVTctsJgXOd8rNp+5t3mQwn3e4LyRPEDbdvESQ6XsNkvpXgKO6zFEH6AC3Me7a9O2DTty82/E0wdaZ8vwIxC97tSXSDwhxz5PuGWwQxW4emLbFnsa42NppVo9Cyx4MB7F9F6NYswd8Hd417WlqNs=) 2025-06-22 19:29:02.772251 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPGEnvyAVjtf9crKNLiFVZUwljhMJVPk+N0tmAv7WUsTdDXa1KZAyGASMPDKqUXGmjjFXFxTVgXn7Wmr01twCFA=) 2025-06-22 19:29:02.773267 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFREVjaLWS491QN1BtwnMdYn/mKyzp82LN4vLigRTPjc) 2025-06-22 19:29:02.774173 | orchestrator | 2025-06-22 19:29:02.775237 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-06-22 19:29:02.775742 | orchestrator | Sunday 22 June 2025 19:29:02 +0000 (0:00:01.041) 0:00:25.016 *********** 2025-06-22 19:29:03.833238 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC1SsPIant0DAtATjfy2xLBUEF2PydFVBQX9IM7kTbYEi1yKayaE+8pCQM2+P72mrjEmiJtLKOhPDF7EQLlJmgtduP6Rs2yJ9geT0rLgfgWT1o7vzh5eF9nRJy+SiTvCfK9RJsvvZ/cMHCJ6gO7AeZNankXlhqgr0sbvy+4AmnXlkGkbUk48BCZvSza0c8P8VAR7r3Ec8qVeB2uZY/w6kfqo6k+B3ektDSzRSTqLDhD91ynyRGacGn+/J/szfyX20l2+YTQkQjkr0kbbTBYqqNTbu/LdbQsKD4g7kSXQyJyxa9uDzObi+CwRuteGyaUD8RKkgnFit1DtY/IWMoZ2H1Jk6xJ4GM9E7KQ1zBekjy8t9uhnj6X12Zvbjh5hIVC/jOctuU4z2YIDm/X+oMLGUPyX15kLXbojILEPP2CQ8+eF0blu5MPocOBZ5qaO6tmNJ++Ume94Or1kHCs8a63XhLBPWcQavvyEpPpCkXY7aShrRNMnV/MKjWXUT8X5AQulSs=) 2025-06-22 19:29:03.834991 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHiGBxhj4Fs10/yKmKVxzh89lUrRUGRLgRkJTKrbQDzBilnubqn9uyq+EJT/vEyWL/S6qt9r88AB+lDg/NmXVmQ=) 2025-06-22 19:29:03.835765 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMU2nhBBuo6UAUSBUPto9X0LitqNll04CntZf+m/1CKk) 2025-06-22 19:29:03.837020 | orchestrator | 2025-06-22 19:29:03.837989 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2025-06-22 19:29:03.838857 | orchestrator | Sunday 22 June 2025 19:29:03 +0000 (0:00:01.060) 0:00:26.077 *********** 2025-06-22 19:29:03.988033 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-06-22 19:29:03.988117 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-06-22 19:29:03.989166 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-06-22 19:29:03.989761 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-06-22 19:29:03.990801 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-06-22 19:29:03.991058 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-06-22 19:29:03.992191 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-06-22 19:29:03.993290 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:29:03.994332 | orchestrator | 2025-06-22 19:29:03.995090 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2025-06-22 19:29:03.995950 | orchestrator | Sunday 22 June 2025 19:29:03 +0000 (0:00:00.157) 0:00:26.234 *********** 2025-06-22 19:29:04.057219 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:29:04.058772 | orchestrator | 2025-06-22 19:29:04.058841 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2025-06-22 19:29:04.059305 | orchestrator | Sunday 22 June 2025 19:29:04 +0000 (0:00:00.068) 0:00:26.302 *********** 2025-06-22 19:29:04.120728 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:29:04.122794 | orchestrator | 2025-06-22 19:29:04.123477 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2025-06-22 19:29:04.125201 | orchestrator | Sunday 22 June 2025 19:29:04 +0000 (0:00:00.063) 0:00:26.366 *********** 2025-06-22 19:29:04.612594 | orchestrator | changed: [testbed-manager] 2025-06-22 19:29:04.612770 | orchestrator | 2025-06-22 19:29:04.614660 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:29:04.614837 | orchestrator | 2025-06-22 19:29:04 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-22 19:29:04.614860 | orchestrator | 2025-06-22 19:29:04 | INFO  | Please wait and do not abort execution. 
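The known_hosts play above scans every testbed host twice, once by hostname and once by ansible_host IP, then appends the collected keys and fixes permissions. A minimal sketch of the same flow in plain shell; the target file and the permission mode are assumptions.

for host in testbed-manager testbed-node-{0..5}; do
    ssh-keyscan "$host" >> ~/.ssh/known_hosts    # "Run ssh-keyscan for all hosts with hostname"
done
for ip in 192.168.16.5 192.168.16.1{0..5}; do
    ssh-keyscan "$ip" >> ~/.ssh/known_hosts      # "Run ssh-keyscan for all hosts with ansible_host"
done
chmod 0644 ~/.ssh/known_hosts                    # "Set file permissions" (assumed mode)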
2025-06-22 19:29:04.614932 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-22 19:29:04.616864 | orchestrator | 2025-06-22 19:29:04.617437 | orchestrator | 2025-06-22 19:29:04.619695 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:29:04.619796 | orchestrator | Sunday 22 June 2025 19:29:04 +0000 (0:00:00.489) 0:00:26.855 *********** 2025-06-22 19:29:04.620445 | orchestrator | =============================================================================== 2025-06-22 19:29:04.623265 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.14s 2025-06-22 19:29:04.623339 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.03s 2025-06-22 19:29:04.623990 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.23s 2025-06-22 19:29:04.624257 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2025-06-22 19:29:04.624588 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2025-06-22 19:29:04.625484 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2025-06-22 19:29:04.626372 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2025-06-22 19:29:04.626679 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2025-06-22 19:29:04.627106 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2025-06-22 19:29:04.627647 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2025-06-22 19:29:04.628117 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.98s 2025-06-22 19:29:04.631075 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.98s 2025-06-22 19:29:04.633231 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.98s 2025-06-22 19:29:04.634186 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.95s 2025-06-22 19:29:04.634742 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.95s 2025-06-22 19:29:04.634882 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.95s 2025-06-22 19:29:04.636110 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.49s 2025-06-22 19:29:04.636363 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.18s 2025-06-22 19:29:04.637099 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.16s 2025-06-22 19:29:04.637651 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.16s 2025-06-22 19:29:05.076732 | orchestrator | + osism apply squid 2025-06-22 19:29:06.728611 | orchestrator | Registering Redlock._acquired_script 2025-06-22 19:29:06.728701 | orchestrator | Registering Redlock._extend_script 2025-06-22 19:29:06.728734 | orchestrator | Registering Redlock._release_script 2025-06-22 19:29:06.789257 | orchestrator | 2025-06-22 19:29:06 | INFO  | Task ba87ed82-d8fb-49a1-9ae3-722f2aedfe08 (squid) was 
prepared for execution. 2025-06-22 19:29:06.789341 | orchestrator | 2025-06-22 19:29:06 | INFO  | It takes a moment until task ba87ed82-d8fb-49a1-9ae3-722f2aedfe08 (squid) has been started and output is visible here. 2025-06-22 19:29:10.730756 | orchestrator | 2025-06-22 19:29:10.731383 | orchestrator | PLAY [Apply role squid] ******************************************************** 2025-06-22 19:29:10.732657 | orchestrator | 2025-06-22 19:29:10.733513 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2025-06-22 19:29:10.735157 | orchestrator | Sunday 22 June 2025 19:29:10 +0000 (0:00:00.166) 0:00:00.166 *********** 2025-06-22 19:29:10.813032 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2025-06-22 19:29:10.814125 | orchestrator | 2025-06-22 19:29:10.815247 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2025-06-22 19:29:10.816555 | orchestrator | Sunday 22 June 2025 19:29:10 +0000 (0:00:00.086) 0:00:00.252 *********** 2025-06-22 19:29:12.144379 | orchestrator | ok: [testbed-manager] 2025-06-22 19:29:12.146073 | orchestrator | 2025-06-22 19:29:12.146480 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2025-06-22 19:29:12.147362 | orchestrator | Sunday 22 June 2025 19:29:12 +0000 (0:00:01.331) 0:00:01.583 *********** 2025-06-22 19:29:13.328750 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2025-06-22 19:29:13.329584 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2025-06-22 19:29:13.330858 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2025-06-22 19:29:13.331990 | orchestrator | 2025-06-22 19:29:13.332947 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2025-06-22 19:29:13.333907 | orchestrator | Sunday 22 June 2025 19:29:13 +0000 (0:00:01.182) 0:00:02.766 *********** 2025-06-22 19:29:14.436641 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2025-06-22 19:29:14.437080 | orchestrator | 2025-06-22 19:29:14.438590 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2025-06-22 19:29:14.439732 | orchestrator | Sunday 22 June 2025 19:29:14 +0000 (0:00:01.108) 0:00:03.874 *********** 2025-06-22 19:29:14.808744 | orchestrator | ok: [testbed-manager] 2025-06-22 19:29:14.808858 | orchestrator | 2025-06-22 19:29:14.809665 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2025-06-22 19:29:14.810500 | orchestrator | Sunday 22 June 2025 19:29:14 +0000 (0:00:00.372) 0:00:04.246 *********** 2025-06-22 19:29:15.745969 | orchestrator | changed: [testbed-manager] 2025-06-22 19:29:15.746876 | orchestrator | 2025-06-22 19:29:15.746934 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2025-06-22 19:29:15.747079 | orchestrator | Sunday 22 June 2025 19:29:15 +0000 (0:00:00.935) 0:00:05.181 *********** 2025-06-22 19:29:47.785974 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2025-06-22 19:29:47.786156 | orchestrator | ok: [testbed-manager] 2025-06-22 19:29:47.786787 | orchestrator | 2025-06-22 19:29:47.788090 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2025-06-22 19:29:47.789685 | orchestrator | Sunday 22 June 2025 19:29:47 +0000 (0:00:32.039) 0:00:37.221 *********** 2025-06-22 19:30:00.234297 | orchestrator | changed: [testbed-manager] 2025-06-22 19:30:00.234415 | orchestrator | 2025-06-22 19:30:00.234433 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2025-06-22 19:30:00.234446 | orchestrator | Sunday 22 June 2025 19:30:00 +0000 (0:00:12.444) 0:00:49.666 *********** 2025-06-22 19:31:00.306834 | orchestrator | Pausing for 60 seconds 2025-06-22 19:31:00.306943 | orchestrator | changed: [testbed-manager] 2025-06-22 19:31:00.306958 | orchestrator | 2025-06-22 19:31:00.306970 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2025-06-22 19:31:00.306983 | orchestrator | Sunday 22 June 2025 19:31:00 +0000 (0:01:00.071) 0:01:49.738 *********** 2025-06-22 19:31:00.365049 | orchestrator | ok: [testbed-manager] 2025-06-22 19:31:00.366847 | orchestrator | 2025-06-22 19:31:00.366885 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2025-06-22 19:31:00.368706 | orchestrator | Sunday 22 June 2025 19:31:00 +0000 (0:00:00.066) 0:01:49.804 *********** 2025-06-22 19:31:01.015715 | orchestrator | changed: [testbed-manager] 2025-06-22 19:31:01.016869 | orchestrator | 2025-06-22 19:31:01.017045 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:31:01.017484 | orchestrator | 2025-06-22 19:31:01 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-22 19:31:01.017552 | orchestrator | 2025-06-22 19:31:01 | INFO  | Please wait and do not abort execution. 
2025-06-22 19:31:01.018318 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:31:01.019219 | orchestrator | 2025-06-22 19:31:01.019818 | orchestrator | 2025-06-22 19:31:01.020209 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:31:01.021178 | orchestrator | Sunday 22 June 2025 19:31:01 +0000 (0:00:00.650) 0:01:50.454 *********** 2025-06-22 19:31:01.021899 | orchestrator | =============================================================================== 2025-06-22 19:31:01.022387 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.07s 2025-06-22 19:31:01.023037 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 32.04s 2025-06-22 19:31:01.023350 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.44s 2025-06-22 19:31:01.024304 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.33s 2025-06-22 19:31:01.024695 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.18s 2025-06-22 19:31:01.025164 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.11s 2025-06-22 19:31:01.025638 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.94s 2025-06-22 19:31:01.026098 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.65s 2025-06-22 19:31:01.027153 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.37s 2025-06-22 19:31:01.027948 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.09s 2025-06-22 19:31:01.028558 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.07s 2025-06-22 19:31:01.482152 | orchestrator | + [[ 9.1.0 != \l\a\t\e\s\t ]] 2025-06-22 19:31:01.482219 | orchestrator | + sed -i 's#docker_namespace: kolla#docker_namespace: kolla/release#' /opt/configuration/inventory/group_vars/all/kolla.yml 2025-06-22 19:31:01.487838 | orchestrator | ++ semver 9.1.0 9.0.0 2025-06-22 19:31:01.557041 | orchestrator | + [[ 1 -lt 0 ]] 2025-06-22 19:31:01.557563 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2025-06-22 19:31:03.194257 | orchestrator | Registering Redlock._acquired_script 2025-06-22 19:31:03.194374 | orchestrator | Registering Redlock._extend_script 2025-06-22 19:31:03.194389 | orchestrator | Registering Redlock._release_script 2025-06-22 19:31:03.247886 | orchestrator | 2025-06-22 19:31:03 | INFO  | Task 3046a16f-5e59-4831-a728-85fc6167e8df (operator) was prepared for execution. 2025-06-22 19:31:03.247978 | orchestrator | 2025-06-22 19:31:03 | INFO  | It takes a moment until task 3046a16f-5e59-4831-a728-85fc6167e8df (operator) has been started and output is visible here. 
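The semver helper invoked in the shell trace above (semver 9.1.0 7.0.0, semver 9.1.0 9.0.0) apparently prints -1, 0 or 1 depending on how the first version compares to the second; its implementation is not shown in the log. A hedged approximation using sort -V:

semver() {
    if [[ "$1" == "$2" ]]; then
        echo 0
    elif [[ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" == "$1" ]]; then
        echo -1    # first version sorts lower, so it is older
    else
        echo 1     # first version sorts higher, so it is newer
    fi
}

[[ $(semver 9.1.0 9.0.0) -lt 0 ]] || echo "9.1.0 is not older than 9.0.0"   # matches the branch taken above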
2025-06-22 19:31:07.193086 | orchestrator | 2025-06-22 19:31:07.199892 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2025-06-22 19:31:07.201307 | orchestrator | 2025-06-22 19:31:07.202699 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-22 19:31:07.205681 | orchestrator | Sunday 22 June 2025 19:31:07 +0000 (0:00:00.149) 0:00:00.149 *********** 2025-06-22 19:31:10.417279 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:31:10.417374 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:31:10.417382 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:31:10.418194 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:31:10.419735 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:31:10.421043 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:31:10.423803 | orchestrator | 2025-06-22 19:31:10.423862 | orchestrator | TASK [Do not require tty for all users] **************************************** 2025-06-22 19:31:10.423877 | orchestrator | Sunday 22 June 2025 19:31:10 +0000 (0:00:03.228) 0:00:03.378 *********** 2025-06-22 19:31:11.294082 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:31:11.294556 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:31:11.296143 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:31:11.296935 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:31:11.297567 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:31:11.298327 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:31:11.298977 | orchestrator | 2025-06-22 19:31:11.299448 | orchestrator | PLAY [Apply role operator] ***************************************************** 2025-06-22 19:31:11.299775 | orchestrator | 2025-06-22 19:31:11.300889 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-06-22 19:31:11.301187 | orchestrator | Sunday 22 June 2025 19:31:11 +0000 (0:00:00.875) 0:00:04.253 *********** 2025-06-22 19:31:11.371447 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:31:11.392863 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:31:11.418074 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:31:11.458195 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:31:11.459717 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:31:11.460860 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:31:11.462143 | orchestrator | 2025-06-22 19:31:11.463268 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-06-22 19:31:11.464217 | orchestrator | Sunday 22 June 2025 19:31:11 +0000 (0:00:00.164) 0:00:04.418 *********** 2025-06-22 19:31:11.523274 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:31:11.550455 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:31:11.580205 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:31:11.655112 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:31:11.656172 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:31:11.658854 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:31:11.658888 | orchestrator | 2025-06-22 19:31:11.658902 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-06-22 19:31:11.659305 | orchestrator | Sunday 22 June 2025 19:31:11 +0000 (0:00:00.197) 0:00:04.615 *********** 2025-06-22 19:31:12.259378 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:31:12.259509 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:31:12.259680 | orchestrator | changed: [testbed-node-4] 2025-06-22 
19:31:12.263481 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:31:12.263533 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:31:12.263595 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:31:12.263653 | orchestrator | 2025-06-22 19:31:12.263990 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-06-22 19:31:12.264184 | orchestrator | Sunday 22 June 2025 19:31:12 +0000 (0:00:00.604) 0:00:05.219 *********** 2025-06-22 19:31:13.003350 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:31:13.004422 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:31:13.004458 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:31:13.005875 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:31:13.006194 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:31:13.007206 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:31:13.007623 | orchestrator | 2025-06-22 19:31:13.008709 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-06-22 19:31:13.009334 | orchestrator | Sunday 22 June 2025 19:31:12 +0000 (0:00:00.742) 0:00:05.962 *********** 2025-06-22 19:31:14.164212 | orchestrator | changed: [testbed-node-0] => (item=adm) 2025-06-22 19:31:14.165798 | orchestrator | changed: [testbed-node-1] => (item=adm) 2025-06-22 19:31:14.167379 | orchestrator | changed: [testbed-node-2] => (item=adm) 2025-06-22 19:31:14.168458 | orchestrator | changed: [testbed-node-3] => (item=adm) 2025-06-22 19:31:14.168731 | orchestrator | changed: [testbed-node-4] => (item=adm) 2025-06-22 19:31:14.170744 | orchestrator | changed: [testbed-node-5] => (item=adm) 2025-06-22 19:31:14.170839 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2025-06-22 19:31:14.170860 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2025-06-22 19:31:14.172406 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2025-06-22 19:31:14.172752 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2025-06-22 19:31:14.173321 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2025-06-22 19:31:14.173445 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2025-06-22 19:31:14.173959 | orchestrator | 2025-06-22 19:31:14.174815 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-06-22 19:31:14.175052 | orchestrator | Sunday 22 June 2025 19:31:14 +0000 (0:00:01.157) 0:00:07.119 *********** 2025-06-22 19:31:15.407285 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:31:15.407503 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:31:15.409257 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:31:15.410268 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:31:15.411335 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:31:15.412337 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:31:15.415046 | orchestrator | 2025-06-22 19:31:15.415081 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-06-22 19:31:15.415094 | orchestrator | Sunday 22 June 2025 19:31:15 +0000 (0:00:01.245) 0:00:08.365 *********** 2025-06-22 19:31:16.605750 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2025-06-22 19:31:16.607432 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To 2025-06-22 19:31:16.608743 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2025-06-22 19:31:16.695684 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2025-06-22 19:31:16.697503 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2025-06-22 19:31:16.698429 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2025-06-22 19:31:16.699839 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2025-06-22 19:31:16.700983 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 2025-06-22 19:31:16.701879 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2025-06-22 19:31:16.702501 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2025-06-22 19:31:16.703529 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2025-06-22 19:31:16.704600 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2025-06-22 19:31:16.705540 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2025-06-22 19:31:16.707635 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2025-06-22 19:31:16.708082 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2025-06-22 19:31:16.708972 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2025-06-22 19:31:16.709737 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2025-06-22 19:31:16.710311 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2025-06-22 19:31:16.711161 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2025-06-22 19:31:16.711981 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2025-06-22 19:31:16.712986 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2025-06-22 19:31:16.714171 | orchestrator | 2025-06-22 19:31:16.714969 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-06-22 19:31:16.715250 | orchestrator | Sunday 22 June 2025 19:31:16 +0000 (0:00:01.290) 0:00:09.655 *********** 2025-06-22 19:31:17.255070 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:31:17.255915 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:31:17.256552 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:31:17.257838 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:31:17.259082 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:31:17.259820 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:31:17.260441 | orchestrator | 2025-06-22 19:31:17.261159 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-06-22 19:31:17.261836 | orchestrator | Sunday 22 June 2025 19:31:17 +0000 (0:00:00.559) 0:00:10.215 *********** 2025-06-22 19:31:17.339060 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:31:17.364962 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:31:17.405470 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:31:17.406608 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:31:17.407419 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:31:17.408202 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:31:17.409126 | orchestrator | 2025-06-22 19:31:17.410102 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 
2025-06-22 19:31:17.410863 | orchestrator | Sunday 22 June 2025 19:31:17 +0000 (0:00:00.150) 0:00:10.366 *********** 2025-06-22 19:31:18.106093 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-22 19:31:18.106828 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:31:18.107719 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-06-22 19:31:18.108830 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:31:18.110086 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-06-22 19:31:18.110852 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:31:18.112110 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-06-22 19:31:18.115785 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-06-22 19:31:18.116203 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:31:18.116614 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:31:18.116920 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-06-22 19:31:18.117273 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:31:18.117548 | orchestrator | 2025-06-22 19:31:18.117942 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-06-22 19:31:18.119873 | orchestrator | Sunday 22 June 2025 19:31:18 +0000 (0:00:00.698) 0:00:11.064 *********** 2025-06-22 19:31:18.157351 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:31:18.177262 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:31:18.198870 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:31:18.247198 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:31:18.248484 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:31:18.250243 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:31:18.251727 | orchestrator | 2025-06-22 19:31:18.253264 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-06-22 19:31:18.254222 | orchestrator | Sunday 22 June 2025 19:31:18 +0000 (0:00:00.142) 0:00:11.207 *********** 2025-06-22 19:31:18.292375 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:31:18.310864 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:31:18.353405 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:31:18.389169 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:31:18.390508 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:31:18.391866 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:31:18.392925 | orchestrator | 2025-06-22 19:31:18.393653 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-06-22 19:31:18.394368 | orchestrator | Sunday 22 June 2025 19:31:18 +0000 (0:00:00.141) 0:00:11.348 *********** 2025-06-22 19:31:18.453542 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:31:18.475012 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:31:18.499456 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:31:18.560733 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:31:18.561640 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:31:18.562987 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:31:18.563492 | orchestrator | 2025-06-22 19:31:18.564906 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-06-22 19:31:18.565546 | orchestrator | Sunday 22 June 2025 19:31:18 +0000 (0:00:00.171) 0:00:11.520 *********** 2025-06-22 19:31:19.255426 | orchestrator | changed: [testbed-node-0] 2025-06-22 
19:31:19.256001 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:31:19.257769 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:31:19.258724 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:31:19.260125 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:31:19.261520 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:31:19.262494 | orchestrator | 2025-06-22 19:31:19.263764 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-06-22 19:31:19.264483 | orchestrator | Sunday 22 June 2025 19:31:19 +0000 (0:00:00.692) 0:00:12.212 *********** 2025-06-22 19:31:19.341134 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:31:19.362651 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:31:19.483809 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:31:19.484676 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:31:19.485787 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:31:19.486838 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:31:19.488072 | orchestrator | 2025-06-22 19:31:19.489670 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:31:19.489777 | orchestrator | 2025-06-22 19:31:19 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-22 19:31:19.489796 | orchestrator | 2025-06-22 19:31:19 | INFO  | Please wait and do not abort execution. 2025-06-22 19:31:19.490593 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-22 19:31:19.491648 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-22 19:31:19.492474 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-22 19:31:19.493815 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-22 19:31:19.494643 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-22 19:31:19.495288 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-22 19:31:19.496225 | orchestrator | 2025-06-22 19:31:19.497733 | orchestrator | 2025-06-22 19:31:19.500715 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:31:19.501831 | orchestrator | Sunday 22 June 2025 19:31:19 +0000 (0:00:00.231) 0:00:12.444 *********** 2025-06-22 19:31:19.503766 | orchestrator | =============================================================================== 2025-06-22 19:31:19.505986 | orchestrator | Gathering Facts --------------------------------------------------------- 3.23s 2025-06-22 19:31:19.506959 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.29s 2025-06-22 19:31:19.507981 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.25s 2025-06-22 19:31:19.508725 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.16s 2025-06-22 19:31:19.510081 | orchestrator | Do not require tty for all users ---------------------------------------- 0.88s 2025-06-22 19:31:19.510689 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.74s 2025-06-22 19:31:19.511507 | orchestrator | 
osism.commons.operator : Set ssh authorized keys ------------------------ 0.70s 2025-06-22 19:31:19.512530 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.69s 2025-06-22 19:31:19.513251 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.60s 2025-06-22 19:31:19.513933 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.56s 2025-06-22 19:31:19.514873 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.23s 2025-06-22 19:31:19.515680 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.20s 2025-06-22 19:31:19.516240 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.17s 2025-06-22 19:31:19.516695 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.16s 2025-06-22 19:31:19.517814 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.15s 2025-06-22 19:31:19.517943 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.14s 2025-06-22 19:31:19.518849 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.14s 2025-06-22 19:31:19.961724 | orchestrator | + osism apply --environment custom facts 2025-06-22 19:31:21.808878 | orchestrator | 2025-06-22 19:31:21 | INFO  | Trying to run play facts in environment custom 2025-06-22 19:31:21.813615 | orchestrator | Registering Redlock._acquired_script 2025-06-22 19:31:21.813700 | orchestrator | Registering Redlock._extend_script 2025-06-22 19:31:21.813716 | orchestrator | Registering Redlock._release_script 2025-06-22 19:31:21.884541 | orchestrator | 2025-06-22 19:31:21 | INFO  | Task 009de160-1f67-45e6-9628-bc34452e5730 (facts) was prepared for execution. 2025-06-22 19:31:21.884676 | orchestrator | 2025-06-22 19:31:21 | INFO  | It takes a moment until task 009de160-1f67-45e6-9628-bc34452e5730 (facts) has been started and output is visible here. 
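osism apply --environment custom facts runs the testbed's custom facts play from the custom environment; as the output below shows, it creates a custom facts directory on every host and copies fact files (for example the testbed_ceph_* device facts on testbed-node-3/4/5) that later plays can read back as ansible_local facts. A minimal sketch of the underlying Ansible local-facts mechanism, using Ansible's default fact path and a purely hypothetical fact name and contents (the real files copied by this play are not shown in the log):

    # Hypothetical example of an Ansible local fact. Any JSON or INI file named
    # *.fact under the fact path (default /etc/ansible/facts.d) is read on the
    # next fact gathering and exposed as ansible_local.<basename>.
    sudo mkdir -p /etc/ansible/facts.d
    printf '{"devices": ["sdb", "sdc"]}\n' | \
        sudo tee /etc/ansible/facts.d/testbed_example.fact
    # Afterwards the value is available to playbooks as
    # ansible_local.testbed_example.devices; from a host with a configured
    # inventory (assumption) it can be checked with, e.g.:
    #   ansible testbed-node-3 -m setup -a 'filter=ansible_local'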
2025-06-22 19:31:25.843798 | orchestrator | 2025-06-22 19:31:25.848018 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2025-06-22 19:31:25.848412 | orchestrator | 2025-06-22 19:31:25.849933 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-06-22 19:31:25.850917 | orchestrator | Sunday 22 June 2025 19:31:25 +0000 (0:00:00.089) 0:00:00.089 *********** 2025-06-22 19:31:27.263232 | orchestrator | ok: [testbed-manager] 2025-06-22 19:31:27.264429 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:31:27.264774 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:31:27.265917 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:31:27.266461 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:31:27.267370 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:31:27.267783 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:31:27.268315 | orchestrator | 2025-06-22 19:31:27.269080 | orchestrator | TASK [Copy fact file] ********************************************************** 2025-06-22 19:31:27.269478 | orchestrator | Sunday 22 June 2025 19:31:27 +0000 (0:00:01.420) 0:00:01.510 *********** 2025-06-22 19:31:28.443969 | orchestrator | ok: [testbed-manager] 2025-06-22 19:31:28.444080 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:31:28.444124 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:31:28.444347 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:31:28.444367 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:31:28.445272 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:31:28.445751 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:31:28.446760 | orchestrator | 2025-06-22 19:31:28.447373 | orchestrator | PLAY [Copy custom ceph devices facts] ****************************************** 2025-06-22 19:31:28.448020 | orchestrator | 2025-06-22 19:31:28.448457 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-06-22 19:31:28.449142 | orchestrator | Sunday 22 June 2025 19:31:28 +0000 (0:00:01.178) 0:00:02.689 *********** 2025-06-22 19:31:28.581232 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:31:28.581388 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:31:28.581970 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:31:28.584074 | orchestrator | 2025-06-22 19:31:28.584121 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-06-22 19:31:28.584180 | orchestrator | Sunday 22 June 2025 19:31:28 +0000 (0:00:00.140) 0:00:02.830 *********** 2025-06-22 19:31:28.793651 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:31:28.794506 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:31:28.798307 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:31:28.798625 | orchestrator | 2025-06-22 19:31:28.799742 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-06-22 19:31:28.800554 | orchestrator | Sunday 22 June 2025 19:31:28 +0000 (0:00:00.212) 0:00:03.043 *********** 2025-06-22 19:31:28.991925 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:31:28.995190 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:31:28.995245 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:31:28.995257 | orchestrator | 2025-06-22 19:31:28.995822 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-06-22 19:31:28.996512 | orchestrator | Sunday 22 
June 2025 19:31:28 +0000 (0:00:00.196) 0:00:03.240 *********** 2025-06-22 19:31:29.136549 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 19:31:29.140217 | orchestrator | 2025-06-22 19:31:29.141122 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-06-22 19:31:29.141417 | orchestrator | Sunday 22 June 2025 19:31:29 +0000 (0:00:00.144) 0:00:03.384 *********** 2025-06-22 19:31:29.572898 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:31:29.573029 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:31:29.575754 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:31:29.575902 | orchestrator | 2025-06-22 19:31:29.576296 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-06-22 19:31:29.576774 | orchestrator | Sunday 22 June 2025 19:31:29 +0000 (0:00:00.438) 0:00:03.822 *********** 2025-06-22 19:31:29.686844 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:31:29.687005 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:31:29.687791 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:31:29.688424 | orchestrator | 2025-06-22 19:31:29.688880 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-06-22 19:31:29.689348 | orchestrator | Sunday 22 June 2025 19:31:29 +0000 (0:00:00.112) 0:00:03.935 *********** 2025-06-22 19:31:30.749490 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:31:30.749720 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:31:30.749743 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:31:30.750161 | orchestrator | 2025-06-22 19:31:30.751087 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-06-22 19:31:30.751806 | orchestrator | Sunday 22 June 2025 19:31:30 +0000 (0:00:01.061) 0:00:04.997 *********** 2025-06-22 19:31:31.207384 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:31:31.209195 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:31:31.210279 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:31:31.211870 | orchestrator | 2025-06-22 19:31:31.213127 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-06-22 19:31:31.214320 | orchestrator | Sunday 22 June 2025 19:31:31 +0000 (0:00:00.457) 0:00:05.454 *********** 2025-06-22 19:31:32.294625 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:31:32.294883 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:31:32.295925 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:31:32.296489 | orchestrator | 2025-06-22 19:31:32.297313 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-06-22 19:31:32.298234 | orchestrator | Sunday 22 June 2025 19:31:32 +0000 (0:00:01.084) 0:00:06.539 *********** 2025-06-22 19:31:45.925878 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:31:45.926080 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:31:45.926104 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:31:45.927910 | orchestrator | 2025-06-22 19:31:45.929504 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2025-06-22 19:31:45.929555 | orchestrator | Sunday 22 June 2025 19:31:45 +0000 (0:00:13.631) 0:00:20.171 *********** 2025-06-22 19:31:46.049225 | orchestrator | 
skipping: [testbed-node-3] 2025-06-22 19:31:46.050127 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:31:46.050642 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:31:46.051448 | orchestrator | 2025-06-22 19:31:46.052382 | orchestrator | TASK [Install required packages (Debian)] ************************************** 2025-06-22 19:31:46.052929 | orchestrator | Sunday 22 June 2025 19:31:46 +0000 (0:00:00.123) 0:00:20.294 *********** 2025-06-22 19:31:53.486622 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:31:53.489638 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:31:53.490291 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:31:53.491179 | orchestrator | 2025-06-22 19:31:53.491804 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-06-22 19:31:53.492878 | orchestrator | Sunday 22 June 2025 19:31:53 +0000 (0:00:07.437) 0:00:27.732 *********** 2025-06-22 19:31:53.929498 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:31:53.930179 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:31:53.931496 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:31:53.932511 | orchestrator | 2025-06-22 19:31:53.933896 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-06-22 19:31:53.934736 | orchestrator | Sunday 22 June 2025 19:31:53 +0000 (0:00:00.446) 0:00:28.179 *********** 2025-06-22 19:31:57.476432 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices) 2025-06-22 19:31:57.476654 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices) 2025-06-22 19:31:57.476674 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices) 2025-06-22 19:31:57.476755 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all) 2025-06-22 19:31:57.478579 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all) 2025-06-22 19:31:57.478987 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all) 2025-06-22 19:31:57.480034 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices) 2025-06-22 19:31:57.480828 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices) 2025-06-22 19:31:57.481546 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices) 2025-06-22 19:31:57.482416 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all) 2025-06-22 19:31:57.482555 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all) 2025-06-22 19:31:57.483482 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all) 2025-06-22 19:31:57.484015 | orchestrator | 2025-06-22 19:31:57.484399 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-06-22 19:31:57.484736 | orchestrator | Sunday 22 June 2025 19:31:57 +0000 (0:00:03.543) 0:00:31.722 *********** 2025-06-22 19:31:58.745629 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:31:58.745915 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:31:58.747067 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:31:58.748792 | orchestrator | 2025-06-22 19:31:58.749703 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-06-22 19:31:58.750554 | orchestrator | 2025-06-22 19:31:58.751986 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-06-22 19:31:58.752520 | orchestrator | 
Sunday 22 June 2025 19:31:58 +0000 (0:00:01.270) 0:00:32.992 *********** 2025-06-22 19:32:02.664469 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:32:02.664587 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:32:02.664838 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:32:02.665889 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:32:02.667006 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:32:02.667291 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:32:02.668802 | orchestrator | ok: [testbed-manager] 2025-06-22 19:32:02.668968 | orchestrator | 2025-06-22 19:32:02.670305 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:32:02.670462 | orchestrator | 2025-06-22 19:32:02 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-22 19:32:02.670537 | orchestrator | 2025-06-22 19:32:02 | INFO  | Please wait and do not abort execution. 2025-06-22 19:32:02.671433 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:32:02.672241 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:32:02.672751 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:32:02.673551 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:32:02.674374 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:32:02.675331 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:32:02.676511 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:32:02.676740 | orchestrator | 2025-06-22 19:32:02.678290 | orchestrator | 2025-06-22 19:32:02.679151 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:32:02.680195 | orchestrator | Sunday 22 June 2025 19:32:02 +0000 (0:00:03.919) 0:00:36.911 *********** 2025-06-22 19:32:02.681468 | orchestrator | =============================================================================== 2025-06-22 19:32:02.682418 | orchestrator | osism.commons.repository : Update package cache ------------------------ 13.63s 2025-06-22 19:32:02.683195 | orchestrator | Install required packages (Debian) -------------------------------------- 7.44s 2025-06-22 19:32:02.684156 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.92s 2025-06-22 19:32:02.684958 | orchestrator | Copy fact files --------------------------------------------------------- 3.54s 2025-06-22 19:32:02.685957 | orchestrator | Create custom facts directory ------------------------------------------- 1.42s 2025-06-22 19:32:02.686943 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.27s 2025-06-22 19:32:02.687633 | orchestrator | Copy fact file ---------------------------------------------------------- 1.18s 2025-06-22 19:32:02.688579 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.08s 2025-06-22 19:32:02.689365 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.06s 2025-06-22 19:32:02.689927 | orchestrator | osism.commons.repository : Remove sources.list 
file --------------------- 0.46s 2025-06-22 19:32:02.690822 | orchestrator | Create custom facts directory ------------------------------------------- 0.45s 2025-06-22 19:32:02.691616 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.44s 2025-06-22 19:32:02.692772 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.21s 2025-06-22 19:32:02.693683 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.20s 2025-06-22 19:32:02.694666 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.14s 2025-06-22 19:32:02.695521 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.14s 2025-06-22 19:32:02.696504 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.12s 2025-06-22 19:32:02.697635 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.11s 2025-06-22 19:32:03.090673 | orchestrator | + osism apply bootstrap 2025-06-22 19:32:04.782684 | orchestrator | Registering Redlock._acquired_script 2025-06-22 19:32:04.782796 | orchestrator | Registering Redlock._extend_script 2025-06-22 19:32:04.782812 | orchestrator | Registering Redlock._release_script 2025-06-22 19:32:04.839658 | orchestrator | 2025-06-22 19:32:04 | INFO  | Task 32767c44-5c3d-49b0-8572-66cc02f9f53f (bootstrap) was prepared for execution. 2025-06-22 19:32:04.839773 | orchestrator | 2025-06-22 19:32:04 | INFO  | It takes a moment until task 32767c44-5c3d-49b0-8572-66cc02f9f53f (bootstrap) has been started and output is visible here. 2025-06-22 19:32:08.977673 | orchestrator | 2025-06-22 19:32:08.979379 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************ 2025-06-22 19:32:08.980279 | orchestrator | 2025-06-22 19:32:08.982111 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************ 2025-06-22 19:32:08.984406 | orchestrator | Sunday 22 June 2025 19:32:08 +0000 (0:00:00.185) 0:00:00.185 *********** 2025-06-22 19:32:09.059903 | orchestrator | ok: [testbed-manager] 2025-06-22 19:32:09.086915 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:32:09.114856 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:32:09.138538 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:32:09.234775 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:32:09.235474 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:32:09.237154 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:32:09.237193 | orchestrator | 2025-06-22 19:32:09.239180 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-06-22 19:32:09.240208 | orchestrator | 2025-06-22 19:32:09.241483 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-06-22 19:32:09.242129 | orchestrator | Sunday 22 June 2025 19:32:09 +0000 (0:00:00.260) 0:00:00.446 *********** 2025-06-22 19:32:13.201770 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:32:13.203377 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:32:13.203423 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:32:13.203436 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:32:13.203447 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:32:13.203458 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:32:13.203521 | orchestrator | ok: [testbed-manager] 2025-06-22 19:32:13.204054 | 
orchestrator | 2025-06-22 19:32:13.204839 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] *************************** 2025-06-22 19:32:13.205678 | orchestrator | 2025-06-22 19:32:13.206084 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-06-22 19:32:13.206392 | orchestrator | Sunday 22 June 2025 19:32:13 +0000 (0:00:03.964) 0:00:04.411 *********** 2025-06-22 19:32:13.285409 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-06-22 19:32:13.319479 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-06-22 19:32:13.319723 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-06-22 19:32:13.320382 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)  2025-06-22 19:32:13.320546 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-06-22 19:32:13.321268 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-22 19:32:13.367643 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-22 19:32:13.369219 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)  2025-06-22 19:32:13.370443 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-06-22 19:32:13.370857 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-22 19:32:13.371191 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-06-22 19:32:13.371700 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-06-22 19:32:13.418456 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)  2025-06-22 19:32:13.418542 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-06-22 19:32:13.418709 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-06-22 19:32:13.419151 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-06-22 19:32:13.421708 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-06-22 19:32:13.421935 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-06-22 19:32:13.689153 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-06-22 19:32:13.689970 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:32:13.690697 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-06-22 19:32:13.693462 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)  2025-06-22 19:32:13.693502 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-06-22 19:32:13.693514 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-06-22 19:32:13.693525 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:32:13.693538 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-06-22 19:32:13.694244 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-06-22 19:32:13.694466 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-06-22 19:32:13.695188 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-06-22 19:32:13.696274 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)  2025-06-22 19:32:13.696803 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-06-22 19:32:13.696910 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-06-22 19:32:13.697411 | orchestrator | skipping: 
[testbed-node-4] 2025-06-22 19:32:13.698124 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-06-22 19:32:13.699130 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-06-22 19:32:13.699645 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-06-22 19:32:13.701471 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-06-22 19:32:13.701504 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:32:13.701798 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-06-22 19:32:13.702530 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)  2025-06-22 19:32:13.703070 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-06-22 19:32:13.704020 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-06-22 19:32:13.704694 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-06-22 19:32:13.705687 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-06-22 19:32:13.706151 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-06-22 19:32:13.706933 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-06-22 19:32:13.707603 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:32:13.708382 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-06-22 19:32:13.709035 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-06-22 19:32:13.709795 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-06-22 19:32:13.710514 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-06-22 19:32:13.711017 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:32:13.711705 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-06-22 19:32:13.712368 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-06-22 19:32:13.713168 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-06-22 19:32:13.713647 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:32:13.714312 | orchestrator | 2025-06-22 19:32:13.715119 | orchestrator | PLAY [Apply bootstrap roles part 1] ******************************************** 2025-06-22 19:32:13.715357 | orchestrator | 2025-06-22 19:32:13.715930 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2025-06-22 19:32:13.716443 | orchestrator | Sunday 22 June 2025 19:32:13 +0000 (0:00:00.490) 0:00:04.902 *********** 2025-06-22 19:32:14.968079 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:32:14.968192 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:32:14.968208 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:32:14.968220 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:32:14.968855 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:32:14.969739 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:32:14.971232 | orchestrator | ok: [testbed-manager] 2025-06-22 19:32:14.972406 | orchestrator | 2025-06-22 19:32:14.973811 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2025-06-22 19:32:14.974814 | orchestrator | Sunday 22 June 2025 19:32:14 +0000 (0:00:01.274) 0:00:06.176 *********** 2025-06-22 19:32:16.169851 | orchestrator | ok: [testbed-manager] 2025-06-22 19:32:16.170085 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:32:16.170611 | orchestrator | ok: [testbed-node-1] 2025-06-22 
19:32:16.171240 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:32:16.171743 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:32:16.172688 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:32:16.174429 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:32:16.175379 | orchestrator | 2025-06-22 19:32:16.176391 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2025-06-22 19:32:16.177247 | orchestrator | Sunday 22 June 2025 19:32:16 +0000 (0:00:01.204) 0:00:07.380 *********** 2025-06-22 19:32:16.458692 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:32:16.459494 | orchestrator | 2025-06-22 19:32:16.460464 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2025-06-22 19:32:16.461363 | orchestrator | Sunday 22 June 2025 19:32:16 +0000 (0:00:00.289) 0:00:07.670 *********** 2025-06-22 19:32:18.474534 | orchestrator | changed: [testbed-manager] 2025-06-22 19:32:18.474733 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:32:18.474759 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:32:18.474856 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:32:18.477907 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:32:18.480025 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:32:18.481853 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:32:18.483318 | orchestrator | 2025-06-22 19:32:18.485029 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2025-06-22 19:32:18.485981 | orchestrator | Sunday 22 June 2025 19:32:18 +0000 (0:00:02.009) 0:00:09.679 *********** 2025-06-22 19:32:18.544010 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:32:18.791282 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:32:18.791491 | orchestrator | 2025-06-22 19:32:18.795114 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2025-06-22 19:32:18.795174 | orchestrator | Sunday 22 June 2025 19:32:18 +0000 (0:00:00.322) 0:00:10.002 *********** 2025-06-22 19:32:19.896856 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:32:19.897128 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:32:19.898906 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:32:19.900264 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:32:19.901459 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:32:19.902385 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:32:19.903477 | orchestrator | 2025-06-22 19:32:19.904200 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ****** 2025-06-22 19:32:19.905062 | orchestrator | Sunday 22 June 2025 19:32:19 +0000 (0:00:01.104) 0:00:11.107 *********** 2025-06-22 19:32:19.979673 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:32:20.433053 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:32:20.433385 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:32:20.434486 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:32:20.435226 | orchestrator | changed: [testbed-node-0] 2025-06-22 
19:32:20.436475 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:32:20.437630 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:32:20.438003 | orchestrator | 2025-06-22 19:32:20.439267 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2025-06-22 19:32:20.440080 | orchestrator | Sunday 22 June 2025 19:32:20 +0000 (0:00:00.536) 0:00:11.643 *********** 2025-06-22 19:32:20.535092 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:32:20.552796 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:32:20.582904 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:32:20.844220 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:32:20.845312 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:32:20.846888 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:32:20.848197 | orchestrator | ok: [testbed-manager] 2025-06-22 19:32:20.849146 | orchestrator | 2025-06-22 19:32:20.850326 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-06-22 19:32:20.851198 | orchestrator | Sunday 22 June 2025 19:32:20 +0000 (0:00:00.412) 0:00:12.056 *********** 2025-06-22 19:32:20.935484 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:32:20.961207 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:32:20.981731 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:32:21.005182 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:32:21.076908 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:32:21.077958 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:32:21.078801 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:32:21.079097 | orchestrator | 2025-06-22 19:32:21.080150 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-06-22 19:32:21.080424 | orchestrator | Sunday 22 June 2025 19:32:21 +0000 (0:00:00.232) 0:00:12.289 *********** 2025-06-22 19:32:21.380619 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:32:21.384281 | orchestrator | 2025-06-22 19:32:21.384316 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-06-22 19:32:21.384326 | orchestrator | Sunday 22 June 2025 19:32:21 +0000 (0:00:00.301) 0:00:12.590 *********** 2025-06-22 19:32:21.707843 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:32:21.708620 | orchestrator | 2025-06-22 19:32:21.712455 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-06-22 19:32:21.714588 | orchestrator | Sunday 22 June 2025 19:32:21 +0000 (0:00:00.327) 0:00:12.917 *********** 2025-06-22 19:32:23.135902 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:32:23.136868 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:32:23.137744 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:32:23.138880 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:32:23.140303 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:32:23.142457 | orchestrator | ok: [testbed-manager] 2025-06-22 19:32:23.143798 | 
orchestrator | ok: [testbed-node-4] 2025-06-22 19:32:23.144438 | orchestrator | 2025-06-22 19:32:23.145123 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-06-22 19:32:23.145432 | orchestrator | Sunday 22 June 2025 19:32:23 +0000 (0:00:01.428) 0:00:14.345 *********** 2025-06-22 19:32:23.216126 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:32:23.242457 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:32:23.267994 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:32:23.294614 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:32:23.355258 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:32:23.355771 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:32:23.357490 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:32:23.358838 | orchestrator | 2025-06-22 19:32:23.359790 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-06-22 19:32:23.360704 | orchestrator | Sunday 22 June 2025 19:32:23 +0000 (0:00:00.220) 0:00:14.566 *********** 2025-06-22 19:32:23.874740 | orchestrator | ok: [testbed-manager] 2025-06-22 19:32:23.874903 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:32:23.877876 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:32:23.877925 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:32:23.877940 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:32:23.879531 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:32:23.880490 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:32:23.881761 | orchestrator | 2025-06-22 19:32:23.882510 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-06-22 19:32:23.883132 | orchestrator | Sunday 22 June 2025 19:32:23 +0000 (0:00:00.518) 0:00:15.084 *********** 2025-06-22 19:32:23.979936 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:32:24.007263 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:32:24.031510 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:32:24.109107 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:32:24.110116 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:32:24.114225 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:32:24.114328 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:32:24.114349 | orchestrator | 2025-06-22 19:32:24.114437 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-06-22 19:32:24.116154 | orchestrator | Sunday 22 June 2025 19:32:24 +0000 (0:00:00.236) 0:00:15.321 *********** 2025-06-22 19:32:24.695454 | orchestrator | ok: [testbed-manager] 2025-06-22 19:32:24.697341 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:32:24.700954 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:32:24.701793 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:32:24.702407 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:32:24.703245 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:32:24.704592 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:32:24.706845 | orchestrator | 2025-06-22 19:32:24.706873 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-06-22 19:32:24.707055 | orchestrator | Sunday 22 June 2025 19:32:24 +0000 (0:00:00.582) 0:00:15.904 *********** 2025-06-22 19:32:25.775974 | orchestrator | ok: [testbed-manager] 2025-06-22 19:32:25.777254 | orchestrator | changed: 
[testbed-node-4] 2025-06-22 19:32:25.778386 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:32:25.779540 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:32:25.781042 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:32:25.781886 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:32:25.784722 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:32:25.785356 | orchestrator | 2025-06-22 19:32:25.786153 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-06-22 19:32:25.787001 | orchestrator | Sunday 22 June 2025 19:32:25 +0000 (0:00:01.081) 0:00:16.985 *********** 2025-06-22 19:32:26.916417 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:32:26.916523 | orchestrator | ok: [testbed-manager] 2025-06-22 19:32:26.916980 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:32:26.917340 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:32:26.917788 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:32:26.919875 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:32:26.920695 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:32:26.921178 | orchestrator | 2025-06-22 19:32:26.922120 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-06-22 19:32:26.923862 | orchestrator | Sunday 22 June 2025 19:32:26 +0000 (0:00:01.141) 0:00:18.127 *********** 2025-06-22 19:32:27.299875 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:32:27.301810 | orchestrator | 2025-06-22 19:32:27.303860 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-06-22 19:32:27.304949 | orchestrator | Sunday 22 June 2025 19:32:27 +0000 (0:00:00.383) 0:00:18.511 *********** 2025-06-22 19:32:27.370991 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:32:28.604459 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:32:28.605298 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:32:28.606499 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:32:28.607710 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:32:28.609056 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:32:28.609866 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:32:28.611475 | orchestrator | 2025-06-22 19:32:28.612155 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-06-22 19:32:28.612771 | orchestrator | Sunday 22 June 2025 19:32:28 +0000 (0:00:01.302) 0:00:19.813 *********** 2025-06-22 19:32:28.683879 | orchestrator | ok: [testbed-manager] 2025-06-22 19:32:28.707520 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:32:28.732302 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:32:28.760055 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:32:28.837275 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:32:28.837977 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:32:28.839151 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:32:28.842255 | orchestrator | 2025-06-22 19:32:28.843276 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-06-22 19:32:28.844523 | orchestrator | Sunday 22 June 2025 19:32:28 +0000 (0:00:00.233) 0:00:20.047 *********** 2025-06-22 19:32:28.920139 | orchestrator | ok: 
[testbed-manager] 2025-06-22 19:32:28.956636 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:32:28.988092 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:32:29.012546 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:32:29.102163 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:32:29.102814 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:32:29.103877 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:32:29.104531 | orchestrator | 2025-06-22 19:32:29.106113 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-06-22 19:32:29.106183 | orchestrator | Sunday 22 June 2025 19:32:29 +0000 (0:00:00.267) 0:00:20.314 *********** 2025-06-22 19:32:29.220312 | orchestrator | ok: [testbed-manager] 2025-06-22 19:32:29.244808 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:32:29.271019 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:32:29.352605 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:32:29.353306 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:32:29.354875 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:32:29.356469 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:32:29.360592 | orchestrator | 2025-06-22 19:32:29.361769 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-06-22 19:32:29.362298 | orchestrator | Sunday 22 June 2025 19:32:29 +0000 (0:00:00.250) 0:00:20.564 *********** 2025-06-22 19:32:29.650932 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:32:29.652841 | orchestrator | 2025-06-22 19:32:29.656183 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-06-22 19:32:29.657266 | orchestrator | Sunday 22 June 2025 19:32:29 +0000 (0:00:00.296) 0:00:20.861 *********** 2025-06-22 19:32:30.245496 | orchestrator | ok: [testbed-manager] 2025-06-22 19:32:30.251267 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:32:30.251960 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:32:30.252062 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:32:30.253223 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:32:30.254055 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:32:30.254705 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:32:30.256112 | orchestrator | 2025-06-22 19:32:30.256937 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-06-22 19:32:30.257569 | orchestrator | Sunday 22 June 2025 19:32:30 +0000 (0:00:00.594) 0:00:21.455 *********** 2025-06-22 19:32:30.323653 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:32:30.349428 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:32:30.374110 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:32:30.403045 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:32:30.471668 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:32:30.472062 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:32:30.473016 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:32:30.474169 | orchestrator | 2025-06-22 19:32:30.474763 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-06-22 19:32:30.475503 | orchestrator | Sunday 22 June 2025 19:32:30 +0000 (0:00:00.228) 0:00:21.683 *********** 2025-06-22 19:32:31.549526 | 
orchestrator | ok: [testbed-manager] 2025-06-22 19:32:31.550962 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:32:31.552207 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:32:31.554711 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:32:31.554743 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:32:31.555765 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:32:31.557032 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:32:31.558761 | orchestrator | 2025-06-22 19:32:31.559680 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-06-22 19:32:31.562717 | orchestrator | Sunday 22 June 2025 19:32:31 +0000 (0:00:01.075) 0:00:22.758 *********** 2025-06-22 19:32:32.112664 | orchestrator | ok: [testbed-manager] 2025-06-22 19:32:32.113233 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:32:32.114827 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:32:32.115741 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:32:32.116596 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:32:32.117579 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:32:32.118669 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:32:32.119127 | orchestrator | 2025-06-22 19:32:32.119932 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-06-22 19:32:32.120655 | orchestrator | Sunday 22 June 2025 19:32:32 +0000 (0:00:00.564) 0:00:23.323 *********** 2025-06-22 19:32:33.237549 | orchestrator | ok: [testbed-manager] 2025-06-22 19:32:33.237774 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:32:33.241080 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:32:33.241180 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:32:33.241339 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:32:33.242422 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:32:33.243121 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:32:33.243861 | orchestrator | 2025-06-22 19:32:33.244633 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-06-22 19:32:33.245618 | orchestrator | Sunday 22 June 2025 19:32:33 +0000 (0:00:01.124) 0:00:24.447 *********** 2025-06-22 19:32:47.033290 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:32:47.033532 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:32:47.034175 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:32:47.036382 | orchestrator | changed: [testbed-manager] 2025-06-22 19:32:47.037989 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:32:47.038835 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:32:47.039792 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:32:47.040934 | orchestrator | 2025-06-22 19:32:47.041852 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2025-06-22 19:32:47.042386 | orchestrator | Sunday 22 June 2025 19:32:47 +0000 (0:00:13.794) 0:00:38.241 *********** 2025-06-22 19:32:47.110431 | orchestrator | ok: [testbed-manager] 2025-06-22 19:32:47.135058 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:32:47.164902 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:32:47.201110 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:32:47.265008 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:32:47.265350 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:32:47.266361 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:32:47.266635 | orchestrator | 2025-06-22 19:32:47.267427 | orchestrator | TASK 
[osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2025-06-22 19:32:47.267684 | orchestrator | Sunday 22 June 2025 19:32:47 +0000 (0:00:00.235) 0:00:38.477 *********** 2025-06-22 19:32:47.346538 | orchestrator | ok: [testbed-manager] 2025-06-22 19:32:47.373011 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:32:47.403866 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:32:47.429057 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:32:47.506354 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:32:47.507008 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:32:47.508188 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:32:47.508211 | orchestrator | 2025-06-22 19:32:47.510110 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2025-06-22 19:32:47.511073 | orchestrator | Sunday 22 June 2025 19:32:47 +0000 (0:00:00.235) 0:00:38.713 *********** 2025-06-22 19:32:47.581166 | orchestrator | ok: [testbed-manager] 2025-06-22 19:32:47.607178 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:32:47.628596 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:32:47.652777 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:32:47.724816 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:32:47.725936 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:32:47.726488 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:32:47.728282 | orchestrator | 2025-06-22 19:32:47.729173 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2025-06-22 19:32:47.730531 | orchestrator | Sunday 22 June 2025 19:32:47 +0000 (0:00:00.222) 0:00:38.936 *********** 2025-06-22 19:32:48.018256 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:32:48.019423 | orchestrator | 2025-06-22 19:32:48.019942 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2025-06-22 19:32:48.020779 | orchestrator | Sunday 22 June 2025 19:32:48 +0000 (0:00:00.294) 0:00:39.230 *********** 2025-06-22 19:32:49.697191 | orchestrator | ok: [testbed-manager] 2025-06-22 19:32:49.697527 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:32:49.697597 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:32:49.697611 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:32:49.697998 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:32:49.699957 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:32:49.701326 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:32:49.702670 | orchestrator | 2025-06-22 19:32:49.703865 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2025-06-22 19:32:49.704452 | orchestrator | Sunday 22 June 2025 19:32:49 +0000 (0:00:01.674) 0:00:40.904 *********** 2025-06-22 19:32:50.768502 | orchestrator | changed: [testbed-manager] 2025-06-22 19:32:50.769868 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:32:50.770970 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:32:50.772487 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:32:50.773400 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:32:50.774617 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:32:50.775782 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:32:50.776008 | orchestrator | 2025-06-22 19:32:50.777081 | 
orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2025-06-22 19:32:50.777486 | orchestrator | Sunday 22 June 2025 19:32:50 +0000 (0:00:01.074) 0:00:41.978 *********** 2025-06-22 19:32:51.568765 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:32:51.570340 | orchestrator | ok: [testbed-manager] 2025-06-22 19:32:51.570686 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:32:51.571834 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:32:51.572637 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:32:51.573882 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:32:51.574624 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:32:51.575804 | orchestrator | 2025-06-22 19:32:51.577427 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2025-06-22 19:32:51.578391 | orchestrator | Sunday 22 June 2025 19:32:51 +0000 (0:00:00.800) 0:00:42.779 *********** 2025-06-22 19:32:51.904506 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:32:51.905220 | orchestrator | 2025-06-22 19:32:51.906849 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2025-06-22 19:32:51.908049 | orchestrator | Sunday 22 June 2025 19:32:51 +0000 (0:00:00.335) 0:00:43.114 *********** 2025-06-22 19:32:52.888362 | orchestrator | changed: [testbed-manager] 2025-06-22 19:32:52.888815 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:32:52.890532 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:32:52.892097 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:32:52.892353 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:32:52.894245 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:32:52.894424 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:32:52.895957 | orchestrator | 2025-06-22 19:32:52.896782 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************ 2025-06-22 19:32:52.897815 | orchestrator | Sunday 22 June 2025 19:32:52 +0000 (0:00:00.983) 0:00:44.098 *********** 2025-06-22 19:32:52.989818 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:32:53.009828 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:32:53.037204 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:32:53.209948 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:32:53.210875 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:32:53.212652 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:32:53.213606 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:32:53.214434 | orchestrator | 2025-06-22 19:32:53.215212 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2025-06-22 19:32:53.216087 | orchestrator | Sunday 22 June 2025 19:32:53 +0000 (0:00:00.322) 0:00:44.421 *********** 2025-06-22 19:33:05.030501 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:33:05.030656 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:33:05.030673 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:33:05.030685 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:33:05.031152 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:33:05.032258 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:33:05.033039 | orchestrator | changed: 
[testbed-manager] 2025-06-22 19:33:05.034163 | orchestrator | 2025-06-22 19:33:05.035068 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2025-06-22 19:33:05.036015 | orchestrator | Sunday 22 June 2025 19:33:05 +0000 (0:00:11.813) 0:00:56.235 *********** 2025-06-22 19:33:06.666225 | orchestrator | ok: [testbed-manager] 2025-06-22 19:33:06.666462 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:33:06.667141 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:33:06.667824 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:33:06.668757 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:33:06.669233 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:33:06.670013 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:33:06.670736 | orchestrator | 2025-06-22 19:33:06.671749 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2025-06-22 19:33:06.672768 | orchestrator | Sunday 22 June 2025 19:33:06 +0000 (0:00:01.641) 0:00:57.876 *********** 2025-06-22 19:33:07.561054 | orchestrator | ok: [testbed-manager] 2025-06-22 19:33:07.563096 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:33:07.563442 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:33:07.566424 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:33:07.566491 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:33:07.567926 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:33:07.569471 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:33:07.571927 | orchestrator | 2025-06-22 19:33:07.573094 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2025-06-22 19:33:07.573967 | orchestrator | Sunday 22 June 2025 19:33:07 +0000 (0:00:00.887) 0:00:58.764 *********** 2025-06-22 19:33:07.613463 | orchestrator | ok: [testbed-manager] 2025-06-22 19:33:07.666153 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:33:07.693363 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:33:07.721175 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:33:07.785963 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:33:07.786250 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:33:07.787156 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:33:07.788387 | orchestrator | 2025-06-22 19:33:07.789136 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2025-06-22 19:33:07.790181 | orchestrator | Sunday 22 June 2025 19:33:07 +0000 (0:00:00.234) 0:00:58.998 *********** 2025-06-22 19:33:07.884170 | orchestrator | ok: [testbed-manager] 2025-06-22 19:33:07.908520 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:33:07.936444 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:33:07.996212 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:33:07.997625 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:33:07.999371 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:33:08.000821 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:33:08.002193 | orchestrator | 2025-06-22 19:33:08.003360 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2025-06-22 19:33:08.004457 | orchestrator | Sunday 22 June 2025 19:33:07 +0000 (0:00:00.209) 0:00:59.208 *********** 2025-06-22 19:33:08.302758 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, 
testbed-node-2 2025-06-22 19:33:08.304271 | orchestrator | 2025-06-22 19:33:08.305349 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2025-06-22 19:33:08.306607 | orchestrator | Sunday 22 June 2025 19:33:08 +0000 (0:00:00.305) 0:00:59.513 *********** 2025-06-22 19:33:09.885961 | orchestrator | ok: [testbed-manager] 2025-06-22 19:33:09.887259 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:33:09.887496 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:33:09.889144 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:33:09.889662 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:33:09.890678 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:33:09.891330 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:33:09.892253 | orchestrator | 2025-06-22 19:33:09.892679 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 2025-06-22 19:33:09.893404 | orchestrator | Sunday 22 June 2025 19:33:09 +0000 (0:00:01.581) 0:01:01.095 *********** 2025-06-22 19:33:10.519163 | orchestrator | changed: [testbed-manager] 2025-06-22 19:33:10.519658 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:33:10.523742 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:33:10.523801 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:33:10.525831 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:33:10.526290 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:33:10.529107 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:33:10.529987 | orchestrator | 2025-06-22 19:33:10.530341 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2025-06-22 19:33:10.531295 | orchestrator | Sunday 22 June 2025 19:33:10 +0000 (0:00:00.633) 0:01:01.729 *********** 2025-06-22 19:33:10.591448 | orchestrator | ok: [testbed-manager] 2025-06-22 19:33:10.620251 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:33:10.644739 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:33:10.670889 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:33:10.725147 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:33:10.725624 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:33:10.726718 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:33:10.727888 | orchestrator | 2025-06-22 19:33:10.728695 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2025-06-22 19:33:10.730122 | orchestrator | Sunday 22 June 2025 19:33:10 +0000 (0:00:00.208) 0:01:01.937 *********** 2025-06-22 19:33:12.023700 | orchestrator | ok: [testbed-manager] 2025-06-22 19:33:12.023811 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:33:12.023826 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:33:12.023838 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:33:12.029071 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:33:12.029123 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:33:12.029135 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:33:12.029147 | orchestrator | 2025-06-22 19:33:12.029160 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2025-06-22 19:33:12.030487 | orchestrator | Sunday 22 June 2025 19:33:12 +0000 (0:00:01.289) 0:01:03.227 *********** 2025-06-22 19:33:13.757764 | orchestrator | changed: [testbed-manager] 2025-06-22 19:33:13.757869 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:33:13.760035 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:33:13.760093 | 
orchestrator | changed: [testbed-node-1] 2025-06-22 19:33:13.761666 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:33:13.762138 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:33:13.762863 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:33:13.763254 | orchestrator | 2025-06-22 19:33:13.763991 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2025-06-22 19:33:13.764803 | orchestrator | Sunday 22 June 2025 19:33:13 +0000 (0:00:01.740) 0:01:04.967 *********** 2025-06-22 19:33:16.163968 | orchestrator | ok: [testbed-manager] 2025-06-22 19:33:16.164142 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:33:16.165727 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:33:16.165957 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:33:16.166360 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:33:16.168214 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:33:16.168742 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:33:16.169361 | orchestrator | 2025-06-22 19:33:16.169917 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2025-06-22 19:33:16.171006 | orchestrator | Sunday 22 June 2025 19:33:16 +0000 (0:00:02.406) 0:01:07.373 *********** 2025-06-22 19:33:51.625106 | orchestrator | ok: [testbed-manager] 2025-06-22 19:33:51.625224 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:33:51.625240 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:33:51.625251 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:33:51.625263 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:33:51.625291 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:33:51.625303 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:33:51.625314 | orchestrator | 2025-06-22 19:33:51.625327 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2025-06-22 19:33:51.625340 | orchestrator | Sunday 22 June 2025 19:33:51 +0000 (0:00:35.447) 0:01:42.821 *********** 2025-06-22 19:35:08.404094 | orchestrator | changed: [testbed-manager] 2025-06-22 19:35:08.404938 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:35:08.404966 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:35:08.406395 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:35:08.408709 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:35:08.409615 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:35:08.410386 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:35:08.411179 | orchestrator | 2025-06-22 19:35:08.411982 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2025-06-22 19:35:08.412466 | orchestrator | Sunday 22 June 2025 19:35:08 +0000 (0:01:16.789) 0:02:59.611 *********** 2025-06-22 19:35:10.105534 | orchestrator | ok: [testbed-manager] 2025-06-22 19:35:10.105715 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:35:10.105774 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:35:10.107212 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:35:10.108587 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:35:10.109838 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:35:10.110804 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:35:10.111096 | orchestrator | 2025-06-22 19:35:10.112256 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2025-06-22 19:35:10.113398 | orchestrator | Sunday 22 June 2025 19:35:10 +0000 (0:00:01.701) 0:03:01.313 *********** 2025-06-22 
19:35:22.013813 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:35:22.013983 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:35:22.013999 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:35:22.014212 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:35:22.014887 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:35:22.015712 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:35:22.016221 | orchestrator | changed: [testbed-manager] 2025-06-22 19:35:22.017264 | orchestrator | 2025-06-22 19:35:22.017942 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2025-06-22 19:35:22.018659 | orchestrator | Sunday 22 June 2025 19:35:21 +0000 (0:00:11.903) 0:03:13.216 *********** 2025-06-22 19:35:22.399233 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2025-06-22 19:35:22.400144 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2025-06-22 19:35:22.401241 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2025-06-22 19:35:22.401961 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2025-06-22 19:35:22.402886 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2025-06-22 19:35:22.403395 | orchestrator | 2025-06-22 19:35:22.404099 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2025-06-22 19:35:22.405891 | orchestrator | Sunday 22 June 2025 19:35:22 +0000 (0:00:00.394) 0:03:13.611 *********** 2025-06-22 19:35:22.456043 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-06-22 19:35:22.481670 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:35:22.481795 | orchestrator | skipping: [testbed-node-3] => 
(item={'name': 'vm.max_map_count', 'value': 262144})  2025-06-22 19:35:22.526310 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-06-22 19:35:22.527085 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:35:22.527321 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-06-22 19:35:22.552739 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:35:22.580042 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:35:23.103608 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-06-22 19:35:23.103987 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-06-22 19:35:23.106574 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-06-22 19:35:23.106618 | orchestrator | 2025-06-22 19:35:23.107496 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2025-06-22 19:35:23.108592 | orchestrator | Sunday 22 June 2025 19:35:23 +0000 (0:00:00.702) 0:03:14.313 *********** 2025-06-22 19:35:23.170115 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-06-22 19:35:23.170218 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-06-22 19:35:23.170234 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-06-22 19:35:23.170246 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-06-22 19:35:23.209420 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-06-22 19:35:23.209788 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-06-22 19:35:23.210632 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-06-22 19:35:23.210751 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-06-22 19:35:23.213884 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-06-22 19:35:23.213930 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-06-22 19:35:23.213943 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-06-22 19:35:23.213954 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-06-22 19:35:23.213965 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-06-22 19:35:23.214062 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-06-22 19:35:23.214485 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-06-22 19:35:23.215348 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-06-22 19:35:23.215571 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-06-22 19:35:23.216256 | orchestrator | skipping: [testbed-node-3] => (item={'name': 
'net.core.somaxconn', 'value': 4096})  2025-06-22 19:35:23.216787 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-06-22 19:35:23.216820 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-06-22 19:35:23.217182 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-06-22 19:35:23.218704 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-06-22 19:35:23.272776 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:35:23.273505 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-06-22 19:35:23.274953 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-06-22 19:35:23.274979 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-06-22 19:35:23.275031 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-06-22 19:35:23.275109 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-06-22 19:35:23.275780 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-06-22 19:35:23.276899 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-06-22 19:35:23.277121 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-06-22 19:35:23.277156 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-06-22 19:35:23.277169 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-06-22 19:35:23.277240 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-06-22 19:35:23.279763 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-06-22 19:35:23.280770 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-06-22 19:35:23.280811 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-06-22 19:35:23.280824 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-06-22 19:35:23.280930 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-06-22 19:35:23.281370 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-06-22 19:35:23.303260 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-06-22 19:35:23.303313 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:35:23.330906 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:35:28.068266 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:35:28.070403 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-06-22 19:35:28.070440 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-06-22 19:35:28.071762 | orchestrator | changed: 
[testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-06-22 19:35:28.073353 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-06-22 19:35:28.074455 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-06-22 19:35:28.075230 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-06-22 19:35:28.076724 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-06-22 19:35:28.076981 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-06-22 19:35:28.078496 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-06-22 19:35:28.079476 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-06-22 19:35:28.080266 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-06-22 19:35:28.081191 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-06-22 19:35:28.082219 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-06-22 19:35:28.083140 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-06-22 19:35:28.083801 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-06-22 19:35:28.084607 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-06-22 19:35:28.085648 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-06-22 19:35:28.086344 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-06-22 19:35:28.086960 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-06-22 19:35:28.087779 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-06-22 19:35:28.088866 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-06-22 19:35:28.089375 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-06-22 19:35:28.090525 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-06-22 19:35:28.091402 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-06-22 19:35:28.092167 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-06-22 19:35:28.092728 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-06-22 19:35:28.093533 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-06-22 19:35:28.093868 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-06-22 19:35:28.094951 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-06-22 19:35:28.095633 | orchestrator | changed: [testbed-node-2] => (item={'name': 
'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-06-22 19:35:28.096346 | orchestrator | 2025-06-22 19:35:28.097045 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2025-06-22 19:35:28.097568 | orchestrator | Sunday 22 June 2025 19:35:28 +0000 (0:00:04.963) 0:03:19.277 *********** 2025-06-22 19:35:28.687901 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-06-22 19:35:28.688391 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-06-22 19:35:28.689571 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-06-22 19:35:28.691327 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-06-22 19:35:28.692781 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-06-22 19:35:28.693775 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-06-22 19:35:28.694812 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-06-22 19:35:28.695654 | orchestrator | 2025-06-22 19:35:28.696905 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2025-06-22 19:35:28.698065 | orchestrator | Sunday 22 June 2025 19:35:28 +0000 (0:00:00.622) 0:03:19.899 *********** 2025-06-22 19:35:28.745450 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-06-22 19:35:28.769153 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:35:28.846157 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-06-22 19:35:29.211130 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:35:29.212066 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-06-22 19:35:29.213243 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:35:29.214882 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-06-22 19:35:29.216001 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:35:29.217011 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-06-22 19:35:29.217827 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-06-22 19:35:29.218373 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-06-22 19:35:29.219362 | orchestrator | 2025-06-22 19:35:29.220127 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2025-06-22 19:35:29.220668 | orchestrator | Sunday 22 June 2025 19:35:29 +0000 (0:00:00.521) 0:03:20.421 *********** 2025-06-22 19:35:29.265782 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-06-22 19:35:29.291704 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:35:29.381832 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-06-22 19:35:29.809820 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-06-22 19:35:29.811336 | orchestrator | 
skipping: [testbed-node-0] 2025-06-22 19:35:29.812740 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:35:29.814257 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-06-22 19:35:29.815374 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:35:29.816589 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-06-22 19:35:29.817895 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-06-22 19:35:29.820685 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-06-22 19:35:29.820774 | orchestrator | 2025-06-22 19:35:29.820791 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2025-06-22 19:35:29.820804 | orchestrator | Sunday 22 June 2025 19:35:29 +0000 (0:00:00.598) 0:03:21.020 *********** 2025-06-22 19:35:29.895209 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:35:29.920535 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:35:29.950918 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:35:29.977054 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:35:30.111999 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:35:30.113471 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:35:30.115006 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:35:30.116818 | orchestrator | 2025-06-22 19:35:30.117685 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2025-06-22 19:35:30.118790 | orchestrator | Sunday 22 June 2025 19:35:30 +0000 (0:00:00.303) 0:03:21.324 *********** 2025-06-22 19:35:35.391435 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:35:35.392324 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:35:35.393379 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:35:35.394811 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:35:35.395712 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:35:35.396238 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:35:35.397587 | orchestrator | ok: [testbed-manager] 2025-06-22 19:35:35.398398 | orchestrator | 2025-06-22 19:35:35.398970 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2025-06-22 19:35:35.399944 | orchestrator | Sunday 22 June 2025 19:35:35 +0000 (0:00:05.278) 0:03:26.603 *********** 2025-06-22 19:35:35.475721 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2025-06-22 19:35:35.475822 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2025-06-22 19:35:35.509111 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:35:35.509268 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2025-06-22 19:35:35.545665 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:35:35.547840 | orchestrator | skipping: [testbed-node-5] => (item=nscd)  2025-06-22 19:35:35.591271 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:35:35.592031 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  2025-06-22 19:35:35.622498 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:35:35.689714 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2025-06-22 19:35:35.690831 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:35:35.692302 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:35:35.692965 | orchestrator | skipping: [testbed-node-2] => 
(item=nscd)  2025-06-22 19:35:35.693714 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:35:35.694658 | orchestrator | 2025-06-22 19:35:35.695396 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2025-06-22 19:35:35.696106 | orchestrator | Sunday 22 June 2025 19:35:35 +0000 (0:00:00.298) 0:03:26.901 *********** 2025-06-22 19:35:36.876086 | orchestrator | ok: [testbed-manager] => (item=cron) 2025-06-22 19:35:36.879792 | orchestrator | ok: [testbed-node-3] => (item=cron) 2025-06-22 19:35:36.880734 | orchestrator | ok: [testbed-node-4] => (item=cron) 2025-06-22 19:35:36.881636 | orchestrator | ok: [testbed-node-0] => (item=cron) 2025-06-22 19:35:36.882311 | orchestrator | ok: [testbed-node-5] => (item=cron) 2025-06-22 19:35:36.883107 | orchestrator | ok: [testbed-node-2] => (item=cron) 2025-06-22 19:35:36.884061 | orchestrator | ok: [testbed-node-1] => (item=cron) 2025-06-22 19:35:36.884601 | orchestrator | 2025-06-22 19:35:36.885634 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2025-06-22 19:35:36.886757 | orchestrator | Sunday 22 June 2025 19:35:36 +0000 (0:00:01.184) 0:03:28.086 *********** 2025-06-22 19:35:37.394876 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:35:37.396705 | orchestrator | 2025-06-22 19:35:37.397038 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2025-06-22 19:35:37.399992 | orchestrator | Sunday 22 June 2025 19:35:37 +0000 (0:00:00.521) 0:03:28.607 *********** 2025-06-22 19:35:38.860634 | orchestrator | ok: [testbed-manager] 2025-06-22 19:35:38.861780 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:35:38.861852 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:35:38.862456 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:35:38.864293 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:35:38.864760 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:35:38.865715 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:35:38.866235 | orchestrator | 2025-06-22 19:35:38.867364 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2025-06-22 19:35:38.868947 | orchestrator | Sunday 22 June 2025 19:35:38 +0000 (0:00:01.464) 0:03:30.071 *********** 2025-06-22 19:35:39.497199 | orchestrator | ok: [testbed-manager] 2025-06-22 19:35:39.497303 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:35:39.498335 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:35:39.500381 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:35:39.500989 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:35:39.501633 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:35:39.502153 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:35:39.502737 | orchestrator | 2025-06-22 19:35:39.503243 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2025-06-22 19:35:39.503874 | orchestrator | Sunday 22 June 2025 19:35:39 +0000 (0:00:00.637) 0:03:30.709 *********** 2025-06-22 19:35:40.169874 | orchestrator | changed: [testbed-manager] 2025-06-22 19:35:40.172909 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:35:40.172956 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:35:40.173480 | orchestrator | changed: [testbed-node-0] 
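
Note on the sysctl tuning seen earlier in this play: the loop items in the log spell out the group-specific kernel parameters, with RabbitMQ hosts receiving TCP keepalive, socket buffer and backlog tuning, compute hosts a raised net.netfilter.nf_conntrack_max, and k3s nodes a raised inotify limit. The following is a minimal, illustrative sketch of an equivalent standalone task for the RabbitMQ set only; it assumes the ansible.posix.sysctl module and a hypothetical group_names condition, and is not the actual content of the osism.commons.sysctl role.

# Illustrative sketch only; parameter values taken from the loop items logged above.
- name: Set RabbitMQ-related kernel parameters (sketch)
  ansible.posix.sysctl:
    name: "{{ item.name }}"
    value: "{{ item.value }}"
    state: present
    sysctl_set: true
    reload: true
  loop:
    - {name: net.ipv4.tcp_keepalive_time, value: 6}
    - {name: net.ipv4.tcp_keepalive_intvl, value: 3}
    - {name: net.ipv4.tcp_keepalive_probes, value: 3}
    - {name: net.core.wmem_max, value: 16777216}
    - {name: net.core.rmem_max, value: 16777216}
    - {name: net.ipv4.tcp_fin_timeout, value: 20}
    - {name: net.ipv4.tcp_tw_reuse, value: 1}
    - {name: net.core.somaxconn, value: 4096}
    - {name: net.ipv4.tcp_syncookies, value: 0}
    - {name: net.ipv4.tcp_max_syn_backlog, value: 8192}
  # Hypothetical condition; mirrors the fact that only the control nodes (testbed-node-0/1/2)
  # applied these values while the manager and compute nodes were skipped.
  when: "'rabbitmq' in group_names"
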
2025-06-22 19:35:40.175477 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:35:40.175502 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:35:40.176417 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:35:40.177384 | orchestrator | 2025-06-22 19:35:40.177967 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2025-06-22 19:35:40.178762 | orchestrator | Sunday 22 June 2025 19:35:40 +0000 (0:00:00.671) 0:03:31.380 *********** 2025-06-22 19:35:40.891755 | orchestrator | ok: [testbed-manager] 2025-06-22 19:35:40.892244 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:35:40.893651 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:35:40.896099 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:35:40.896154 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:35:40.896891 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:35:40.898739 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:35:40.899616 | orchestrator | 2025-06-22 19:35:40.900992 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2025-06-22 19:35:40.901594 | orchestrator | Sunday 22 June 2025 19:35:40 +0000 (0:00:00.722) 0:03:32.102 *********** 2025-06-22 19:35:41.970733 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1750619563.455761, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 19:35:41.971294 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1750619614.2049131, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 19:35:41.971336 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1750619615.5223289, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 19:35:41.971776 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1750619620.0159345, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 19:35:41.971809 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1750619613.1631339, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 19:35:41.971952 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1750619626.9322796, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 19:35:41.973425 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1750619619.3177352, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 19:35:41.974625 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1750619593.066667, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 19:35:41.974661 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1750619512.6951609, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 19:35:41.975107 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1750619510.3321, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 19:35:41.975442 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1750619505.2617855, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 19:35:41.976272 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1750619506.0674667, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 19:35:41.976457 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1750619511.7885735, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 19:35:41.976819 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1750619518.0929437, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 19:35:41.977902 | orchestrator | 2025-06-22 19:35:41.977970 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2025-06-22 19:35:41.978219 | orchestrator | Sunday 22 June 2025 19:35:41 +0000 (0:00:01.078) 0:03:33.181 *********** 2025-06-22 19:35:43.225899 | orchestrator | changed: [testbed-manager] 2025-06-22 19:35:43.226937 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:35:43.228462 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:35:43.229341 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:35:43.230728 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:35:43.231490 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:35:43.232605 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:35:43.233329 | orchestrator | 2025-06-22 19:35:43.234086 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2025-06-22 19:35:43.234726 | orchestrator | Sunday 22 June 2025 19:35:43 +0000 (0:00:01.254) 0:03:34.435 *********** 2025-06-22 19:35:44.495187 | orchestrator | changed: 
[testbed-manager] 2025-06-22 19:35:44.496949 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:35:44.498173 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:35:44.498790 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:35:44.499522 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:35:44.500162 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:35:44.501037 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:35:44.501672 | orchestrator | 2025-06-22 19:35:44.504742 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2025-06-22 19:35:44.505908 | orchestrator | Sunday 22 June 2025 19:35:44 +0000 (0:00:01.268) 0:03:35.704 *********** 2025-06-22 19:35:45.798010 | orchestrator | changed: [testbed-manager] 2025-06-22 19:35:45.798237 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:35:45.798899 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:35:45.800126 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:35:45.801458 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:35:45.802449 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:35:45.803239 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:35:45.803853 | orchestrator | 2025-06-22 19:35:45.804540 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2025-06-22 19:35:45.805352 | orchestrator | Sunday 22 June 2025 19:35:45 +0000 (0:00:01.303) 0:03:37.008 *********** 2025-06-22 19:35:45.896786 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:35:45.950896 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:35:45.986750 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:35:46.019491 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:35:46.094505 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:35:46.094700 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:35:46.095925 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:35:46.097572 | orchestrator | 2025-06-22 19:35:46.098294 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2025-06-22 19:35:46.100101 | orchestrator | Sunday 22 June 2025 19:35:46 +0000 (0:00:00.296) 0:03:37.305 *********** 2025-06-22 19:35:46.811614 | orchestrator | ok: [testbed-manager] 2025-06-22 19:35:46.812301 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:35:46.813468 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:35:46.814390 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:35:46.815322 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:35:46.816194 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:35:46.816985 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:35:46.818178 | orchestrator | 2025-06-22 19:35:46.818856 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2025-06-22 19:35:46.819646 | orchestrator | Sunday 22 June 2025 19:35:46 +0000 (0:00:00.716) 0:03:38.021 *********** 2025-06-22 19:35:47.197228 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:35:47.197447 | orchestrator | 2025-06-22 19:35:47.197705 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2025-06-22 19:35:47.198126 | orchestrator | Sunday 22 June 2025 19:35:47 +0000 
(0:00:00.386) 0:03:38.408 *********** 2025-06-22 19:35:55.857090 | orchestrator | ok: [testbed-manager] 2025-06-22 19:35:55.858900 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:35:55.860188 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:35:55.862351 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:35:55.862432 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:35:55.863025 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:35:55.863902 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:35:55.864689 | orchestrator | 2025-06-22 19:35:55.865441 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2025-06-22 19:35:55.865976 | orchestrator | Sunday 22 June 2025 19:35:55 +0000 (0:00:08.659) 0:03:47.067 *********** 2025-06-22 19:35:57.182951 | orchestrator | ok: [testbed-manager] 2025-06-22 19:35:57.184257 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:35:57.185217 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:35:57.186483 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:35:57.187860 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:35:57.189042 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:35:57.190000 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:35:57.190891 | orchestrator | 2025-06-22 19:35:57.191667 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2025-06-22 19:35:57.192427 | orchestrator | Sunday 22 June 2025 19:35:57 +0000 (0:00:01.326) 0:03:48.394 *********** 2025-06-22 19:35:58.241022 | orchestrator | ok: [testbed-manager] 2025-06-22 19:35:58.241124 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:35:58.242108 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:35:58.242752 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:35:58.244123 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:35:58.245054 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:35:58.245891 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:35:58.246781 | orchestrator | 2025-06-22 19:35:58.247654 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] ***** 2025-06-22 19:35:58.248416 | orchestrator | Sunday 22 June 2025 19:35:58 +0000 (0:00:01.056) 0:03:49.450 *********** 2025-06-22 19:35:58.717798 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:35:58.718707 | orchestrator | 2025-06-22 19:35:58.719706 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] ******************* 2025-06-22 19:35:58.723074 | orchestrator | Sunday 22 June 2025 19:35:58 +0000 (0:00:00.478) 0:03:49.929 *********** 2025-06-22 19:36:07.367479 | orchestrator | changed: [testbed-manager] 2025-06-22 19:36:07.368004 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:36:07.369321 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:36:07.370891 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:36:07.372375 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:36:07.373233 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:36:07.373964 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:36:07.374444 | orchestrator | 2025-06-22 19:36:07.375282 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] **************** 2025-06-22 19:36:07.375996 | 
orchestrator | Sunday 22 June 2025 19:36:07 +0000 (0:00:08.648) 0:03:58.578 *********** 2025-06-22 19:36:08.016129 | orchestrator | changed: [testbed-manager] 2025-06-22 19:36:08.016298 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:36:08.017283 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:36:08.018319 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:36:08.019494 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:36:08.020260 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:36:08.021394 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:36:08.022217 | orchestrator | 2025-06-22 19:36:08.023151 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] *********** 2025-06-22 19:36:08.024400 | orchestrator | Sunday 22 June 2025 19:36:08 +0000 (0:00:00.648) 0:03:59.227 *********** 2025-06-22 19:36:09.131500 | orchestrator | changed: [testbed-manager] 2025-06-22 19:36:09.132424 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:36:09.133595 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:36:09.134630 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:36:09.135187 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:36:09.135991 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:36:09.136801 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:36:09.137353 | orchestrator | 2025-06-22 19:36:09.138352 | orchestrator | TASK [osism.services.smartd : Manage smartd service] *************************** 2025-06-22 19:36:09.138738 | orchestrator | Sunday 22 June 2025 19:36:09 +0000 (0:00:01.115) 0:04:00.342 *********** 2025-06-22 19:36:10.186290 | orchestrator | changed: [testbed-manager] 2025-06-22 19:36:10.186398 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:36:10.187588 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:36:10.188100 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:36:10.188624 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:36:10.189083 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:36:10.189644 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:36:10.190352 | orchestrator | 2025-06-22 19:36:10.190741 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2025-06-22 19:36:10.191272 | orchestrator | Sunday 22 June 2025 19:36:10 +0000 (0:00:01.054) 0:04:01.397 *********** 2025-06-22 19:36:10.294133 | orchestrator | ok: [testbed-manager] 2025-06-22 19:36:10.326439 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:36:10.359427 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:36:10.393782 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:36:10.454907 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:36:10.457032 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:36:10.457218 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:36:10.458439 | orchestrator | 2025-06-22 19:36:10.459838 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2025-06-22 19:36:10.461141 | orchestrator | Sunday 22 June 2025 19:36:10 +0000 (0:00:00.268) 0:04:01.666 *********** 2025-06-22 19:36:10.577194 | orchestrator | ok: [testbed-manager] 2025-06-22 19:36:10.617879 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:36:10.658877 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:36:10.693473 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:36:10.760423 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:36:10.760654 | orchestrator | ok: 
[testbed-node-1] 2025-06-22 19:36:10.762076 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:36:10.763076 | orchestrator | 2025-06-22 19:36:10.764042 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2025-06-22 19:36:10.765052 | orchestrator | Sunday 22 June 2025 19:36:10 +0000 (0:00:00.304) 0:04:01.971 *********** 2025-06-22 19:36:10.874758 | orchestrator | ok: [testbed-manager] 2025-06-22 19:36:10.907027 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:36:10.940007 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:36:10.976277 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:36:11.059413 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:36:11.059504 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:36:11.059515 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:36:11.061935 | orchestrator | 2025-06-22 19:36:11.061957 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2025-06-22 19:36:11.062887 | orchestrator | Sunday 22 June 2025 19:36:11 +0000 (0:00:00.298) 0:04:02.269 *********** 2025-06-22 19:36:16.572962 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:36:16.573412 | orchestrator | ok: [testbed-manager] 2025-06-22 19:36:16.574479 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:36:16.575150 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:36:16.575705 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:36:16.576231 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:36:16.576999 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:36:16.577379 | orchestrator | 2025-06-22 19:36:16.578433 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2025-06-22 19:36:16.578649 | orchestrator | Sunday 22 June 2025 19:36:16 +0000 (0:00:05.515) 0:04:07.784 *********** 2025-06-22 19:36:16.967960 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:36:16.968178 | orchestrator | 2025-06-22 19:36:16.969746 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2025-06-22 19:36:16.969863 | orchestrator | Sunday 22 June 2025 19:36:16 +0000 (0:00:00.394) 0:04:08.179 *********** 2025-06-22 19:36:17.046717 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2025-06-22 19:36:17.046913 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2025-06-22 19:36:17.088995 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2025-06-22 19:36:17.090766 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:36:17.090797 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2025-06-22 19:36:17.147512 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2025-06-22 19:36:17.147944 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:36:17.148728 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2025-06-22 19:36:17.189936 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2025-06-22 19:36:17.191396 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:36:17.192170 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2025-06-22 19:36:17.193800 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2025-06-22 19:36:17.194846 | 
orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2025-06-22 19:36:17.229516 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:36:17.230000 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2025-06-22 19:36:17.231403 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2025-06-22 19:36:17.297974 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:36:17.298386 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:36:17.300395 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2025-06-22 19:36:17.301110 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2025-06-22 19:36:17.302658 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:36:17.303361 | orchestrator | 2025-06-22 19:36:17.304310 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2025-06-22 19:36:17.306840 | orchestrator | Sunday 22 June 2025 19:36:17 +0000 (0:00:00.330) 0:04:08.509 *********** 2025-06-22 19:36:17.716454 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:36:17.717139 | orchestrator | 2025-06-22 19:36:17.720083 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2025-06-22 19:36:17.720940 | orchestrator | Sunday 22 June 2025 19:36:17 +0000 (0:00:00.417) 0:04:08.926 *********** 2025-06-22 19:36:17.791341 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2025-06-22 19:36:17.839453 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:36:17.839690 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2025-06-22 19:36:17.839779 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2025-06-22 19:36:17.876832 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:36:17.920659 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:36:17.921125 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2025-06-22 19:36:17.956708 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:36:17.957499 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2025-06-22 19:36:18.046128 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2025-06-22 19:36:18.046800 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:36:18.047782 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:36:18.048955 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2025-06-22 19:36:18.049979 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:36:18.050916 | orchestrator | 2025-06-22 19:36:18.051673 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2025-06-22 19:36:18.052694 | orchestrator | Sunday 22 June 2025 19:36:18 +0000 (0:00:00.329) 0:04:09.255 *********** 2025-06-22 19:36:18.576651 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:36:18.576907 | orchestrator | 2025-06-22 19:36:18.577640 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] ********************** 2025-06-22 
19:36:18.578291 | orchestrator | Sunday 22 June 2025 19:36:18 +0000 (0:00:00.532) 0:04:09.788 *********** 2025-06-22 19:36:53.515311 | orchestrator | changed: [testbed-manager] 2025-06-22 19:36:53.517049 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:36:53.518748 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:36:53.520016 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:36:53.522343 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:36:53.522373 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:36:53.523020 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:36:53.523885 | orchestrator | 2025-06-22 19:36:53.524362 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2025-06-22 19:36:53.524995 | orchestrator | Sunday 22 June 2025 19:36:53 +0000 (0:00:34.935) 0:04:44.723 *********** 2025-06-22 19:37:01.585501 | orchestrator | changed: [testbed-manager] 2025-06-22 19:37:01.585696 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:37:01.585717 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:37:01.585799 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:37:01.587644 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:37:01.588130 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:37:01.588714 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:37:01.589204 | orchestrator | 2025-06-22 19:37:01.589636 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2025-06-22 19:37:01.590337 | orchestrator | Sunday 22 June 2025 19:37:01 +0000 (0:00:08.064) 0:04:52.788 *********** 2025-06-22 19:37:09.036478 | orchestrator | changed: [testbed-manager] 2025-06-22 19:37:09.036710 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:37:09.038512 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:37:09.039813 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:37:09.040224 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:37:09.040726 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:37:09.042967 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:37:09.043390 | orchestrator | 2025-06-22 19:37:09.043976 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] ********** 2025-06-22 19:37:09.044939 | orchestrator | Sunday 22 June 2025 19:37:09 +0000 (0:00:07.458) 0:05:00.246 *********** 2025-06-22 19:37:10.710730 | orchestrator | ok: [testbed-manager] 2025-06-22 19:37:10.711812 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:37:10.712754 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:37:10.713983 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:37:10.715402 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:37:10.716679 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:37:10.717060 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:37:10.717935 | orchestrator | 2025-06-22 19:37:10.718889 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] *** 2025-06-22 19:37:10.719391 | orchestrator | Sunday 22 June 2025 19:37:10 +0000 (0:00:01.674) 0:05:01.921 *********** 2025-06-22 19:37:16.622650 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:37:16.623269 | orchestrator | changed: [testbed-manager] 2025-06-22 19:37:16.626211 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:37:16.628276 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:37:16.629366 | orchestrator | changed: [testbed-node-2] 2025-06-22 
19:37:16.631060 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:37:16.631111 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:37:16.631377 | orchestrator | 2025-06-22 19:37:16.632316 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] ************************* 2025-06-22 19:37:16.633011 | orchestrator | Sunday 22 June 2025 19:37:16 +0000 (0:00:05.909) 0:05:07.831 *********** 2025-06-22 19:37:17.089187 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:37:17.090787 | orchestrator | 2025-06-22 19:37:17.092024 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] ******* 2025-06-22 19:37:17.093301 | orchestrator | Sunday 22 June 2025 19:37:17 +0000 (0:00:00.468) 0:05:08.300 *********** 2025-06-22 19:37:17.879655 | orchestrator | changed: [testbed-manager] 2025-06-22 19:37:17.883256 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:37:17.883331 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:37:17.883347 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:37:17.883409 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:37:17.885373 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:37:17.885652 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:37:17.886278 | orchestrator | 2025-06-22 19:37:17.887289 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] ************************* 2025-06-22 19:37:17.887917 | orchestrator | Sunday 22 June 2025 19:37:17 +0000 (0:00:00.789) 0:05:09.089 *********** 2025-06-22 19:37:19.494950 | orchestrator | ok: [testbed-manager] 2025-06-22 19:37:19.495336 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:37:19.496187 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:37:19.497908 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:37:19.498282 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:37:19.499227 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:37:19.500538 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:37:19.501774 | orchestrator | 2025-06-22 19:37:19.502435 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] **************************** 2025-06-22 19:37:19.503340 | orchestrator | Sunday 22 June 2025 19:37:19 +0000 (0:00:01.613) 0:05:10.703 *********** 2025-06-22 19:37:20.290407 | orchestrator | changed: [testbed-manager] 2025-06-22 19:37:20.292466 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:37:20.292509 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:37:20.292523 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:37:20.293629 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:37:20.294172 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:37:20.294808 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:37:20.296172 | orchestrator | 2025-06-22 19:37:20.296532 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] *********************** 2025-06-22 19:37:20.297268 | orchestrator | Sunday 22 June 2025 19:37:20 +0000 (0:00:00.798) 0:05:11.501 *********** 2025-06-22 19:37:20.390184 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:37:20.436471 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:37:20.469160 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:37:20.501841 | orchestrator | skipping: [testbed-node-5] 
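Note: the osism.commons.timezone tasks above follow the usual tzdata-plus-timezone-module pattern; the /etc/adjtime tasks are typically only relevant when a hardware clock is managed, which is presumably why they are skipped in this run. A minimal sketch of equivalent tasks, assuming ansible.builtin.apt and community.general.timezone (the actual role content is not part of this log):

- name: Install tzdata package
  ansible.builtin.apt:
    name: tzdata
    state: present

- name: Set timezone to UTC
  community.general.timezone:
    name: UTC

# Illustrative only; the corresponding task is skipped in this run.
- name: Create /etc/adjtime file
  ansible.builtin.copy:
    dest: /etc/adjtime
    content: "0.0 0 0.0\n0\nUTC\n"
    force: false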
2025-06-22 19:37:20.574073 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:37:20.574411 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:37:20.575307 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:37:20.576803 | orchestrator | 2025-06-22 19:37:20.577491 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] ********************* 2025-06-22 19:37:20.578137 | orchestrator | Sunday 22 June 2025 19:37:20 +0000 (0:00:00.283) 0:05:11.785 *********** 2025-06-22 19:37:20.649684 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:37:20.682745 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:37:20.713918 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:37:20.745915 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:37:20.778117 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:37:20.947670 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:37:20.949057 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:37:20.949975 | orchestrator | 2025-06-22 19:37:20.950911 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ****** 2025-06-22 19:37:20.951531 | orchestrator | Sunday 22 June 2025 19:37:20 +0000 (0:00:00.374) 0:05:12.159 *********** 2025-06-22 19:37:21.053987 | orchestrator | ok: [testbed-manager] 2025-06-22 19:37:21.089308 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:37:21.121049 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:37:21.156008 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:37:21.229177 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:37:21.229921 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:37:21.231164 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:37:21.232129 | orchestrator | 2025-06-22 19:37:21.233100 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] **** 2025-06-22 19:37:21.234425 | orchestrator | Sunday 22 June 2025 19:37:21 +0000 (0:00:00.280) 0:05:12.440 *********** 2025-06-22 19:37:21.335757 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:37:21.373734 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:37:21.407352 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:37:21.444083 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:37:21.518527 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:37:21.519265 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:37:21.520625 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:37:21.521378 | orchestrator | 2025-06-22 19:37:21.521997 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] *** 2025-06-22 19:37:21.522831 | orchestrator | Sunday 22 June 2025 19:37:21 +0000 (0:00:00.290) 0:05:12.730 *********** 2025-06-22 19:37:21.616305 | orchestrator | ok: [testbed-manager] 2025-06-22 19:37:21.655328 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:37:21.706318 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:37:21.738476 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:37:21.815412 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:37:21.815862 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:37:21.816694 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:37:21.820010 | orchestrator | 2025-06-22 19:37:21.821310 | orchestrator | TASK [osism.services.docker : Print used docker version] *********************** 2025-06-22 19:37:21.822450 | orchestrator | Sunday 22 June 2025 19:37:21 +0000 (0:00:00.296) 0:05:13.027 *********** 
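Note: the task header above prints the Docker version pinned for this deployment; the preceding variable tasks apparently set a default only when no value was supplied already (the docker_version default task is skipped on every host), and the debug output below reports 5:27.5.1 everywhere. A minimal sketch of this default-then-debug pattern, assuming ansible.builtin.set_fact and ansible.builtin.debug (the osism.services.docker role itself is not shown in this log):

- name: Set docker_version variable to default value
  ansible.builtin.set_fact:
    docker_version: "5:27.5.1"   # illustrative default; the value 5:27.5.1 is taken from the log output below
  when: docker_version is not defined

- name: Print used docker version
  ansible.builtin.debug:
    var: docker_version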
2025-06-22 19:37:21.921281 | orchestrator | ok: [testbed-manager] =>  2025-06-22 19:37:21.921474 | orchestrator |  docker_version: 5:27.5.1 2025-06-22 19:37:21.958836 | orchestrator | ok: [testbed-node-3] =>  2025-06-22 19:37:21.960668 | orchestrator |  docker_version: 5:27.5.1 2025-06-22 19:37:21.995319 | orchestrator | ok: [testbed-node-4] =>  2025-06-22 19:37:21.996115 | orchestrator |  docker_version: 5:27.5.1 2025-06-22 19:37:22.025436 | orchestrator | ok: [testbed-node-5] =>  2025-06-22 19:37:22.026310 | orchestrator |  docker_version: 5:27.5.1 2025-06-22 19:37:22.113450 | orchestrator | ok: [testbed-node-0] =>  2025-06-22 19:37:22.114140 | orchestrator |  docker_version: 5:27.5.1 2025-06-22 19:37:22.117466 | orchestrator | ok: [testbed-node-1] =>  2025-06-22 19:37:22.117507 | orchestrator |  docker_version: 5:27.5.1 2025-06-22 19:37:22.118252 | orchestrator | ok: [testbed-node-2] =>  2025-06-22 19:37:22.118989 | orchestrator |  docker_version: 5:27.5.1 2025-06-22 19:37:22.119960 | orchestrator | 2025-06-22 19:37:22.120781 | orchestrator | TASK [osism.services.docker : Print used docker cli version] ******************* 2025-06-22 19:37:22.121756 | orchestrator | Sunday 22 June 2025 19:37:22 +0000 (0:00:00.298) 0:05:13.325 *********** 2025-06-22 19:37:22.234756 | orchestrator | ok: [testbed-manager] =>  2025-06-22 19:37:22.235358 | orchestrator |  docker_cli_version: 5:27.5.1 2025-06-22 19:37:22.383065 | orchestrator | ok: [testbed-node-3] =>  2025-06-22 19:37:22.384146 | orchestrator |  docker_cli_version: 5:27.5.1 2025-06-22 19:37:22.419201 | orchestrator | ok: [testbed-node-4] =>  2025-06-22 19:37:22.421728 | orchestrator |  docker_cli_version: 5:27.5.1 2025-06-22 19:37:22.456576 | orchestrator | ok: [testbed-node-5] =>  2025-06-22 19:37:22.456711 | orchestrator |  docker_cli_version: 5:27.5.1 2025-06-22 19:37:22.527382 | orchestrator | ok: [testbed-node-0] =>  2025-06-22 19:37:22.528495 | orchestrator |  docker_cli_version: 5:27.5.1 2025-06-22 19:37:22.529447 | orchestrator | ok: [testbed-node-1] =>  2025-06-22 19:37:22.530831 | orchestrator |  docker_cli_version: 5:27.5.1 2025-06-22 19:37:22.532708 | orchestrator | ok: [testbed-node-2] =>  2025-06-22 19:37:22.533244 | orchestrator |  docker_cli_version: 5:27.5.1 2025-06-22 19:37:22.534288 | orchestrator | 2025-06-22 19:37:22.535672 | orchestrator | TASK [osism.services.docker : Include block storage tasks] ********************* 2025-06-22 19:37:22.537385 | orchestrator | Sunday 22 June 2025 19:37:22 +0000 (0:00:00.413) 0:05:13.739 *********** 2025-06-22 19:37:22.599287 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:37:22.631300 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:37:22.698531 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:37:22.742705 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:37:22.809851 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:37:22.810921 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:37:22.813055 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:37:22.814068 | orchestrator | 2025-06-22 19:37:22.815130 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] ********************** 2025-06-22 19:37:22.815695 | orchestrator | Sunday 22 June 2025 19:37:22 +0000 (0:00:00.283) 0:05:14.022 *********** 2025-06-22 19:37:22.888873 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:37:22.923510 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:37:22.956847 | orchestrator | skipping: [testbed-node-4] 
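Note: the install tasks that follow mirror the standard Docker CE installation flow on Ubuntu 24.04: add the upstream APT repository and its GPG key, pin the packages to the version printed above, then install containerd, docker-ce-cli and docker-ce. A condensed sketch under those assumptions; the repository codename, key path, pin file and package names are illustrative and not taken from the role:

# Assumed key location and upstream URL; the role may manage the key differently.
- name: Add Docker repository GPG key
  ansible.builtin.get_url:
    url: https://download.docker.com/linux/ubuntu/gpg
    dest: /etc/apt/trusted.gpg.d/docker.asc
    mode: "0644"

- name: Add Docker APT repository
  ansible.builtin.apt_repository:
    repo: "deb https://download.docker.com/linux/ubuntu noble stable"   # noble = Ubuntu 24.04
    state: present
    update_cache: true

# Pin to the version reported in the log so later upgrades do not drift.
- name: Pin docker packages
  ansible.builtin.copy:
    dest: /etc/apt/preferences.d/docker-ce   # assumed pin file name
    content: |
      Package: docker-ce docker-ce-cli
      Pin: version 5:27.5.1*
      Pin-Priority: 1001

- name: Install containerd, docker-ce-cli and docker-ce
  ansible.builtin.apt:
    name:
      - containerd.io
      - "docker-ce-cli=5:27.5.1*"
      - "docker-ce=5:27.5.1*"
    state: present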
2025-06-22 19:37:23.023060 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:37:23.073432 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:37:23.074128 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:37:23.075405 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:37:23.076453 | orchestrator | 2025-06-22 19:37:23.077430 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2025-06-22 19:37:23.078386 | orchestrator | Sunday 22 June 2025 19:37:23 +0000 (0:00:00.263) 0:05:14.285 *********** 2025-06-22 19:37:23.452448 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:37:23.452869 | orchestrator | 2025-06-22 19:37:23.454232 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2025-06-22 19:37:23.454940 | orchestrator | Sunday 22 June 2025 19:37:23 +0000 (0:00:00.377) 0:05:14.663 *********** 2025-06-22 19:37:24.294521 | orchestrator | ok: [testbed-manager] 2025-06-22 19:37:24.295853 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:37:24.296881 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:37:24.298447 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:37:24.298711 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:37:24.299648 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:37:24.300683 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:37:24.301414 | orchestrator | 2025-06-22 19:37:24.301992 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2025-06-22 19:37:24.302636 | orchestrator | Sunday 22 June 2025 19:37:24 +0000 (0:00:00.840) 0:05:15.504 *********** 2025-06-22 19:37:27.176989 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:37:27.177887 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:37:27.179907 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:37:27.181147 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:37:27.181943 | orchestrator | ok: [testbed-manager] 2025-06-22 19:37:27.182932 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:37:27.183840 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:37:27.184164 | orchestrator | 2025-06-22 19:37:27.185664 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2025-06-22 19:37:27.186420 | orchestrator | Sunday 22 June 2025 19:37:27 +0000 (0:00:02.883) 0:05:18.387 *********** 2025-06-22 19:37:27.248083 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2025-06-22 19:37:27.336928 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2025-06-22 19:37:27.337435 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2025-06-22 19:37:27.338306 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2025-06-22 19:37:27.338862 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2025-06-22 19:37:27.339684 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2025-06-22 19:37:27.408413 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:37:27.408876 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2025-06-22 19:37:27.409141 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2025-06-22 19:37:27.410065 | orchestrator | skipping: [testbed-node-4] => 
(item=docker-engine)  2025-06-22 19:37:27.617112 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:37:27.617287 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2025-06-22 19:37:27.618282 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2025-06-22 19:37:27.620070 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2025-06-22 19:37:27.703168 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:37:27.704948 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2025-06-22 19:37:27.707229 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2025-06-22 19:37:27.707974 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2025-06-22 19:37:27.777118 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:37:27.777212 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2025-06-22 19:37:27.777226 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2025-06-22 19:37:27.919459 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:37:27.919601 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2025-06-22 19:37:27.922912 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:37:27.922980 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2025-06-22 19:37:27.924702 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2025-06-22 19:37:27.926185 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2025-06-22 19:37:27.927810 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:37:27.928116 | orchestrator | 2025-06-22 19:37:27.930006 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2025-06-22 19:37:27.930084 | orchestrator | Sunday 22 June 2025 19:37:27 +0000 (0:00:00.740) 0:05:19.127 *********** 2025-06-22 19:37:34.278282 | orchestrator | ok: [testbed-manager] 2025-06-22 19:37:34.278642 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:37:34.279379 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:37:34.280090 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:37:34.280478 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:37:34.282476 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:37:34.282790 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:37:34.283834 | orchestrator | 2025-06-22 19:37:34.284416 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2025-06-22 19:37:34.285289 | orchestrator | Sunday 22 June 2025 19:37:34 +0000 (0:00:06.359) 0:05:25.487 *********** 2025-06-22 19:37:35.409158 | orchestrator | ok: [testbed-manager] 2025-06-22 19:37:35.409258 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:37:35.409292 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:37:35.409304 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:37:35.409375 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:37:35.409841 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:37:35.410365 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:37:35.410904 | orchestrator | 2025-06-22 19:37:35.411312 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2025-06-22 19:37:35.411874 | orchestrator | Sunday 22 June 2025 19:37:35 +0000 (0:00:01.127) 0:05:26.615 *********** 2025-06-22 19:37:42.872092 | orchestrator | ok: [testbed-manager] 2025-06-22 19:37:42.872291 | orchestrator | changed: [testbed-node-4] 2025-06-22 
19:37:42.873356 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:37:42.875041 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:37:42.876719 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:37:42.877594 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:37:42.878401 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:37:42.879403 | orchestrator | 2025-06-22 19:37:42.880166 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2025-06-22 19:37:42.880859 | orchestrator | Sunday 22 June 2025 19:37:42 +0000 (0:00:07.466) 0:05:34.082 *********** 2025-06-22 19:37:46.034858 | orchestrator | changed: [testbed-manager] 2025-06-22 19:37:46.036013 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:37:46.037318 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:37:46.038784 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:37:46.039516 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:37:46.040603 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:37:46.041078 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:37:46.041813 | orchestrator | 2025-06-22 19:37:46.042636 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2025-06-22 19:37:46.043642 | orchestrator | Sunday 22 June 2025 19:37:46 +0000 (0:00:03.161) 0:05:37.243 *********** 2025-06-22 19:37:47.558081 | orchestrator | ok: [testbed-manager] 2025-06-22 19:37:47.561365 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:37:47.561437 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:37:47.561914 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:37:47.562802 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:37:47.563631 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:37:47.564250 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:37:47.565089 | orchestrator | 2025-06-22 19:37:47.565818 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2025-06-22 19:37:47.566257 | orchestrator | Sunday 22 June 2025 19:37:47 +0000 (0:00:01.523) 0:05:38.767 *********** 2025-06-22 19:37:48.858396 | orchestrator | ok: [testbed-manager] 2025-06-22 19:37:48.858506 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:37:48.859043 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:37:48.860065 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:37:48.861234 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:37:48.862337 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:37:48.862474 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:37:48.863516 | orchestrator | 2025-06-22 19:37:48.864714 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2025-06-22 19:37:48.865584 | orchestrator | Sunday 22 June 2025 19:37:48 +0000 (0:00:01.302) 0:05:40.069 *********** 2025-06-22 19:37:49.067812 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:37:49.138175 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:37:49.206639 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:37:49.275103 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:37:49.466495 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:37:49.466763 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:37:49.466879 | orchestrator | changed: [testbed-manager] 2025-06-22 19:37:49.467610 | orchestrator | 2025-06-22 19:37:49.468308 | orchestrator | TASK 
[osism.services.docker : Install containerd package] ********************** 2025-06-22 19:37:49.468810 | orchestrator | Sunday 22 June 2025 19:37:49 +0000 (0:00:00.607) 0:05:40.676 *********** 2025-06-22 19:37:59.748439 | orchestrator | ok: [testbed-manager] 2025-06-22 19:37:59.748657 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:37:59.748974 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:37:59.750098 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:37:59.751149 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:37:59.752406 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:37:59.753280 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:37:59.753867 | orchestrator | 2025-06-22 19:37:59.754652 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2025-06-22 19:37:59.755715 | orchestrator | Sunday 22 June 2025 19:37:59 +0000 (0:00:10.277) 0:05:50.954 *********** 2025-06-22 19:38:00.648457 | orchestrator | changed: [testbed-manager] 2025-06-22 19:38:00.649880 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:38:00.650774 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:38:00.652261 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:38:00.653368 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:38:00.653801 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:38:00.655816 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:38:00.656258 | orchestrator | 2025-06-22 19:38:00.657213 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2025-06-22 19:38:00.657617 | orchestrator | Sunday 22 June 2025 19:38:00 +0000 (0:00:00.905) 0:05:51.860 *********** 2025-06-22 19:38:10.140015 | orchestrator | ok: [testbed-manager] 2025-06-22 19:38:10.140140 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:38:10.140240 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:38:10.140985 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:38:10.144482 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:38:10.144780 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:38:10.145143 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:38:10.145400 | orchestrator | 2025-06-22 19:38:10.145777 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2025-06-22 19:38:10.146937 | orchestrator | Sunday 22 June 2025 19:38:10 +0000 (0:00:09.489) 0:06:01.349 *********** 2025-06-22 19:38:20.960070 | orchestrator | ok: [testbed-manager] 2025-06-22 19:38:20.960193 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:38:20.960211 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:38:20.960223 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:38:20.960300 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:38:20.961952 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:38:20.961975 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:38:20.962168 | orchestrator | 2025-06-22 19:38:20.962536 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2025-06-22 19:38:20.962979 | orchestrator | Sunday 22 June 2025 19:38:20 +0000 (0:00:10.816) 0:06:12.165 *********** 2025-06-22 19:38:21.314532 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2025-06-22 19:38:22.155344 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2025-06-22 19:38:22.155605 | orchestrator | ok: [testbed-node-4] => 
(item=python3-docker) 2025-06-22 19:38:22.157199 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2025-06-22 19:38:22.157792 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2025-06-22 19:38:22.158843 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2025-06-22 19:38:22.159893 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2025-06-22 19:38:22.161355 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2025-06-22 19:38:22.162064 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2025-06-22 19:38:22.163133 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2025-06-22 19:38:22.163282 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2025-06-22 19:38:22.164178 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2025-06-22 19:38:22.164303 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2025-06-22 19:38:22.164722 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2025-06-22 19:38:22.165109 | orchestrator | 2025-06-22 19:38:22.165624 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ****************** 2025-06-22 19:38:22.166081 | orchestrator | Sunday 22 June 2025 19:38:22 +0000 (0:00:01.198) 0:06:13.364 *********** 2025-06-22 19:38:22.279395 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:38:22.341104 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:38:22.406669 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:38:22.466616 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:38:22.527035 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:38:22.642513 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:38:22.643248 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:38:22.643918 | orchestrator | 2025-06-22 19:38:22.644661 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2025-06-22 19:38:22.647765 | orchestrator | Sunday 22 June 2025 19:38:22 +0000 (0:00:00.490) 0:06:13.854 *********** 2025-06-22 19:38:26.468157 | orchestrator | ok: [testbed-manager] 2025-06-22 19:38:26.470219 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:38:26.470276 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:38:26.472672 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:38:26.473722 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:38:26.474933 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:38:26.476123 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:38:26.476875 | orchestrator | 2025-06-22 19:38:26.477602 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2025-06-22 19:38:26.478417 | orchestrator | Sunday 22 June 2025 19:38:26 +0000 (0:00:03.821) 0:06:17.676 *********** 2025-06-22 19:38:26.597457 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:38:26.661447 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:38:26.726632 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:38:26.795062 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:38:26.859721 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:38:26.963065 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:38:26.963678 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:38:26.964865 | orchestrator | 2025-06-22 19:38:26.965209 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings 
from pip)] *** 2025-06-22 19:38:26.966196 | orchestrator | Sunday 22 June 2025 19:38:26 +0000 (0:00:00.495) 0:06:18.171 *********** 2025-06-22 19:38:27.052199 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)  2025-06-22 19:38:27.052723 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2025-06-22 19:38:27.121662 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:38:27.121828 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2025-06-22 19:38:27.122435 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2025-06-22 19:38:27.190123 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:38:27.190308 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2025-06-22 19:38:27.190913 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2025-06-22 19:38:27.266179 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:38:27.269042 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2025-06-22 19:38:27.269097 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2025-06-22 19:38:27.329629 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:38:27.330437 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2025-06-22 19:38:27.333982 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2025-06-22 19:38:27.398103 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:38:27.399229 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2025-06-22 19:38:27.399742 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2025-06-22 19:38:27.518379 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:38:27.518510 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2025-06-22 19:38:27.520898 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2025-06-22 19:38:27.521710 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:38:27.522417 | orchestrator | 2025-06-22 19:38:27.523664 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2025-06-22 19:38:27.524830 | orchestrator | Sunday 22 June 2025 19:38:27 +0000 (0:00:00.556) 0:06:18.728 *********** 2025-06-22 19:38:27.647383 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:38:27.718882 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:38:27.782644 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:38:27.843671 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:38:27.911391 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:38:28.016730 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:38:28.016836 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:38:28.017406 | orchestrator | 2025-06-22 19:38:28.018109 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2025-06-22 19:38:28.018780 | orchestrator | Sunday 22 June 2025 19:38:28 +0000 (0:00:00.498) 0:06:19.226 *********** 2025-06-22 19:38:28.164522 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:38:28.228835 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:38:28.293035 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:38:28.383681 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:38:28.445967 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:38:28.575228 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:38:28.575679 | orchestrator | 
skipping: [testbed-node-2] 2025-06-22 19:38:28.579588 | orchestrator | 2025-06-22 19:38:28.579865 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2025-06-22 19:38:28.581030 | orchestrator | Sunday 22 June 2025 19:38:28 +0000 (0:00:00.558) 0:06:19.784 *********** 2025-06-22 19:38:28.716137 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:38:28.780373 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:38:29.025551 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:38:29.096820 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:38:29.157527 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:38:29.269901 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:38:29.270172 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:38:29.271387 | orchestrator | 2025-06-22 19:38:29.272036 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2025-06-22 19:38:29.272796 | orchestrator | Sunday 22 June 2025 19:38:29 +0000 (0:00:00.695) 0:06:20.480 *********** 2025-06-22 19:38:30.892286 | orchestrator | ok: [testbed-manager] 2025-06-22 19:38:30.893310 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:38:30.896173 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:38:30.896338 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:38:30.898127 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:38:30.898853 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:38:30.899598 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:38:30.900653 | orchestrator | 2025-06-22 19:38:30.901953 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2025-06-22 19:38:30.903054 | orchestrator | Sunday 22 June 2025 19:38:30 +0000 (0:00:01.620) 0:06:22.101 *********** 2025-06-22 19:38:31.745325 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:38:31.746189 | orchestrator | 2025-06-22 19:38:31.747120 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2025-06-22 19:38:31.747633 | orchestrator | Sunday 22 June 2025 19:38:31 +0000 (0:00:00.854) 0:06:22.955 *********** 2025-06-22 19:38:32.146360 | orchestrator | ok: [testbed-manager] 2025-06-22 19:38:32.572831 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:38:32.574309 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:38:32.575113 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:38:32.575964 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:38:32.576696 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:38:32.577638 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:38:32.578515 | orchestrator | 2025-06-22 19:38:32.579520 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2025-06-22 19:38:32.579579 | orchestrator | Sunday 22 June 2025 19:38:32 +0000 (0:00:00.826) 0:06:23.782 *********** 2025-06-22 19:38:32.937771 | orchestrator | ok: [testbed-manager] 2025-06-22 19:38:33.060241 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:38:33.485167 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:38:33.488711 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:38:33.488932 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:38:33.490345 | orchestrator | 
changed: [testbed-node-1] 2025-06-22 19:38:33.491199 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:38:33.491944 | orchestrator | 2025-06-22 19:38:33.492633 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2025-06-22 19:38:33.493286 | orchestrator | Sunday 22 June 2025 19:38:33 +0000 (0:00:00.915) 0:06:24.697 *********** 2025-06-22 19:38:34.776687 | orchestrator | ok: [testbed-manager] 2025-06-22 19:38:34.776778 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:38:34.777396 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:38:34.778759 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:38:34.780391 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:38:34.781232 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:38:34.781778 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:38:34.782662 | orchestrator | 2025-06-22 19:38:34.784612 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] *** 2025-06-22 19:38:34.784681 | orchestrator | Sunday 22 June 2025 19:38:34 +0000 (0:00:01.289) 0:06:25.986 *********** 2025-06-22 19:38:34.885913 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:38:36.108012 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:38:36.108966 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:38:36.110639 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:38:36.111293 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:38:36.111813 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:38:36.113005 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:38:36.114594 | orchestrator | 2025-06-22 19:38:36.115357 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2025-06-22 19:38:36.116036 | orchestrator | Sunday 22 June 2025 19:38:36 +0000 (0:00:01.331) 0:06:27.318 *********** 2025-06-22 19:38:37.326960 | orchestrator | ok: [testbed-manager] 2025-06-22 19:38:37.328301 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:38:37.328522 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:38:37.329897 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:38:37.330508 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:38:37.331434 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:38:37.332669 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:38:37.333457 | orchestrator | 2025-06-22 19:38:37.334001 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2025-06-22 19:38:37.334705 | orchestrator | Sunday 22 June 2025 19:38:37 +0000 (0:00:01.219) 0:06:28.537 *********** 2025-06-22 19:38:38.849387 | orchestrator | changed: [testbed-manager] 2025-06-22 19:38:38.851208 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:38:38.851664 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:38:38.852687 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:38:38.853791 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:38:38.854835 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:38:38.855965 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:38:38.857123 | orchestrator | 2025-06-22 19:38:38.857918 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2025-06-22 19:38:38.858894 | orchestrator | Sunday 22 June 2025 19:38:38 +0000 (0:00:01.520) 0:06:30.058 *********** 2025-06-22 19:38:39.705916 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:38:39.707033 | orchestrator | 2025-06-22 19:38:39.708285 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2025-06-22 19:38:39.709219 | orchestrator | Sunday 22 June 2025 19:38:39 +0000 (0:00:00.854) 0:06:30.913 *********** 2025-06-22 19:38:41.051095 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:38:41.052964 | orchestrator | ok: [testbed-manager] 2025-06-22 19:38:41.054798 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:38:41.055949 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:38:41.057145 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:38:41.058353 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:38:41.058606 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:38:41.059254 | orchestrator | 2025-06-22 19:38:41.059933 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2025-06-22 19:38:41.060804 | orchestrator | Sunday 22 June 2025 19:38:41 +0000 (0:00:01.348) 0:06:32.261 *********** 2025-06-22 19:38:42.171237 | orchestrator | ok: [testbed-manager] 2025-06-22 19:38:42.171429 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:38:42.172716 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:38:42.173133 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:38:42.174926 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:38:42.176649 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:38:42.177645 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:38:42.178127 | orchestrator | 2025-06-22 19:38:42.179136 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2025-06-22 19:38:42.179750 | orchestrator | Sunday 22 June 2025 19:38:42 +0000 (0:00:01.119) 0:06:33.380 *********** 2025-06-22 19:38:42.887102 | orchestrator | ok: [testbed-manager] 2025-06-22 19:38:43.582475 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:38:43.582672 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:38:43.583511 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:38:43.583682 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:38:43.584107 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:38:43.584711 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:38:43.585777 | orchestrator | 2025-06-22 19:38:43.586101 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2025-06-22 19:38:43.586779 | orchestrator | Sunday 22 June 2025 19:38:43 +0000 (0:00:01.412) 0:06:34.793 *********** 2025-06-22 19:38:44.708989 | orchestrator | ok: [testbed-manager] 2025-06-22 19:38:44.709119 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:38:44.709965 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:38:44.711040 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:38:44.712813 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:38:44.714368 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:38:44.715434 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:38:44.716073 | orchestrator | 2025-06-22 19:38:44.717092 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2025-06-22 19:38:44.717765 | orchestrator | Sunday 22 June 2025 19:38:44 +0000 (0:00:01.124) 0:06:35.918 *********** 2025-06-22 19:38:45.890615 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:38:45.890798 | orchestrator | 2025-06-22 19:38:45.891429 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-06-22 19:38:45.891982 | orchestrator | Sunday 22 June 2025 19:38:45 +0000 (0:00:00.893) 0:06:36.811 *********** 2025-06-22 19:38:45.893128 | orchestrator | 2025-06-22 19:38:45.893541 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-06-22 19:38:45.894800 | orchestrator | Sunday 22 June 2025 19:38:45 +0000 (0:00:00.041) 0:06:36.853 *********** 2025-06-22 19:38:45.895251 | orchestrator | 2025-06-22 19:38:45.896251 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-06-22 19:38:45.897950 | orchestrator | Sunday 22 June 2025 19:38:45 +0000 (0:00:00.046) 0:06:36.899 *********** 2025-06-22 19:38:45.898417 | orchestrator | 2025-06-22 19:38:45.898795 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-06-22 19:38:45.899170 | orchestrator | Sunday 22 June 2025 19:38:45 +0000 (0:00:00.038) 0:06:36.938 *********** 2025-06-22 19:38:45.899675 | orchestrator | 2025-06-22 19:38:45.900550 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-06-22 19:38:45.900934 | orchestrator | Sunday 22 June 2025 19:38:45 +0000 (0:00:00.037) 0:06:36.976 *********** 2025-06-22 19:38:45.901178 | orchestrator | 2025-06-22 19:38:45.902901 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-06-22 19:38:45.903051 | orchestrator | Sunday 22 June 2025 19:38:45 +0000 (0:00:00.044) 0:06:37.021 *********** 2025-06-22 19:38:45.903306 | orchestrator | 2025-06-22 19:38:45.905600 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-06-22 19:38:45.906593 | orchestrator | Sunday 22 June 2025 19:38:45 +0000 (0:00:00.039) 0:06:37.060 *********** 2025-06-22 19:38:45.907198 | orchestrator | 2025-06-22 19:38:45.908312 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-06-22 19:38:45.908957 | orchestrator | Sunday 22 June 2025 19:38:45 +0000 (0:00:00.038) 0:06:37.098 *********** 2025-06-22 19:38:47.221155 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:38:47.221290 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:38:47.222115 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:38:47.222380 | orchestrator | 2025-06-22 19:38:47.223052 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2025-06-22 19:38:47.223683 | orchestrator | Sunday 22 June 2025 19:38:47 +0000 (0:00:01.332) 0:06:38.431 *********** 2025-06-22 19:38:48.532810 | orchestrator | changed: [testbed-manager] 2025-06-22 19:38:48.534670 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:38:48.537473 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:38:48.538595 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:38:48.540434 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:38:48.541537 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:38:48.542785 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:38:48.543130 | orchestrator | 2025-06-22 19:38:48.544156 | 
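The docker tasks above follow a common pattern: write a systemd drop-in and a daemon.json, and only reload/restart the engine through handlers when a file actually changed (the docker restart handler runs further below). As a rough illustration only, not the actual osism.services.docker role, a minimal Ansible sketch of that pattern could look like the following; the drop-in path, the daemon.json keys and the group name docker_hosts are assumptions:

---
# Illustrative sketch only; paths, option values and handler names are assumptions,
# not the real osism.services.docker implementation.
- hosts: docker_hosts
  become: true
  tasks:
    - name: Create systemd overlay (drop-in) directory
      ansible.builtin.file:
        path: /etc/systemd/system/docker.service.d   # assumed drop-in location
        state: directory
        mode: "0755"

    - name: Copy daemon.json configuration file
      ansible.builtin.copy:
        dest: /etc/docker/daemon.json
        mode: "0644"
        # Example engine options; the real role renders these from its own variables.
        content: |
          {
            "log-driver": "json-file",
            "log-opts": { "max-size": "10m", "max-file": "3" },
            "live-restore": true
          }
      notify: Restart docker service

  handlers:
    - name: Restart docker service
      ansible.builtin.service:
        name: docker
        state: restarted

Because the restart is handler-driven, hosts whose configuration is already up to date (here testbed-manager) skip the restart, which matches the skipping/changed split visible in the handler output below.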
orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2025-06-22 19:38:48.545081 | orchestrator | Sunday 22 June 2025 19:38:48 +0000 (0:00:01.311) 0:06:39.743 *********** 2025-06-22 19:38:49.655811 | orchestrator | changed: [testbed-manager] 2025-06-22 19:38:49.656967 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:38:49.659486 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:38:49.659514 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:38:49.661182 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:38:49.662228 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:38:49.663254 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:38:49.664042 | orchestrator | 2025-06-22 19:38:49.664682 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2025-06-22 19:38:49.665770 | orchestrator | Sunday 22 June 2025 19:38:49 +0000 (0:00:01.121) 0:06:40.864 *********** 2025-06-22 19:38:49.790644 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:38:51.947879 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:38:51.948978 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:38:51.950874 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:38:51.953477 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:38:51.953521 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:38:51.953780 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:38:51.954712 | orchestrator | 2025-06-22 19:38:51.955216 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2025-06-22 19:38:51.956173 | orchestrator | Sunday 22 June 2025 19:38:51 +0000 (0:00:02.293) 0:06:43.157 *********** 2025-06-22 19:38:52.045601 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:38:52.046476 | orchestrator | 2025-06-22 19:38:52.047761 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2025-06-22 19:38:52.051315 | orchestrator | Sunday 22 June 2025 19:38:52 +0000 (0:00:00.096) 0:06:43.254 *********** 2025-06-22 19:38:53.152405 | orchestrator | ok: [testbed-manager] 2025-06-22 19:38:53.152470 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:38:53.152865 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:38:53.153689 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:38:53.154441 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:38:53.155590 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:38:53.156418 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:38:53.157204 | orchestrator | 2025-06-22 19:38:53.157668 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2025-06-22 19:38:53.158315 | orchestrator | Sunday 22 June 2025 19:38:53 +0000 (0:00:01.107) 0:06:44.362 *********** 2025-06-22 19:38:53.399899 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:38:53.457813 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:38:53.581110 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:38:53.636868 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:38:53.740164 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:38:53.740814 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:38:53.741520 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:38:53.742509 | orchestrator | 2025-06-22 19:38:53.743224 | orchestrator | TASK [osism.services.docker : Include facts tasks] 
***************************** 2025-06-22 19:38:53.744195 | orchestrator | Sunday 22 June 2025 19:38:53 +0000 (0:00:00.591) 0:06:44.953 *********** 2025-06-22 19:38:54.537080 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:38:54.540944 | orchestrator | 2025-06-22 19:38:54.541669 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2025-06-22 19:38:54.542777 | orchestrator | Sunday 22 June 2025 19:38:54 +0000 (0:00:00.793) 0:06:45.747 *********** 2025-06-22 19:38:55.314825 | orchestrator | ok: [testbed-manager] 2025-06-22 19:38:55.317913 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:38:55.317973 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:38:55.317987 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:38:55.318890 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:38:55.320035 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:38:55.320738 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:38:55.321464 | orchestrator | 2025-06-22 19:38:55.322477 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2025-06-22 19:38:55.323084 | orchestrator | Sunday 22 June 2025 19:38:55 +0000 (0:00:00.779) 0:06:46.526 *********** 2025-06-22 19:38:57.809945 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2025-06-22 19:38:57.811437 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2025-06-22 19:38:57.814167 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2025-06-22 19:38:57.818617 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2025-06-22 19:38:57.819704 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2025-06-22 19:38:57.820149 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2025-06-22 19:38:57.820457 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2025-06-22 19:38:57.821603 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2025-06-22 19:38:57.821764 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2025-06-22 19:38:57.822135 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2025-06-22 19:38:57.822790 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2025-06-22 19:38:57.823014 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2025-06-22 19:38:57.823471 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2025-06-22 19:38:57.823867 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2025-06-22 19:38:57.824494 | orchestrator | 2025-06-22 19:38:57.824804 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] ******* 2025-06-22 19:38:57.825344 | orchestrator | Sunday 22 June 2025 19:38:57 +0000 (0:00:02.493) 0:06:49.020 *********** 2025-06-22 19:38:57.924346 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:38:57.979337 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:38:58.036808 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:38:58.093014 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:38:58.147879 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:38:58.236729 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:38:58.237740 | orchestrator | skipping: 
[testbed-node-2] 2025-06-22 19:38:58.238705 | orchestrator | 2025-06-22 19:38:58.239538 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2025-06-22 19:38:58.240615 | orchestrator | Sunday 22 June 2025 19:38:58 +0000 (0:00:00.428) 0:06:49.449 *********** 2025-06-22 19:38:58.972020 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:38:58.976490 | orchestrator | 2025-06-22 19:38:58.976621 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] *** 2025-06-22 19:38:58.977250 | orchestrator | Sunday 22 June 2025 19:38:58 +0000 (0:00:00.729) 0:06:50.178 *********** 2025-06-22 19:38:59.429365 | orchestrator | ok: [testbed-manager] 2025-06-22 19:38:59.490539 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:38:59.931156 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:38:59.931794 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:38:59.932192 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:38:59.933630 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:38:59.933799 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:38:59.934279 | orchestrator | 2025-06-22 19:38:59.935097 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2025-06-22 19:38:59.935532 | orchestrator | Sunday 22 June 2025 19:38:59 +0000 (0:00:00.962) 0:06:51.141 *********** 2025-06-22 19:39:00.278899 | orchestrator | ok: [testbed-manager] 2025-06-22 19:39:00.663419 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:39:00.663510 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:39:00.664764 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:39:00.664842 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:39:00.665729 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:39:00.666503 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:39:00.667341 | orchestrator | 2025-06-22 19:39:00.668070 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2025-06-22 19:39:00.668544 | orchestrator | Sunday 22 June 2025 19:39:00 +0000 (0:00:00.732) 0:06:51.874 *********** 2025-06-22 19:39:00.778630 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:39:00.835666 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:39:00.892993 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:39:00.958818 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:39:01.040887 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:39:01.128892 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:39:01.129836 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:39:01.130228 | orchestrator | 2025-06-22 19:39:01.132257 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2025-06-22 19:39:01.132282 | orchestrator | Sunday 22 June 2025 19:39:01 +0000 (0:00:00.465) 0:06:52.339 *********** 2025-06-22 19:39:02.502910 | orchestrator | ok: [testbed-manager] 2025-06-22 19:39:02.503670 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:39:02.505417 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:39:02.506464 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:39:02.507473 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:39:02.507960 | orchestrator | ok: [testbed-node-1] 
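The "Add user to docker group" and "Log into private registry and force re-authorization" tasks a little further up map onto standard Ansible modules; the login step was skipped in this run, presumably because no registry credentials are configured. A hedged sketch under those assumptions, with a hypothetical operator user and registry (not the role's actual code):

---
# Illustrative sketch; the user name, registry URL and variable names are assumptions.
- hosts: docker_hosts
  become: true
  tasks:
    - name: Add user to docker group
      ansible.builtin.user:
        name: operator            # hypothetical operator user
        groups: docker
        append: true              # keep existing supplementary groups

    - name: Log into private registry and force re-authorization
      community.docker.docker_login:
        registry_url: registry.example.com   # hypothetical registry
        username: "{{ registry_username }}"
        password: "{{ registry_password }}"
        reauthorize: true
      when: registry_username is defined     # skipped when no credentials are set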
2025-06-22 19:39:02.508911 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:39:02.508970 | orchestrator | 2025-06-22 19:39:02.509280 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2025-06-22 19:39:02.509884 | orchestrator | Sunday 22 June 2025 19:39:02 +0000 (0:00:01.374) 0:06:53.714 *********** 2025-06-22 19:39:02.614657 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:39:02.671167 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:39:02.724894 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:39:02.778823 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:39:02.834234 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:39:02.905703 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:39:02.905773 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:39:02.906270 | orchestrator | 2025-06-22 19:39:02.906494 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2025-06-22 19:39:02.907202 | orchestrator | Sunday 22 June 2025 19:39:02 +0000 (0:00:00.404) 0:06:54.118 *********** 2025-06-22 19:39:10.432730 | orchestrator | ok: [testbed-manager] 2025-06-22 19:39:10.433423 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:39:10.435138 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:39:10.435153 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:39:10.436119 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:39:10.436662 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:39:10.438618 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:39:10.439682 | orchestrator | 2025-06-22 19:39:10.440647 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] *********** 2025-06-22 19:39:10.441457 | orchestrator | Sunday 22 June 2025 19:39:10 +0000 (0:00:07.523) 0:07:01.642 *********** 2025-06-22 19:39:11.805061 | orchestrator | ok: [testbed-manager] 2025-06-22 19:39:11.807293 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:39:11.808733 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:39:11.809863 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:39:11.810836 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:39:11.811629 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:39:11.812343 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:39:11.812826 | orchestrator | 2025-06-22 19:39:11.813448 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2025-06-22 19:39:11.814247 | orchestrator | Sunday 22 June 2025 19:39:11 +0000 (0:00:01.373) 0:07:03.016 *********** 2025-06-22 19:39:14.684863 | orchestrator | ok: [testbed-manager] 2025-06-22 19:39:14.685452 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:39:14.686503 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:39:14.687317 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:39:14.688531 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:39:14.689211 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:39:14.690156 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:39:14.691889 | orchestrator | 2025-06-22 19:39:14.692686 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2025-06-22 19:39:14.693668 | orchestrator | Sunday 22 June 2025 19:39:14 +0000 (0:00:02.877) 0:07:05.893 *********** 2025-06-22 19:39:16.537114 | orchestrator | ok: [testbed-manager] 2025-06-22 19:39:16.538001 | 
orchestrator | changed: [testbed-node-3] 2025-06-22 19:39:16.538926 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:39:16.540300 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:39:16.541359 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:39:16.541757 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:39:16.543015 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:39:16.543367 | orchestrator | 2025-06-22 19:39:16.544498 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-06-22 19:39:16.544634 | orchestrator | Sunday 22 June 2025 19:39:16 +0000 (0:00:01.852) 0:07:07.746 *********** 2025-06-22 19:39:16.946823 | orchestrator | ok: [testbed-manager] 2025-06-22 19:39:17.387333 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:39:17.388271 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:39:17.389260 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:39:17.390182 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:39:17.391003 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:39:17.391969 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:39:17.392639 | orchestrator | 2025-06-22 19:39:17.393465 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-06-22 19:39:17.394497 | orchestrator | Sunday 22 June 2025 19:39:17 +0000 (0:00:00.852) 0:07:08.598 *********** 2025-06-22 19:39:17.517863 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:39:17.579984 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:39:17.644322 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:39:17.711608 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:39:17.773526 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:39:18.196848 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:39:18.197581 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:39:18.198306 | orchestrator | 2025-06-22 19:39:18.199387 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2025-06-22 19:39:18.200266 | orchestrator | Sunday 22 June 2025 19:39:18 +0000 (0:00:00.807) 0:07:09.405 *********** 2025-06-22 19:39:18.319959 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:39:18.392062 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:39:18.455110 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:39:18.526381 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:39:18.597190 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:39:18.705103 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:39:18.705743 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:39:18.706832 | orchestrator | 2025-06-22 19:39:18.708044 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2025-06-22 19:39:18.708787 | orchestrator | Sunday 22 June 2025 19:39:18 +0000 (0:00:00.510) 0:07:09.915 *********** 2025-06-22 19:39:18.838898 | orchestrator | ok: [testbed-manager] 2025-06-22 19:39:18.901639 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:39:18.965124 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:39:19.033846 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:39:19.265671 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:39:19.377721 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:39:19.377874 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:39:19.378839 | orchestrator | 2025-06-22 19:39:19.379719 | orchestrator | TASK 
[osism.services.chrony : Set chrony_conf_file variable to default value] *** 2025-06-22 19:39:19.380991 | orchestrator | Sunday 22 June 2025 19:39:19 +0000 (0:00:00.673) 0:07:10.589 *********** 2025-06-22 19:39:19.516042 | orchestrator | ok: [testbed-manager] 2025-06-22 19:39:19.578700 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:39:19.641957 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:39:19.710437 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:39:19.773173 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:39:19.878008 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:39:19.878235 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:39:19.879067 | orchestrator | 2025-06-22 19:39:19.879756 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2025-06-22 19:39:19.881075 | orchestrator | Sunday 22 June 2025 19:39:19 +0000 (0:00:00.497) 0:07:11.086 *********** 2025-06-22 19:39:20.012511 | orchestrator | ok: [testbed-manager] 2025-06-22 19:39:20.075960 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:39:20.146717 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:39:20.211738 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:39:20.275422 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:39:20.382805 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:39:20.383801 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:39:20.384769 | orchestrator | 2025-06-22 19:39:20.388093 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2025-06-22 19:39:20.388142 | orchestrator | Sunday 22 June 2025 19:39:20 +0000 (0:00:00.508) 0:07:11.595 *********** 2025-06-22 19:39:25.873527 | orchestrator | ok: [testbed-manager] 2025-06-22 19:39:25.873800 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:39:25.875616 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:39:25.877288 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:39:25.878923 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:39:25.879067 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:39:25.880679 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:39:25.880773 | orchestrator | 2025-06-22 19:39:25.881813 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2025-06-22 19:39:25.883228 | orchestrator | Sunday 22 June 2025 19:39:25 +0000 (0:00:05.486) 0:07:17.081 *********** 2025-06-22 19:39:26.010866 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:39:26.071953 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:39:26.227774 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:39:26.289435 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:39:26.413334 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:39:26.414071 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:39:26.415705 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:39:26.417598 | orchestrator | 2025-06-22 19:39:26.418656 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2025-06-22 19:39:26.419638 | orchestrator | Sunday 22 June 2025 19:39:26 +0000 (0:00:00.542) 0:07:17.623 *********** 2025-06-22 19:39:27.442984 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:39:27.443245 | orchestrator | 2025-06-22 
19:39:27.444008 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2025-06-22 19:39:27.445540 | orchestrator | Sunday 22 June 2025 19:39:27 +0000 (0:00:01.030) 0:07:18.653 *********** 2025-06-22 19:39:29.200385 | orchestrator | ok: [testbed-manager] 2025-06-22 19:39:29.200630 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:39:29.201022 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:39:29.201555 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:39:29.205363 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:39:29.207719 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:39:29.207749 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:39:29.209172 | orchestrator | 2025-06-22 19:39:29.210942 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2025-06-22 19:39:29.211804 | orchestrator | Sunday 22 June 2025 19:39:29 +0000 (0:00:01.755) 0:07:20.409 *********** 2025-06-22 19:39:30.394269 | orchestrator | ok: [testbed-manager] 2025-06-22 19:39:30.394752 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:39:30.395667 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:39:30.398287 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:39:30.398333 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:39:30.398643 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:39:30.399142 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:39:30.399603 | orchestrator | 2025-06-22 19:39:30.399962 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2025-06-22 19:39:30.400206 | orchestrator | Sunday 22 June 2025 19:39:30 +0000 (0:00:01.195) 0:07:21.605 *********** 2025-06-22 19:39:31.452172 | orchestrator | ok: [testbed-manager] 2025-06-22 19:39:31.452403 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:39:31.453442 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:39:31.454194 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:39:31.455204 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:39:31.456015 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:39:31.456898 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:39:31.457811 | orchestrator | 2025-06-22 19:39:31.458609 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2025-06-22 19:39:31.459160 | orchestrator | Sunday 22 June 2025 19:39:31 +0000 (0:00:01.054) 0:07:22.660 *********** 2025-06-22 19:39:33.238823 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-06-22 19:39:33.239290 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-06-22 19:39:33.242913 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-06-22 19:39:33.242994 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-06-22 19:39:33.243007 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-06-22 19:39:33.243018 | orchestrator | changed: [testbed-node-1] => 
(item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-06-22 19:39:33.244109 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-06-22 19:39:33.245005 | orchestrator | 2025-06-22 19:39:33.245310 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ****** 2025-06-22 19:39:33.246217 | orchestrator | Sunday 22 June 2025 19:39:33 +0000 (0:00:01.787) 0:07:24.447 *********** 2025-06-22 19:39:34.000902 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:39:34.001827 | orchestrator | 2025-06-22 19:39:34.002599 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2025-06-22 19:39:34.003731 | orchestrator | Sunday 22 June 2025 19:39:33 +0000 (0:00:00.761) 0:07:25.209 *********** 2025-06-22 19:39:43.196969 | orchestrator | changed: [testbed-manager] 2025-06-22 19:39:43.199190 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:39:43.200815 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:39:43.202166 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:39:43.203122 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:39:43.204331 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:39:43.205824 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:39:43.207369 | orchestrator | 2025-06-22 19:39:43.208545 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2025-06-22 19:39:43.209761 | orchestrator | Sunday 22 June 2025 19:39:43 +0000 (0:00:09.195) 0:07:34.404 *********** 2025-06-22 19:39:45.009006 | orchestrator | ok: [testbed-manager] 2025-06-22 19:39:45.009122 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:39:45.012033 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:39:45.012064 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:39:45.012076 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:39:45.012406 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:39:45.013072 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:39:45.013764 | orchestrator | 2025-06-22 19:39:45.014232 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] ********* 2025-06-22 19:39:45.014758 | orchestrator | Sunday 22 June 2025 19:39:44 +0000 (0:00:01.811) 0:07:36.216 *********** 2025-06-22 19:39:46.311887 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:39:46.312449 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:39:46.313459 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:39:46.315418 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:39:46.315442 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:39:46.316074 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:39:46.316600 | orchestrator | 2025-06-22 19:39:46.318992 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] *************** 2025-06-22 19:39:46.319716 | orchestrator | Sunday 22 June 2025 19:39:46 +0000 (0:00:01.304) 0:07:37.520 *********** 2025-06-22 19:39:47.776996 | orchestrator | changed: [testbed-manager] 2025-06-22 19:39:47.777545 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:39:47.779892 | orchestrator | changed: [testbed-node-4] 
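The chrony tasks above render chrony.conf from a Jinja2 template and restart the daemon through a handler only when the rendered file changed; the handler results continue just below. A minimal sketch of that template-plus-handler pattern, assuming the Debian-family default path and not the role's real template:

---
# Illustrative sketch; template source, destination and handler name are assumptions.
- hosts: all
  become: true
  tasks:
    - name: Copy configuration file
      ansible.builtin.template:
        src: chrony.conf.j2                 # assumed local template
        dest: /etc/chrony/chrony.conf       # Debian-family default path
        mode: "0644"
      notify: Restart chrony service

  handlers:
    - name: Restart chrony service
      ansible.builtin.service:
        name: chrony
        state: restarted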
2025-06-22 19:39:47.780191 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:39:47.783159 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:39:47.784141 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:39:47.785284 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:39:47.786413 | orchestrator | 2025-06-22 19:39:47.787322 | orchestrator | PLAY [Apply bootstrap role part 2] ********************************************* 2025-06-22 19:39:47.788538 | orchestrator | 2025-06-22 19:39:47.789631 | orchestrator | TASK [Include hardening role] ************************************************** 2025-06-22 19:39:47.790256 | orchestrator | Sunday 22 June 2025 19:39:47 +0000 (0:00:01.465) 0:07:38.986 *********** 2025-06-22 19:39:47.901948 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:39:47.964908 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:39:48.031940 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:39:48.091476 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:39:48.152285 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:39:48.313132 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:39:48.314755 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:39:48.315711 | orchestrator | 2025-06-22 19:39:48.317052 | orchestrator | PLAY [Apply bootstrap roles part 3] ******************************************** 2025-06-22 19:39:48.318003 | orchestrator | 2025-06-22 19:39:48.318806 | orchestrator | TASK [osism.services.journald : Copy configuration file] *********************** 2025-06-22 19:39:48.319855 | orchestrator | Sunday 22 June 2025 19:39:48 +0000 (0:00:00.534) 0:07:39.521 *********** 2025-06-22 19:39:49.626066 | orchestrator | changed: [testbed-manager] 2025-06-22 19:39:49.626236 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:39:49.627383 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:39:49.628913 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:39:49.630641 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:39:49.632313 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:39:49.633656 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:39:49.634767 | orchestrator | 2025-06-22 19:39:49.635913 | orchestrator | TASK [osism.services.journald : Manage journald service] *********************** 2025-06-22 19:39:49.636855 | orchestrator | Sunday 22 June 2025 19:39:49 +0000 (0:00:01.312) 0:07:40.834 *********** 2025-06-22 19:39:51.206114 | orchestrator | ok: [testbed-manager] 2025-06-22 19:39:51.209914 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:39:51.209952 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:39:51.209964 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:39:51.209975 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:39:51.211340 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:39:51.212005 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:39:51.213256 | orchestrator | 2025-06-22 19:39:51.214721 | orchestrator | TASK [Include auditd role] ***************************************************** 2025-06-22 19:39:51.215003 | orchestrator | Sunday 22 June 2025 19:39:51 +0000 (0:00:01.581) 0:07:42.415 *********** 2025-06-22 19:39:51.330170 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:39:51.398723 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:39:51.464151 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:39:51.534259 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:39:51.602862 | orchestrator | skipping: 
[testbed-node-0] 2025-06-22 19:39:51.994923 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:39:51.995246 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:39:51.998875 | orchestrator | 2025-06-22 19:39:51.998901 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] *********** 2025-06-22 19:39:51.998915 | orchestrator | Sunday 22 June 2025 19:39:51 +0000 (0:00:00.787) 0:07:43.203 *********** 2025-06-22 19:39:53.307797 | orchestrator | changed: [testbed-manager] 2025-06-22 19:39:53.307969 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:39:53.312593 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:39:53.313218 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:39:53.313476 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:39:53.314711 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:39:53.315239 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:39:53.318360 | orchestrator | 2025-06-22 19:39:53.318435 | orchestrator | PLAY [Set state bootstrap] ***************************************************** 2025-06-22 19:39:53.319185 | orchestrator | 2025-06-22 19:39:53.322148 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2025-06-22 19:39:53.324918 | orchestrator | Sunday 22 June 2025 19:39:53 +0000 (0:00:01.314) 0:07:44.517 *********** 2025-06-22 19:39:54.283236 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:39:54.283856 | orchestrator | 2025-06-22 19:39:54.287582 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-06-22 19:39:54.287625 | orchestrator | Sunday 22 June 2025 19:39:54 +0000 (0:00:00.975) 0:07:45.493 *********** 2025-06-22 19:39:54.684544 | orchestrator | ok: [testbed-manager] 2025-06-22 19:39:55.115475 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:39:55.116401 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:39:55.117719 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:39:55.118651 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:39:55.119034 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:39:55.120197 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:39:55.120691 | orchestrator | 2025-06-22 19:39:55.121620 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-06-22 19:39:55.122486 | orchestrator | Sunday 22 June 2025 19:39:55 +0000 (0:00:00.833) 0:07:46.326 *********** 2025-06-22 19:39:56.278157 | orchestrator | changed: [testbed-manager] 2025-06-22 19:39:56.279034 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:39:56.281809 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:39:56.281842 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:39:56.282544 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:39:56.283731 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:39:56.284832 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:39:56.286142 | orchestrator | 2025-06-22 19:39:56.287448 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2025-06-22 19:39:56.288359 | orchestrator | Sunday 22 June 2025 19:39:56 +0000 (0:00:01.160) 0:07:47.486 *********** 2025-06-22 19:39:57.279438 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, 
testbed-node-1, testbed-node-2 2025-06-22 19:39:57.280066 | orchestrator | 2025-06-22 19:39:57.281423 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-06-22 19:39:57.284497 | orchestrator | Sunday 22 June 2025 19:39:57 +0000 (0:00:01.001) 0:07:48.488 *********** 2025-06-22 19:39:58.137531 | orchestrator | ok: [testbed-manager] 2025-06-22 19:39:58.138792 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:39:58.139501 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:39:58.141175 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:39:58.142257 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:39:58.143912 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:39:58.144725 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:39:58.145722 | orchestrator | 2025-06-22 19:39:58.146264 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-06-22 19:39:58.147505 | orchestrator | Sunday 22 June 2025 19:39:58 +0000 (0:00:00.858) 0:07:49.346 *********** 2025-06-22 19:39:58.577052 | orchestrator | changed: [testbed-manager] 2025-06-22 19:39:59.266951 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:39:59.267332 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:39:59.269298 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:39:59.270046 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:39:59.271478 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:39:59.272950 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:39:59.274062 | orchestrator | 2025-06-22 19:39:59.274956 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:39:59.275697 | orchestrator | 2025-06-22 19:39:59 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-22 19:39:59.275779 | orchestrator | 2025-06-22 19:39:59 | INFO  | Please wait and do not abort execution. 
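The osism.commons.state tasks above persist the bootstrap status and timestamp as local Ansible facts so that later plays can check whether a host has already been bootstrapped. A sketch of that mechanism, assuming the conventional /etc/ansible/facts.d location; the file name and content format are assumptions, not the osism.commons.state implementation:

---
# Illustrative sketch; facts.d is the Ansible convention, the rest is assumed.
- hosts: all
  become: true
  tasks:
    - name: Create custom facts directory
      ansible.builtin.file:
        path: /etc/ansible/facts.d
        state: directory
        mode: "0755"

    - name: Write state into file
      ansible.builtin.copy:
        dest: /etc/ansible/facts.d/osism.fact   # hypothetical file name
        mode: "0644"
        content: |
          {"bootstrap": {"status": "True"}}

    - name: Re-read local facts so ansible_local is up to date
      ansible.builtin.setup:
        filter: ansible_local

Static .fact files containing JSON are picked up by fact gathering and exposed under ansible_local, which is why a simple copy is enough to record the state.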
2025-06-22 19:39:59.277134 | orchestrator | testbed-manager : ok=162  changed=38  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0 2025-06-22 19:39:59.277908 | orchestrator | testbed-node-0 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-06-22 19:39:59.278747 | orchestrator | testbed-node-1 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-06-22 19:39:59.279747 | orchestrator | testbed-node-2 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-06-22 19:39:59.280634 | orchestrator | testbed-node-3 : ok=169  changed=63  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0 2025-06-22 19:39:59.281045 | orchestrator | testbed-node-4 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-06-22 19:39:59.282144 | orchestrator | testbed-node-5 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0 2025-06-22 19:39:59.283221 | orchestrator | 2025-06-22 19:39:59.283590 | orchestrator | 2025-06-22 19:39:59.284706 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:39:59.285478 | orchestrator | Sunday 22 June 2025 19:39:59 +0000 (0:00:01.131) 0:07:50.478 *********** 2025-06-22 19:39:59.286122 | orchestrator | =============================================================================== 2025-06-22 19:39:59.286856 | orchestrator | osism.commons.packages : Install required packages --------------------- 76.79s 2025-06-22 19:39:59.287592 | orchestrator | osism.commons.packages : Download required packages -------------------- 35.45s 2025-06-22 19:39:59.288611 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 34.94s 2025-06-22 19:39:59.289476 | orchestrator | osism.commons.repository : Update package cache ------------------------ 13.79s 2025-06-22 19:39:59.290706 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 11.90s 2025-06-22 19:39:59.291127 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 11.81s 2025-06-22 19:39:59.292024 | orchestrator | osism.services.docker : Install docker package ------------------------- 10.82s 2025-06-22 19:39:59.293049 | orchestrator | osism.services.docker : Install containerd package --------------------- 10.28s 2025-06-22 19:39:59.294366 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.49s 2025-06-22 19:39:59.294444 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.20s 2025-06-22 19:39:59.295448 | orchestrator | osism.services.rng : Install rng package -------------------------------- 8.66s 2025-06-22 19:39:59.296168 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.65s 2025-06-22 19:39:59.297385 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.06s 2025-06-22 19:39:59.299885 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.52s 2025-06-22 19:39:59.300928 | orchestrator | osism.services.docker : Add repository ---------------------------------- 7.47s 2025-06-22 19:39:59.301712 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.46s 2025-06-22 19:39:59.302698 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.36s 2025-06-22 19:39:59.303879 | 
orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 5.91s 2025-06-22 19:39:59.304767 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.52s 2025-06-22 19:39:59.305228 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.49s 2025-06-22 19:40:00.017749 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-06-22 19:40:00.017856 | orchestrator | + osism apply network 2025-06-22 19:40:02.240648 | orchestrator | Registering Redlock._acquired_script 2025-06-22 19:40:02.240754 | orchestrator | Registering Redlock._extend_script 2025-06-22 19:40:02.240768 | orchestrator | Registering Redlock._release_script 2025-06-22 19:40:02.304058 | orchestrator | 2025-06-22 19:40:02 | INFO  | Task 68d8c293-7bdd-4dc2-bfcd-03eeb0773e2f (network) was prepared for execution. 2025-06-22 19:40:02.304187 | orchestrator | 2025-06-22 19:40:02 | INFO  | It takes a moment until task 68d8c293-7bdd-4dc2-bfcd-03eeb0773e2f (network) has been started and output is visible here. 2025-06-22 19:40:06.606546 | orchestrator | 2025-06-22 19:40:06.606982 | orchestrator | PLAY [Apply role network] ****************************************************** 2025-06-22 19:40:06.611126 | orchestrator | 2025-06-22 19:40:06.611800 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2025-06-22 19:40:06.612636 | orchestrator | Sunday 22 June 2025 19:40:06 +0000 (0:00:00.269) 0:00:00.269 *********** 2025-06-22 19:40:06.757709 | orchestrator | ok: [testbed-manager] 2025-06-22 19:40:06.834096 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:40:06.910138 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:40:06.983887 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:40:07.181620 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:40:07.312465 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:40:07.312682 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:40:07.313826 | orchestrator | 2025-06-22 19:40:07.314435 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2025-06-22 19:40:07.317321 | orchestrator | Sunday 22 June 2025 19:40:07 +0000 (0:00:00.705) 0:00:00.975 *********** 2025-06-22 19:40:08.509147 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 19:40:08.509905 | orchestrator | 2025-06-22 19:40:08.512179 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2025-06-22 19:40:08.512215 | orchestrator | Sunday 22 June 2025 19:40:08 +0000 (0:00:01.195) 0:00:02.170 *********** 2025-06-22 19:40:10.545531 | orchestrator | ok: [testbed-manager] 2025-06-22 19:40:10.546207 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:40:10.547024 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:40:10.547881 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:40:10.548498 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:40:10.549613 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:40:10.551861 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:40:10.552255 | orchestrator | 2025-06-22 19:40:10.552810 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2025-06-22 19:40:10.554518 | orchestrator | Sunday 22 June 2025 19:40:10 +0000 
(0:00:02.039) 0:00:04.209 *********** 2025-06-22 19:40:12.363820 | orchestrator | ok: [testbed-manager] 2025-06-22 19:40:12.364698 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:40:12.368754 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:40:12.368797 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:40:12.368809 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:40:12.369658 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:40:12.371074 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:40:12.372023 | orchestrator | 2025-06-22 19:40:12.373364 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2025-06-22 19:40:12.374120 | orchestrator | Sunday 22 June 2025 19:40:12 +0000 (0:00:01.816) 0:00:06.026 *********** 2025-06-22 19:40:12.975090 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2025-06-22 19:40:12.975772 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2025-06-22 19:40:12.977928 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2025-06-22 19:40:13.431929 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2025-06-22 19:40:13.433835 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2025-06-22 19:40:13.434655 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2025-06-22 19:40:13.436191 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2025-06-22 19:40:13.437555 | orchestrator | 2025-06-22 19:40:13.438466 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2025-06-22 19:40:13.439209 | orchestrator | Sunday 22 June 2025 19:40:13 +0000 (0:00:01.070) 0:00:07.096 *********** 2025-06-22 19:40:16.945423 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-22 19:40:16.948129 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-06-22 19:40:16.949874 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-06-22 19:40:16.951183 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-06-22 19:40:16.952973 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-22 19:40:16.953484 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-06-22 19:40:16.954868 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-06-22 19:40:16.956006 | orchestrator | 2025-06-22 19:40:16.956763 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2025-06-22 19:40:16.957743 | orchestrator | Sunday 22 June 2025 19:40:16 +0000 (0:00:03.508) 0:00:10.605 *********** 2025-06-22 19:40:18.418973 | orchestrator | changed: [testbed-manager] 2025-06-22 19:40:18.420097 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:40:18.421625 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:40:18.423396 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:40:18.424307 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:40:18.425043 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:40:18.426011 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:40:18.427842 | orchestrator | 2025-06-22 19:40:18.427866 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2025-06-22 19:40:18.428477 | orchestrator | Sunday 22 June 2025 19:40:18 +0000 (0:00:01.477) 0:00:12.082 *********** 2025-06-22 19:40:20.241143 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-22 19:40:20.242141 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-06-22 19:40:20.243848 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-22 19:40:20.245541 
| orchestrator | ok: [testbed-node-2 -> localhost] 2025-06-22 19:40:20.246339 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-06-22 19:40:20.247770 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-06-22 19:40:20.248794 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-06-22 19:40:20.249505 | orchestrator | 2025-06-22 19:40:20.250289 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2025-06-22 19:40:20.251000 | orchestrator | Sunday 22 June 2025 19:40:20 +0000 (0:00:01.822) 0:00:13.905 *********** 2025-06-22 19:40:20.659489 | orchestrator | ok: [testbed-manager] 2025-06-22 19:40:20.935671 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:40:21.355539 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:40:21.356489 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:40:21.357551 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:40:21.358830 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:40:21.360061 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:40:21.361194 | orchestrator | 2025-06-22 19:40:21.361749 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2025-06-22 19:40:21.362536 | orchestrator | Sunday 22 June 2025 19:40:21 +0000 (0:00:01.111) 0:00:15.016 *********** 2025-06-22 19:40:21.516688 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:40:21.599916 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:40:21.681744 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:40:21.762012 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:40:21.845045 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:40:21.983178 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:40:21.983640 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:40:21.984606 | orchestrator | 2025-06-22 19:40:21.985868 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2025-06-22 19:40:21.987066 | orchestrator | Sunday 22 June 2025 19:40:21 +0000 (0:00:00.632) 0:00:15.649 *********** 2025-06-22 19:40:24.172743 | orchestrator | ok: [testbed-manager] 2025-06-22 19:40:24.174610 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:40:24.176745 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:40:24.177508 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:40:24.178960 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:40:24.179852 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:40:24.180492 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:40:24.181537 | orchestrator | 2025-06-22 19:40:24.181970 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2025-06-22 19:40:24.183368 | orchestrator | Sunday 22 June 2025 19:40:24 +0000 (0:00:02.182) 0:00:17.831 *********** 2025-06-22 19:40:24.459074 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:40:24.542014 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:40:24.627523 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:40:24.707854 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:40:25.104164 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:40:25.108731 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:40:25.108763 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2025-06-22 19:40:25.108777 | orchestrator | 2025-06-22 19:40:25.110262 | orchestrator | TASK [osism.commons.network : Manage 
service networkd-dispatcher] ************** 2025-06-22 19:40:25.110737 | orchestrator | Sunday 22 June 2025 19:40:25 +0000 (0:00:00.937) 0:00:18.769 *********** 2025-06-22 19:40:26.800182 | orchestrator | ok: [testbed-manager] 2025-06-22 19:40:26.800838 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:40:26.802839 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:40:26.803416 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:40:26.806457 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:40:26.808455 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:40:26.809695 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:40:26.811452 | orchestrator | 2025-06-22 19:40:26.812906 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2025-06-22 19:40:26.815032 | orchestrator | Sunday 22 June 2025 19:40:26 +0000 (0:00:01.690) 0:00:20.459 *********** 2025-06-22 19:40:28.033492 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 19:40:28.035362 | orchestrator | 2025-06-22 19:40:28.037339 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-06-22 19:40:28.038635 | orchestrator | Sunday 22 June 2025 19:40:28 +0000 (0:00:01.235) 0:00:21.695 *********** 2025-06-22 19:40:28.563328 | orchestrator | ok: [testbed-manager] 2025-06-22 19:40:29.138087 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:40:29.139766 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:40:29.140429 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:40:29.143737 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:40:29.145135 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:40:29.147100 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:40:29.148750 | orchestrator | 2025-06-22 19:40:29.151882 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2025-06-22 19:40:29.158068 | orchestrator | Sunday 22 June 2025 19:40:29 +0000 (0:00:01.103) 0:00:22.799 *********** 2025-06-22 19:40:29.318826 | orchestrator | ok: [testbed-manager] 2025-06-22 19:40:29.405302 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:40:29.489904 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:40:29.572619 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:40:29.657067 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:40:29.795700 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:40:29.800535 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:40:29.800613 | orchestrator | 2025-06-22 19:40:29.801286 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-06-22 19:40:29.801850 | orchestrator | Sunday 22 June 2025 19:40:29 +0000 (0:00:00.660) 0:00:23.459 *********** 2025-06-22 19:40:30.119465 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2025-06-22 19:40:30.119647 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2025-06-22 19:40:30.430921 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2025-06-22 19:40:30.431605 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2025-06-22 19:40:30.432693 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2025-06-22 
19:40:30.433372 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2025-06-22 19:40:30.433810 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2025-06-22 19:40:30.435101 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2025-06-22 19:40:30.898549 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2025-06-22 19:40:30.899287 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2025-06-22 19:40:30.900348 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2025-06-22 19:40:30.901068 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2025-06-22 19:40:30.901702 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2025-06-22 19:40:30.902382 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2025-06-22 19:40:30.902946 | orchestrator | 2025-06-22 19:40:30.903888 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2025-06-22 19:40:30.904642 | orchestrator | Sunday 22 June 2025 19:40:30 +0000 (0:00:01.101) 0:00:24.561 *********** 2025-06-22 19:40:31.040728 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:40:31.112041 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:40:31.182773 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:40:31.252547 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:40:31.324014 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:40:31.421395 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:40:31.422184 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:40:31.423365 | orchestrator | 2025-06-22 19:40:31.424277 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2025-06-22 19:40:31.425961 | orchestrator | Sunday 22 June 2025 19:40:31 +0000 (0:00:00.526) 0:00:25.088 *********** 2025-06-22 19:40:35.772902 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-0, testbed-manager, testbed-node-3, testbed-node-1, testbed-node-2, testbed-node-5, testbed-node-4 2025-06-22 19:40:35.773555 | orchestrator | 2025-06-22 19:40:35.773781 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2025-06-22 19:40:35.775843 | orchestrator | Sunday 22 June 2025 19:40:35 +0000 (0:00:04.344) 0:00:29.433 *********** 2025-06-22 19:40:41.830736 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-06-22 19:40:41.831112 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-06-22 19:40:41.833849 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-06-22 19:40:41.834167 | orchestrator | 
changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-06-22 19:40:41.836378 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-06-22 19:40:41.837244 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-06-22 19:40:41.838141 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-06-22 19:40:41.839142 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-06-22 19:40:41.840163 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-06-22 19:40:41.840993 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-06-22 19:40:41.841517 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-06-22 19:40:41.842254 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-06-22 19:40:41.842851 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-06-22 19:40:41.843462 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-06-22 19:40:41.844011 | orchestrator | 2025-06-22 19:40:41.844774 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2025-06-22 19:40:41.845237 | orchestrator | Sunday 22 June 2025 19:40:41 +0000 (0:00:06.056) 
0:00:35.490 *********** 2025-06-22 19:40:48.029201 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-06-22 19:40:48.033059 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-06-22 19:40:48.033326 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-06-22 19:40:48.034791 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-06-22 19:40:48.035037 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-06-22 19:40:48.036359 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-06-22 19:40:48.036461 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-06-22 19:40:48.036940 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-06-22 19:40:48.037730 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-06-22 19:40:48.038428 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-06-22 19:40:48.038607 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-06-22 19:40:48.039008 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', 
'192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-06-22 19:40:48.039527 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-06-22 19:40:48.040244 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-06-22 19:40:48.040505 | orchestrator | 2025-06-22 19:40:48.040951 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2025-06-22 19:40:48.041209 | orchestrator | Sunday 22 June 2025 19:40:48 +0000 (0:00:06.198) 0:00:41.688 *********** 2025-06-22 19:40:49.238462 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 19:40:49.239220 | orchestrator | 2025-06-22 19:40:49.239478 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-06-22 19:40:49.240471 | orchestrator | Sunday 22 June 2025 19:40:49 +0000 (0:00:01.210) 0:00:42.899 *********** 2025-06-22 19:40:49.679728 | orchestrator | ok: [testbed-manager] 2025-06-22 19:40:49.946475 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:40:50.393149 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:40:50.393958 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:40:50.395365 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:40:50.396211 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:40:50.396887 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:40:50.397733 | orchestrator | 2025-06-22 19:40:50.398449 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-06-22 19:40:50.399265 | orchestrator | Sunday 22 June 2025 19:40:50 +0000 (0:00:01.158) 0:00:44.057 *********** 2025-06-22 19:40:50.483360 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2025-06-22 19:40:50.483982 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-06-22 19:40:50.484006 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2025-06-22 19:40:50.588080 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-06-22 19:40:50.589173 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2025-06-22 19:40:50.589749 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-06-22 19:40:50.590956 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2025-06-22 19:40:50.591818 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-06-22 19:40:50.680335 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:40:50.680639 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2025-06-22 19:40:50.681671 | orchestrator | skipping: [testbed-node-1] => 
(item=/etc/systemd/network/30-vxlan0.netdev)  2025-06-22 19:40:50.682754 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2025-06-22 19:40:50.683254 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-06-22 19:40:50.774447 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:40:50.775329 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2025-06-22 19:40:50.777926 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-06-22 19:40:50.778073 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2025-06-22 19:40:50.778089 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-06-22 19:40:50.867474 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:40:50.868596 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2025-06-22 19:40:50.869289 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-06-22 19:40:50.870101 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2025-06-22 19:40:50.870536 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-06-22 19:40:51.120858 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:40:51.121670 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2025-06-22 19:40:51.122481 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-06-22 19:40:51.125708 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2025-06-22 19:40:51.125730 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-06-22 19:40:52.402831 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:40:52.403436 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:40:52.405213 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2025-06-22 19:40:52.406996 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-06-22 19:40:52.407510 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2025-06-22 19:40:52.407976 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-06-22 19:40:52.408800 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:40:52.409740 | orchestrator | 2025-06-22 19:40:52.410097 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2025-06-22 19:40:52.410607 | orchestrator | Sunday 22 June 2025 19:40:52 +0000 (0:00:02.007) 0:00:46.065 *********** 2025-06-22 19:40:52.567379 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:40:52.646092 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:40:52.730452 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:40:52.813293 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:40:52.896046 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:40:53.038252 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:40:53.039164 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:40:53.042957 | orchestrator | 2025-06-22 19:40:53.043000 | orchestrator | RUNNING HANDLER [osism.commons.network : 
Netplan configuration changed] ******** 2025-06-22 19:40:53.043010 | orchestrator | Sunday 22 June 2025 19:40:53 +0000 (0:00:00.637) 0:00:46.702 *********** 2025-06-22 19:40:53.200074 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:40:53.277330 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:40:53.527958 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:40:53.615881 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:40:53.697925 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:40:53.744923 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:40:53.745047 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:40:53.746395 | orchestrator | 2025-06-22 19:40:53.747708 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:40:53.748206 | orchestrator | 2025-06-22 19:40:53 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-22 19:40:53.748394 | orchestrator | 2025-06-22 19:40:53 | INFO  | Please wait and do not abort execution. 2025-06-22 19:40:53.748907 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-22 19:40:53.749712 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-22 19:40:53.750012 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-22 19:40:53.750361 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-22 19:40:53.751099 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-22 19:40:53.751505 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-22 19:40:53.752019 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-22 19:40:53.752990 | orchestrator | 2025-06-22 19:40:53.753014 | orchestrator | 2025-06-22 19:40:53.753120 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:40:53.753970 | orchestrator | Sunday 22 June 2025 19:40:53 +0000 (0:00:00.707) 0:00:47.410 *********** 2025-06-22 19:40:53.754205 | orchestrator | =============================================================================== 2025-06-22 19:40:53.754587 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 6.20s 2025-06-22 19:40:53.755210 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 6.06s 2025-06-22 19:40:53.755377 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.34s 2025-06-22 19:40:53.756120 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.51s 2025-06-22 19:40:53.756146 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.18s 2025-06-22 19:40:53.756332 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.04s 2025-06-22 19:40:53.756519 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 2.01s 2025-06-22 19:40:53.757000 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.82s 2025-06-22 19:40:53.757202 | orchestrator | osism.commons.network : Remove 
ifupdown package ------------------------- 1.82s 2025-06-22 19:40:53.757380 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.69s 2025-06-22 19:40:53.757716 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.48s 2025-06-22 19:40:53.757919 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.24s 2025-06-22 19:40:53.758478 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.21s 2025-06-22 19:40:53.758721 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.20s 2025-06-22 19:40:53.758995 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.16s 2025-06-22 19:40:53.759227 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.11s 2025-06-22 19:40:53.759465 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.10s 2025-06-22 19:40:53.759792 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.10s 2025-06-22 19:40:53.759936 | orchestrator | osism.commons.network : Create required directories --------------------- 1.07s 2025-06-22 19:40:53.760326 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.94s 2025-06-22 19:40:54.362287 | orchestrator | + osism apply wireguard 2025-06-22 19:40:56.056166 | orchestrator | Registering Redlock._acquired_script 2025-06-22 19:40:56.056267 | orchestrator | Registering Redlock._extend_script 2025-06-22 19:40:56.056282 | orchestrator | Registering Redlock._release_script 2025-06-22 19:40:56.112441 | orchestrator | 2025-06-22 19:40:56 | INFO  | Task c1e867de-ad26-401c-913d-962662c765f6 (wireguard) was prepared for execution. 2025-06-22 19:40:56.112530 | orchestrator | 2025-06-22 19:40:56 | INFO  | It takes a moment until task c1e867de-ad26-401c-913d-962662c765f6 (wireguard) has been started and output is visible here. 
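The two networkd tasks above ("Create systemd networkd netdev files" / "Create systemd networkd network files") render one .netdev/.network pair per overlay: vxlan0 with VNI 42 and vxlan1 with VNI 23, MTU 1350, the local IP taken from each node's 192.168.16.0/24 address. The rendered templates themselves are not printed in the log, so the playbook below is only a minimal sketch of what that step produces for vxlan0 on testbed-manager. The option names, file modes and the omission of the per-node 'dests' peer list (assumed to be programmed separately, e.g. as FDB entries) are assumptions, not the role's actual template; only the file paths, VNI, MTU and addresses are taken from the log output above.

    ---
    # Minimal sketch only, not the osism.commons.network role itself: it mirrors
    # the "Create systemd networkd netdev/network files" tasks using the values
    # visible in the log for testbed-manager (vxlan0, VNI 42, MTU 1350,
    # local 192.168.16.5, address 192.168.112.5/20).
    - name: Sketch of the vxlan0 systemd-networkd files
      hosts: testbed-manager
      become: true
      tasks:
        - name: Create systemd networkd netdev file for vxlan0
          ansible.builtin.copy:
            dest: /etc/systemd/network/30-vxlan0.netdev
            mode: "0644"
            content: |
              [NetDev]
              Name=vxlan0
              Kind=vxlan
              MTUBytes=1350

              [VXLAN]
              VNI=42
              Local=192.168.16.5
          notify: Restart systemd-networkd

        - name: Create systemd networkd network file for vxlan0
          ansible.builtin.copy:
            dest: /etc/systemd/network/30-vxlan0.network
            mode: "0644"
            content: |
              [Match]
              Name=vxlan0

              [Network]
              Address=192.168.112.5/20
          notify: Restart systemd-networkd

      handlers:
        - name: Restart systemd-networkd
          ansible.builtin.systemd:
            name: systemd-networkd
            state: restarted

Once both files are in place, `networkctl status vxlan0` and `bridge fdb show dev vxlan0` on the node show whether the interface and its peer entries came up.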
2025-06-22 19:41:00.125307 | orchestrator | 2025-06-22 19:41:00.131386 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2025-06-22 19:41:00.131447 | orchestrator | 2025-06-22 19:41:00.131776 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2025-06-22 19:41:00.132392 | orchestrator | Sunday 22 June 2025 19:41:00 +0000 (0:00:00.227) 0:00:00.227 *********** 2025-06-22 19:41:01.583157 | orchestrator | ok: [testbed-manager] 2025-06-22 19:41:01.584100 | orchestrator | 2025-06-22 19:41:01.584146 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2025-06-22 19:41:01.584411 | orchestrator | Sunday 22 June 2025 19:41:01 +0000 (0:00:01.458) 0:00:01.686 *********** 2025-06-22 19:41:07.911123 | orchestrator | changed: [testbed-manager] 2025-06-22 19:41:07.911239 | orchestrator | 2025-06-22 19:41:07.911257 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2025-06-22 19:41:07.911993 | orchestrator | Sunday 22 June 2025 19:41:07 +0000 (0:00:06.326) 0:00:08.012 *********** 2025-06-22 19:41:08.474851 | orchestrator | changed: [testbed-manager] 2025-06-22 19:41:08.476101 | orchestrator | 2025-06-22 19:41:08.478228 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2025-06-22 19:41:08.479192 | orchestrator | Sunday 22 June 2025 19:41:08 +0000 (0:00:00.566) 0:00:08.579 *********** 2025-06-22 19:41:08.883904 | orchestrator | changed: [testbed-manager] 2025-06-22 19:41:08.884791 | orchestrator | 2025-06-22 19:41:08.885353 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2025-06-22 19:41:08.886310 | orchestrator | Sunday 22 June 2025 19:41:08 +0000 (0:00:00.409) 0:00:08.989 *********** 2025-06-22 19:41:09.417120 | orchestrator | ok: [testbed-manager] 2025-06-22 19:41:09.417225 | orchestrator | 2025-06-22 19:41:09.418314 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2025-06-22 19:41:09.418965 | orchestrator | Sunday 22 June 2025 19:41:09 +0000 (0:00:00.530) 0:00:09.519 *********** 2025-06-22 19:41:09.948655 | orchestrator | ok: [testbed-manager] 2025-06-22 19:41:09.950060 | orchestrator | 2025-06-22 19:41:09.951725 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2025-06-22 19:41:09.952936 | orchestrator | Sunday 22 June 2025 19:41:09 +0000 (0:00:00.533) 0:00:10.053 *********** 2025-06-22 19:41:10.376051 | orchestrator | ok: [testbed-manager] 2025-06-22 19:41:10.376980 | orchestrator | 2025-06-22 19:41:10.377751 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2025-06-22 19:41:10.379228 | orchestrator | Sunday 22 June 2025 19:41:10 +0000 (0:00:00.425) 0:00:10.478 *********** 2025-06-22 19:41:11.564944 | orchestrator | changed: [testbed-manager] 2025-06-22 19:41:11.565488 | orchestrator | 2025-06-22 19:41:11.566787 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2025-06-22 19:41:11.567926 | orchestrator | Sunday 22 June 2025 19:41:11 +0000 (0:00:01.189) 0:00:11.668 *********** 2025-06-22 19:41:12.417162 | orchestrator | changed: [testbed-manager] => (item=None) 2025-06-22 19:41:12.417366 | orchestrator | changed: [testbed-manager] 2025-06-22 19:41:12.418622 | orchestrator | 2025-06-22 19:41:12.419325 | orchestrator | TASK 
[osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2025-06-22 19:41:12.420249 | orchestrator | Sunday 22 June 2025 19:41:12 +0000 (0:00:00.853) 0:00:12.521 *********** 2025-06-22 19:41:13.974185 | orchestrator | changed: [testbed-manager] 2025-06-22 19:41:13.975619 | orchestrator | 2025-06-22 19:41:13.976075 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2025-06-22 19:41:13.976410 | orchestrator | Sunday 22 June 2025 19:41:13 +0000 (0:00:01.556) 0:00:14.078 *********** 2025-06-22 19:41:14.891911 | orchestrator | changed: [testbed-manager] 2025-06-22 19:41:14.892691 | orchestrator | 2025-06-22 19:41:14.894942 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:41:14.896382 | orchestrator | 2025-06-22 19:41:14 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-22 19:41:14.896427 | orchestrator | 2025-06-22 19:41:14 | INFO  | Please wait and do not abort execution. 2025-06-22 19:41:14.898201 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:41:14.898699 | orchestrator | 2025-06-22 19:41:14.899661 | orchestrator | 2025-06-22 19:41:14.901015 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:41:14.902105 | orchestrator | Sunday 22 June 2025 19:41:14 +0000 (0:00:00.917) 0:00:14.995 *********** 2025-06-22 19:41:14.902829 | orchestrator | =============================================================================== 2025-06-22 19:41:14.903518 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.33s 2025-06-22 19:41:14.904294 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.56s 2025-06-22 19:41:14.904985 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.46s 2025-06-22 19:41:14.905433 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.19s 2025-06-22 19:41:14.906008 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.92s 2025-06-22 19:41:14.907036 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.85s 2025-06-22 19:41:14.907433 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.57s 2025-06-22 19:41:14.907974 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.53s 2025-06-22 19:41:14.909064 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.53s 2025-06-22 19:41:14.910127 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.43s 2025-06-22 19:41:14.910565 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.41s 2025-06-22 19:41:15.299999 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2025-06-22 19:41:15.338679 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2025-06-22 19:41:15.338787 | orchestrator | Dload Upload Total Spent Left Speed 2025-06-22 19:41:15.414011 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 14 100 14 0 0 185 0 --:--:-- --:--:-- --:--:-- 186 2025-06-22 19:41:15.428992 | orchestrator | + osism apply --environment custom workarounds 
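The wireguard role run above installs the packages, generates the server key pair and a preshared key on testbed-manager, renders /etc/wireguard/wg0.conf plus a client configuration file, and enables wg-quick@wg0. The rendered files are not shown in the log; the sketch below only illustrates that flow. The keys, tunnel addresses and listen port are placeholders and not the role's actual template values.

    ---
    # Illustrative sketch only (placeholder keys, addresses and port), not the
    # osism.services.wireguard role: render a server-side wg0.conf and manage
    # the wg-quick unit, mirroring the task names seen in the log above.
    - name: Sketch of the WireGuard server setup
      hosts: testbed-manager
      become: true
      tasks:
        - name: Copy wg0.conf configuration file
          ansible.builtin.copy:
            dest: /etc/wireguard/wg0.conf
            owner: root
            group: root
            mode: "0600"
            content: |
              [Interface]
              PrivateKey=<server private key generated by the role>
              Address=10.100.0.1/24
              ListenPort=51820

              [Peer]
              PublicKey=<client public key>
              PresharedKey=<preshared key>
              AllowedIPs=10.100.0.2/32
          notify: Restart wg0 service

        - name: Manage wg-quick@wg0.service service
          ansible.builtin.systemd:
            name: wg-quick@wg0.service
            enabled: true
            state: started

      handlers:
        - name: Restart wg0 service
          ansible.builtin.systemd:
            name: wg-quick@wg0.service
            state: restarted

After the handler has restarted the unit, `wg show wg0` on the manager lists the interface and its configured peer; the prepare-wireguard-configuration.sh call that follows in the log post-processes the generated client configuration (its contents are not part of this log).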
2025-06-22 19:41:16.905023 | orchestrator | 2025-06-22 19:41:16 | INFO  | Trying to run play workarounds in environment custom 2025-06-22 19:41:16.908950 | orchestrator | Registering Redlock._acquired_script 2025-06-22 19:41:16.908998 | orchestrator | Registering Redlock._extend_script 2025-06-22 19:41:16.909010 | orchestrator | Registering Redlock._release_script 2025-06-22 19:41:16.958923 | orchestrator | 2025-06-22 19:41:16 | INFO  | Task 78ef7f99-8750-42ca-aa31-ecff990baa8f (workarounds) was prepared for execution. 2025-06-22 19:41:16.959011 | orchestrator | 2025-06-22 19:41:16 | INFO  | It takes a moment until task 78ef7f99-8750-42ca-aa31-ecff990baa8f (workarounds) has been started and output is visible here. 2025-06-22 19:41:20.509517 | orchestrator | 2025-06-22 19:41:20.510113 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-22 19:41:20.510158 | orchestrator | 2025-06-22 19:41:20.510478 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2025-06-22 19:41:20.513505 | orchestrator | Sunday 22 June 2025 19:41:20 +0000 (0:00:00.144) 0:00:00.144 *********** 2025-06-22 19:41:20.661042 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2025-06-22 19:41:20.736909 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2025-06-22 19:41:20.811070 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2025-06-22 19:41:20.888206 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2025-06-22 19:41:21.038608 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2025-06-22 19:41:21.189789 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2025-06-22 19:41:21.189972 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2025-06-22 19:41:21.191258 | orchestrator | 2025-06-22 19:41:21.192568 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2025-06-22 19:41:21.193251 | orchestrator | 2025-06-22 19:41:21.193941 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-06-22 19:41:21.194740 | orchestrator | Sunday 22 June 2025 19:41:21 +0000 (0:00:00.680) 0:00:00.824 *********** 2025-06-22 19:41:23.407253 | orchestrator | ok: [testbed-manager] 2025-06-22 19:41:23.408142 | orchestrator | 2025-06-22 19:41:23.409195 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2025-06-22 19:41:23.410772 | orchestrator | 2025-06-22 19:41:23.412745 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-06-22 19:41:23.412980 | orchestrator | Sunday 22 June 2025 19:41:23 +0000 (0:00:02.217) 0:00:03.042 *********** 2025-06-22 19:41:25.202984 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:41:25.206445 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:41:25.206467 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:41:25.206474 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:41:25.207836 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:41:25.208914 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:41:25.209589 | orchestrator | 2025-06-22 19:41:25.211012 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2025-06-22 19:41:25.211541 | orchestrator | 2025-06-22 19:41:25.212434 | orchestrator | TASK 
[Copy custom CA certificates] ********************************************* 2025-06-22 19:41:25.213093 | orchestrator | Sunday 22 June 2025 19:41:25 +0000 (0:00:01.792) 0:00:04.835 *********** 2025-06-22 19:41:26.707171 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-06-22 19:41:26.708215 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-06-22 19:41:26.710254 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-06-22 19:41:26.711319 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-06-22 19:41:26.712706 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-06-22 19:41:26.713592 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-06-22 19:41:26.714353 | orchestrator | 2025-06-22 19:41:26.715111 | orchestrator | TASK [Run update-ca-certificates] ********************************************** 2025-06-22 19:41:26.715834 | orchestrator | Sunday 22 June 2025 19:41:26 +0000 (0:00:01.499) 0:00:06.335 *********** 2025-06-22 19:41:30.777705 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:41:30.777817 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:41:30.778271 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:41:30.780205 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:41:30.780853 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:41:30.782240 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:41:30.784940 | orchestrator | 2025-06-22 19:41:30.785718 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2025-06-22 19:41:30.787364 | orchestrator | Sunday 22 June 2025 19:41:30 +0000 (0:00:04.076) 0:00:10.411 *********** 2025-06-22 19:41:30.941285 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:41:31.018802 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:41:31.096663 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:41:31.173327 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:41:31.467141 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:41:31.467298 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:41:31.468020 | orchestrator | 2025-06-22 19:41:31.468871 | orchestrator | PLAY [Add a workaround service] ************************************************ 2025-06-22 19:41:31.471230 | orchestrator | 2025-06-22 19:41:31.471365 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2025-06-22 19:41:31.472080 | orchestrator | Sunday 22 June 2025 19:41:31 +0000 (0:00:00.689) 0:00:11.100 *********** 2025-06-22 19:41:33.101150 | orchestrator | changed: [testbed-manager] 2025-06-22 19:41:33.103972 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:41:33.104021 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:41:33.104999 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:41:33.106480 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:41:33.107022 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:41:33.107855 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:41:33.109361 | orchestrator | 2025-06-22 19:41:33.110141 | orchestrator | TASK [Copy workarounds systemd 
unit file] ************************************** 2025-06-22 19:41:33.111237 | orchestrator | Sunday 22 June 2025 19:41:33 +0000 (0:00:01.633) 0:00:12.734 *********** 2025-06-22 19:41:34.720591 | orchestrator | changed: [testbed-manager] 2025-06-22 19:41:34.721369 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:41:34.722256 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:41:34.723275 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:41:34.723893 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:41:34.724812 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:41:34.725413 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:41:34.726807 | orchestrator | 2025-06-22 19:41:34.727093 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2025-06-22 19:41:34.730948 | orchestrator | Sunday 22 June 2025 19:41:34 +0000 (0:00:01.616) 0:00:14.351 *********** 2025-06-22 19:41:36.259500 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:41:36.260368 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:41:36.262372 | orchestrator | ok: [testbed-manager] 2025-06-22 19:41:36.262621 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:41:36.266000 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:41:36.266503 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:41:36.267108 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:41:36.268644 | orchestrator | 2025-06-22 19:41:36.270120 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2025-06-22 19:41:36.271417 | orchestrator | Sunday 22 June 2025 19:41:36 +0000 (0:00:01.541) 0:00:15.893 *********** 2025-06-22 19:41:37.975074 | orchestrator | changed: [testbed-manager] 2025-06-22 19:41:37.975258 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:41:37.979431 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:41:37.979908 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:41:37.980843 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:41:37.982014 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:41:37.982807 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:41:37.983649 | orchestrator | 2025-06-22 19:41:37.984860 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2025-06-22 19:41:37.985897 | orchestrator | Sunday 22 June 2025 19:41:37 +0000 (0:00:01.712) 0:00:17.605 *********** 2025-06-22 19:41:38.135972 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:41:38.236926 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:41:38.315300 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:41:38.393922 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:41:38.468767 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:41:38.603146 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:41:38.603351 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:41:38.604602 | orchestrator | 2025-06-22 19:41:38.605247 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2025-06-22 19:41:38.606152 | orchestrator | 2025-06-22 19:41:38.606884 | orchestrator | TASK [Install python3-docker] ************************************************** 2025-06-22 19:41:38.607592 | orchestrator | Sunday 22 June 2025 19:41:38 +0000 (0:00:00.632) 0:00:18.237 *********** 2025-06-22 19:41:41.312227 | orchestrator | ok: [testbed-manager] 2025-06-22 19:41:41.312330 | orchestrator | ok: [testbed-node-4] 
2025-06-22 19:41:41.312343 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:41:41.312409 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:41:41.312422 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:41:41.316006 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:41:41.316055 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:41:41.316667 | orchestrator | 2025-06-22 19:41:41.317767 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:41:41.317881 | orchestrator | 2025-06-22 19:41:41 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-22 19:41:41.318220 | orchestrator | 2025-06-22 19:41:41 | INFO  | Please wait and do not abort execution. 2025-06-22 19:41:41.319377 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-22 19:41:41.319773 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:41:41.320605 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:41:41.321352 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:41:41.322825 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:41:41.323790 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:41:41.324682 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:41:41.325630 | orchestrator | 2025-06-22 19:41:41.326376 | orchestrator | 2025-06-22 19:41:41.327098 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:41:41.328052 | orchestrator | Sunday 22 June 2025 19:41:41 +0000 (0:00:02.708) 0:00:20.946 *********** 2025-06-22 19:41:41.329005 | orchestrator | =============================================================================== 2025-06-22 19:41:41.329645 | orchestrator | Run update-ca-certificates ---------------------------------------------- 4.08s 2025-06-22 19:41:41.330398 | orchestrator | Install python3-docker -------------------------------------------------- 2.71s 2025-06-22 19:41:41.330988 | orchestrator | Apply netplan configuration --------------------------------------------- 2.22s 2025-06-22 19:41:41.331673 | orchestrator | Apply netplan configuration --------------------------------------------- 1.79s 2025-06-22 19:41:41.332123 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.71s 2025-06-22 19:41:41.332567 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.63s 2025-06-22 19:41:41.333288 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.62s 2025-06-22 19:41:41.333720 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.54s 2025-06-22 19:41:41.334183 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.50s 2025-06-22 19:41:41.334957 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.69s 2025-06-22 19:41:41.335215 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.68s 2025-06-22 19:41:41.335724 | orchestrator | 
Enable and start workarounds.service (RedHat) --------------------------- 0.63s 2025-06-22 19:41:41.904280 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2025-06-22 19:41:43.555605 | orchestrator | Registering Redlock._acquired_script 2025-06-22 19:41:43.555700 | orchestrator | Registering Redlock._extend_script 2025-06-22 19:41:43.555715 | orchestrator | Registering Redlock._release_script 2025-06-22 19:41:43.606944 | orchestrator | 2025-06-22 19:41:43 | INFO  | Task e99e2f84-4193-42bb-b85e-71e4a153b24a (reboot) was prepared for execution. 2025-06-22 19:41:43.607019 | orchestrator | 2025-06-22 19:41:43 | INFO  | It takes a moment until task e99e2f84-4193-42bb-b85e-71e4a153b24a (reboot) has been started and output is visible here. 2025-06-22 19:41:47.239075 | orchestrator | 2025-06-22 19:41:47.240020 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-06-22 19:41:47.240894 | orchestrator | 2025-06-22 19:41:47.243040 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-06-22 19:41:47.244407 | orchestrator | Sunday 22 June 2025 19:41:47 +0000 (0:00:00.197) 0:00:00.197 *********** 2025-06-22 19:41:47.325985 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:41:47.326239 | orchestrator | 2025-06-22 19:41:47.327028 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-06-22 19:41:47.327624 | orchestrator | Sunday 22 June 2025 19:41:47 +0000 (0:00:00.090) 0:00:00.287 *********** 2025-06-22 19:41:48.215082 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:41:48.216135 | orchestrator | 2025-06-22 19:41:48.218688 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-06-22 19:41:48.219315 | orchestrator | Sunday 22 June 2025 19:41:48 +0000 (0:00:00.888) 0:00:01.175 *********** 2025-06-22 19:41:48.332078 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:41:48.333250 | orchestrator | 2025-06-22 19:41:48.333739 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-06-22 19:41:48.334771 | orchestrator | 2025-06-22 19:41:48.335769 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-06-22 19:41:48.336559 | orchestrator | Sunday 22 June 2025 19:41:48 +0000 (0:00:00.113) 0:00:01.288 *********** 2025-06-22 19:41:48.424288 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:41:48.424368 | orchestrator | 2025-06-22 19:41:48.425209 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-06-22 19:41:48.425858 | orchestrator | Sunday 22 June 2025 19:41:48 +0000 (0:00:00.095) 0:00:01.383 *********** 2025-06-22 19:41:49.045299 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:41:49.045961 | orchestrator | 2025-06-22 19:41:49.046727 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-06-22 19:41:49.047584 | orchestrator | Sunday 22 June 2025 19:41:49 +0000 (0:00:00.621) 0:00:02.005 *********** 2025-06-22 19:41:49.142231 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:41:49.142897 | orchestrator | 2025-06-22 19:41:49.144488 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-06-22 19:41:49.144656 | orchestrator | 2025-06-22 19:41:49.145471 | orchestrator | TASK [Exit playbook, if user did not mean to 
reboot systems] ******************* 2025-06-22 19:41:49.146581 | orchestrator | Sunday 22 June 2025 19:41:49 +0000 (0:00:00.097) 0:00:02.102 *********** 2025-06-22 19:41:49.302626 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:41:49.303447 | orchestrator | 2025-06-22 19:41:49.304640 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-06-22 19:41:49.306126 | orchestrator | Sunday 22 June 2025 19:41:49 +0000 (0:00:00.160) 0:00:02.263 *********** 2025-06-22 19:41:49.974996 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:41:49.976200 | orchestrator | 2025-06-22 19:41:49.976243 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-06-22 19:41:49.976648 | orchestrator | Sunday 22 June 2025 19:41:49 +0000 (0:00:00.672) 0:00:02.935 *********** 2025-06-22 19:41:50.081554 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:41:50.082693 | orchestrator | 2025-06-22 19:41:50.083433 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-06-22 19:41:50.084169 | orchestrator | 2025-06-22 19:41:50.084962 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-06-22 19:41:50.085611 | orchestrator | Sunday 22 June 2025 19:41:50 +0000 (0:00:00.104) 0:00:03.040 *********** 2025-06-22 19:41:50.182802 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:41:50.184033 | orchestrator | 2025-06-22 19:41:50.184755 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-06-22 19:41:50.185731 | orchestrator | Sunday 22 June 2025 19:41:50 +0000 (0:00:00.102) 0:00:03.143 *********** 2025-06-22 19:41:50.842928 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:41:50.843947 | orchestrator | 2025-06-22 19:41:50.844734 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-06-22 19:41:50.845861 | orchestrator | Sunday 22 June 2025 19:41:50 +0000 (0:00:00.658) 0:00:03.802 *********** 2025-06-22 19:41:50.953681 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:41:50.954123 | orchestrator | 2025-06-22 19:41:50.955657 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-06-22 19:41:50.956387 | orchestrator | 2025-06-22 19:41:50.957249 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-06-22 19:41:50.958167 | orchestrator | Sunday 22 June 2025 19:41:50 +0000 (0:00:00.111) 0:00:03.914 *********** 2025-06-22 19:41:51.044655 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:41:51.045277 | orchestrator | 2025-06-22 19:41:51.046118 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-06-22 19:41:51.046820 | orchestrator | Sunday 22 June 2025 19:41:51 +0000 (0:00:00.091) 0:00:04.005 *********** 2025-06-22 19:41:51.688488 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:41:51.689082 | orchestrator | 2025-06-22 19:41:51.690125 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-06-22 19:41:51.690561 | orchestrator | Sunday 22 June 2025 19:41:51 +0000 (0:00:00.642) 0:00:04.647 *********** 2025-06-22 19:41:51.797895 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:41:51.798469 | orchestrator | 2025-06-22 19:41:51.799077 | orchestrator | PLAY [Reboot systems] 
********************************************************** 2025-06-22 19:41:51.799760 | orchestrator | 2025-06-22 19:41:51.800482 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-06-22 19:41:51.801208 | orchestrator | Sunday 22 June 2025 19:41:51 +0000 (0:00:00.111) 0:00:04.758 *********** 2025-06-22 19:41:51.887489 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:41:51.887602 | orchestrator | 2025-06-22 19:41:51.888320 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-06-22 19:41:51.888851 | orchestrator | Sunday 22 June 2025 19:41:51 +0000 (0:00:00.088) 0:00:04.846 *********** 2025-06-22 19:41:52.567078 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:41:52.567230 | orchestrator | 2025-06-22 19:41:52.568079 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-06-22 19:41:52.569036 | orchestrator | Sunday 22 June 2025 19:41:52 +0000 (0:00:00.679) 0:00:05.526 *********** 2025-06-22 19:41:52.596309 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:41:52.596445 | orchestrator | 2025-06-22 19:41:52.597203 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:41:52.598675 | orchestrator | 2025-06-22 19:41:52 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-22 19:41:52.598711 | orchestrator | 2025-06-22 19:41:52 | INFO  | Please wait and do not abort execution. 2025-06-22 19:41:52.599712 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:41:52.600732 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:41:52.601404 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:41:52.602145 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:41:52.603057 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:41:52.603269 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:41:52.603788 | orchestrator | 2025-06-22 19:41:52.604383 | orchestrator | 2025-06-22 19:41:52.605874 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:41:52.606631 | orchestrator | Sunday 22 June 2025 19:41:52 +0000 (0:00:00.031) 0:00:05.557 *********** 2025-06-22 19:41:52.606880 | orchestrator | =============================================================================== 2025-06-22 19:41:52.607300 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.16s 2025-06-22 19:41:52.607777 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.63s 2025-06-22 19:41:52.608203 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.57s 2025-06-22 19:41:52.977934 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2025-06-22 19:41:54.546940 | orchestrator | Registering Redlock._acquired_script 2025-06-22 19:41:54.547028 | orchestrator | Registering Redlock._extend_script 2025-06-22 19:41:54.547043 | orchestrator | Registering Redlock._release_script 2025-06-22 
19:41:54.600340 | orchestrator | 2025-06-22 19:41:54 | INFO  | Task b748d3db-c650-44c9-aa22-0a55fdf89970 (wait-for-connection) was prepared for execution. 2025-06-22 19:41:54.600454 | orchestrator | 2025-06-22 19:41:54 | INFO  | It takes a moment until task b748d3db-c650-44c9-aa22-0a55fdf89970 (wait-for-connection) has been started and output is visible here. 2025-06-22 19:41:58.283958 | orchestrator | 2025-06-22 19:41:58.284622 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2025-06-22 19:41:58.285616 | orchestrator | 2025-06-22 19:41:58.287060 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2025-06-22 19:41:58.287765 | orchestrator | Sunday 22 June 2025 19:41:58 +0000 (0:00:00.210) 0:00:00.210 *********** 2025-06-22 19:42:12.128342 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:42:12.128468 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:42:12.129027 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:42:12.132362 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:42:12.132391 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:42:12.133363 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:42:12.134803 | orchestrator | 2025-06-22 19:42:12.136064 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:42:12.136642 | orchestrator | 2025-06-22 19:42:12 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-22 19:42:12.136819 | orchestrator | 2025-06-22 19:42:12 | INFO  | Please wait and do not abort execution. 2025-06-22 19:42:12.137895 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:42:12.138622 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:42:12.139248 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:42:12.139934 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:42:12.140587 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:42:12.141614 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:42:12.142141 | orchestrator | 2025-06-22 19:42:12.143274 | orchestrator | 2025-06-22 19:42:12.143847 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:42:12.144273 | orchestrator | Sunday 22 June 2025 19:42:12 +0000 (0:00:13.842) 0:00:14.053 *********** 2025-06-22 19:42:12.144913 | orchestrator | =============================================================================== 2025-06-22 19:42:12.145298 | orchestrator | Wait until remote system is reachable ---------------------------------- 13.84s 2025-06-22 19:42:12.673769 | orchestrator | + osism apply hddtemp 2025-06-22 19:42:14.378626 | orchestrator | Registering Redlock._acquired_script 2025-06-22 19:42:14.378728 | orchestrator | Registering Redlock._extend_script 2025-06-22 19:42:14.378743 | orchestrator | Registering Redlock._release_script 2025-06-22 19:42:14.436925 | orchestrator | 2025-06-22 19:42:14 | INFO  | Task 02a606a7-4e24-4c30-a85a-9f9a5831cca6 (hddtemp) was prepared for execution. 
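
Editor's note: the two plays above implement a fire-and-forget reboot of the testbed nodes — the reboot play triggers the restart without waiting, and a separate wait-for-connection play polls until every node answers over SSH again. The invocation of the reboot play itself is not visible in this excerpt; a minimal sketch of the driving sequence, assuming it uses the same osism CLI flags as the wait-for-connection call that does appear in the log, could look like this:

    set -e

    # Trigger the reboot on all testbed nodes without waiting for them to return.
    # The play name and flags on this line are an assumption; only the second
    # command below appears verbatim in the log.
    osism apply reboot -l testbed-nodes -e ireallymeanit=yes

    # Block until every node is reachable again (verbatim from the log).
    osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes
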
2025-06-22 19:42:14.437024 | orchestrator | 2025-06-22 19:42:14 | INFO  | It takes a moment until task 02a606a7-4e24-4c30-a85a-9f9a5831cca6 (hddtemp) has been started and output is visible here. 2025-06-22 19:42:18.174686 | orchestrator | 2025-06-22 19:42:18.176263 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2025-06-22 19:42:18.176640 | orchestrator | 2025-06-22 19:42:18.177669 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2025-06-22 19:42:18.178658 | orchestrator | Sunday 22 June 2025 19:42:18 +0000 (0:00:00.233) 0:00:00.233 *********** 2025-06-22 19:42:18.310209 | orchestrator | ok: [testbed-manager] 2025-06-22 19:42:18.378434 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:42:18.446863 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:42:18.516887 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:42:18.655151 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:42:18.770217 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:42:18.771542 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:42:18.772603 | orchestrator | 2025-06-22 19:42:18.773603 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2025-06-22 19:42:18.774916 | orchestrator | Sunday 22 June 2025 19:42:18 +0000 (0:00:00.596) 0:00:00.829 *********** 2025-06-22 19:42:19.794795 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 19:42:19.797336 | orchestrator | 2025-06-22 19:42:19.797403 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2025-06-22 19:42:19.797428 | orchestrator | Sunday 22 June 2025 19:42:19 +0000 (0:00:01.023) 0:00:01.853 *********** 2025-06-22 19:42:21.755427 | orchestrator | ok: [testbed-manager] 2025-06-22 19:42:21.756473 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:42:21.757188 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:42:21.758229 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:42:21.760253 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:42:21.760647 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:42:21.761760 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:42:21.762258 | orchestrator | 2025-06-22 19:42:21.762807 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2025-06-22 19:42:21.763593 | orchestrator | Sunday 22 June 2025 19:42:21 +0000 (0:00:01.962) 0:00:03.815 *********** 2025-06-22 19:42:22.266662 | orchestrator | changed: [testbed-manager] 2025-06-22 19:42:22.345892 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:42:22.787687 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:42:22.788884 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:42:22.789770 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:42:22.790611 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:42:22.792649 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:42:22.793186 | orchestrator | 2025-06-22 19:42:22.793717 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2025-06-22 19:42:22.795081 | orchestrator | Sunday 22 June 2025 19:42:22 +0000 (0:00:01.029) 0:00:04.844 *********** 2025-06-22 19:42:23.786825 | orchestrator | ok: [testbed-node-0] 2025-06-22 
19:42:23.789647 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:42:23.791849 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:42:23.793622 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:42:23.794750 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:42:23.795685 | orchestrator | ok: [testbed-manager] 2025-06-22 19:42:23.796765 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:42:23.797438 | orchestrator | 2025-06-22 19:42:23.798474 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2025-06-22 19:42:23.799383 | orchestrator | Sunday 22 June 2025 19:42:23 +0000 (0:00:01.003) 0:00:05.847 *********** 2025-06-22 19:42:24.143856 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:42:24.220254 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:42:24.292542 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:42:24.361131 | orchestrator | changed: [testbed-manager] 2025-06-22 19:42:24.474334 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:42:24.474982 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:42:24.478835 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:42:24.478869 | orchestrator | 2025-06-22 19:42:24.478882 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2025-06-22 19:42:24.478895 | orchestrator | Sunday 22 June 2025 19:42:24 +0000 (0:00:00.686) 0:00:06.534 *********** 2025-06-22 19:42:36.241634 | orchestrator | changed: [testbed-manager] 2025-06-22 19:42:36.241970 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:42:36.242000 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:42:36.243754 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:42:36.244676 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:42:36.245339 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:42:36.246585 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:42:36.247005 | orchestrator | 2025-06-22 19:42:36.247422 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2025-06-22 19:42:36.248081 | orchestrator | Sunday 22 June 2025 19:42:36 +0000 (0:00:11.763) 0:00:18.297 *********** 2025-06-22 19:42:37.604775 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 19:42:37.608054 | orchestrator | 2025-06-22 19:42:37.608095 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2025-06-22 19:42:37.608688 | orchestrator | Sunday 22 June 2025 19:42:37 +0000 (0:00:01.363) 0:00:19.661 *********** 2025-06-22 19:42:39.550872 | orchestrator | changed: [testbed-manager] 2025-06-22 19:42:39.551904 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:42:39.553109 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:42:39.555171 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:42:39.556733 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:42:39.557222 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:42:39.558108 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:42:39.559072 | orchestrator | 2025-06-22 19:42:39.559711 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:42:39.560591 | orchestrator | 2025-06-22 19:42:39 | INFO  | Play has been completed. 
There may now be a delay until all logs have been written. 2025-06-22 19:42:39.560682 | orchestrator | 2025-06-22 19:42:39 | INFO  | Please wait and do not abort execution. 2025-06-22 19:42:39.561915 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:42:39.562722 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-22 19:42:39.563596 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-22 19:42:39.564285 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-22 19:42:39.564989 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-22 19:42:39.565628 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-22 19:42:39.566823 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-22 19:42:39.567086 | orchestrator | 2025-06-22 19:42:39.567713 | orchestrator | 2025-06-22 19:42:39.568312 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:42:39.568777 | orchestrator | Sunday 22 June 2025 19:42:39 +0000 (0:00:01.949) 0:00:21.610 *********** 2025-06-22 19:42:39.569356 | orchestrator | =============================================================================== 2025-06-22 19:42:39.569830 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 11.76s 2025-06-22 19:42:39.570098 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.96s 2025-06-22 19:42:39.570511 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.95s 2025-06-22 19:42:39.571098 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.36s 2025-06-22 19:42:39.571329 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.03s 2025-06-22 19:42:39.572137 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.02s 2025-06-22 19:42:39.573203 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.00s 2025-06-22 19:42:39.574273 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.69s 2025-06-22 19:42:39.575001 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.60s 2025-06-22 19:42:40.200618 | orchestrator | ++ semver 9.1.0 7.1.1 2025-06-22 19:42:40.262198 | orchestrator | + [[ 1 -ge 0 ]] 2025-06-22 19:42:40.262288 | orchestrator | + sudo systemctl restart manager.service 2025-06-22 19:42:53.919474 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-06-22 19:42:53.919649 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-06-22 19:42:53.919666 | orchestrator | + local max_attempts=60 2025-06-22 19:42:53.919680 | orchestrator | + local name=ceph-ansible 2025-06-22 19:42:53.919691 | orchestrator | + local attempt_num=1 2025-06-22 19:42:53.919702 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-22 19:42:53.956538 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-06-22 19:42:53.956624 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-22 
19:42:53.956637 | orchestrator | + sleep 5 2025-06-22 19:42:58.962808 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-22 19:42:59.002339 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-06-22 19:42:59.002442 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-22 19:42:59.002456 | orchestrator | + sleep 5 2025-06-22 19:43:04.006010 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-22 19:43:04.043657 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-06-22 19:43:04.043758 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-22 19:43:04.043780 | orchestrator | + sleep 5 2025-06-22 19:43:09.049196 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-22 19:43:09.087880 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-06-22 19:43:09.087990 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-22 19:43:09.088006 | orchestrator | + sleep 5 2025-06-22 19:43:14.092473 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-22 19:43:14.133952 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-06-22 19:43:14.134100 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-22 19:43:14.134117 | orchestrator | + sleep 5 2025-06-22 19:43:19.139038 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-22 19:43:19.173454 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-06-22 19:43:19.173589 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-22 19:43:19.173606 | orchestrator | + sleep 5 2025-06-22 19:43:24.177989 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-22 19:43:24.217977 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-06-22 19:43:24.218084 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-22 19:43:24.218093 | orchestrator | + sleep 5 2025-06-22 19:43:29.225816 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-22 19:43:29.272459 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-06-22 19:43:29.272567 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-22 19:43:29.272616 | orchestrator | + sleep 5 2025-06-22 19:43:34.274645 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-22 19:43:34.319309 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-06-22 19:43:34.319384 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-22 19:43:34.319399 | orchestrator | + sleep 5 2025-06-22 19:43:39.324018 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-22 19:43:39.365049 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-06-22 19:43:39.365129 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-22 19:43:39.365138 | orchestrator | + sleep 5 2025-06-22 19:43:44.370281 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-22 19:43:44.410984 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-06-22 19:43:44.411075 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-22 19:43:44.411088 | orchestrator | + sleep 5 2025-06-22 19:43:49.417603 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-22 19:43:49.460252 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-06-22 
19:43:49.460357 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-22 19:43:49.460373 | orchestrator | + sleep 5 2025-06-22 19:43:54.465901 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-22 19:43:54.506407 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-06-22 19:43:54.506504 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-06-22 19:43:54.506517 | orchestrator | + sleep 5 2025-06-22 19:43:59.511288 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-06-22 19:43:59.546947 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-06-22 19:43:59.547025 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-06-22 19:43:59.547067 | orchestrator | + local max_attempts=60 2025-06-22 19:43:59.547080 | orchestrator | + local name=kolla-ansible 2025-06-22 19:43:59.547091 | orchestrator | + local attempt_num=1 2025-06-22 19:43:59.547490 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-06-22 19:43:59.581115 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-06-22 19:43:59.581192 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-06-22 19:43:59.581204 | orchestrator | + local max_attempts=60 2025-06-22 19:43:59.581214 | orchestrator | + local name=osism-ansible 2025-06-22 19:43:59.581223 | orchestrator | + local attempt_num=1 2025-06-22 19:43:59.582158 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-06-22 19:43:59.620945 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-06-22 19:43:59.621018 | orchestrator | + [[ true == \t\r\u\e ]] 2025-06-22 19:43:59.621031 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-06-22 19:43:59.796567 | orchestrator | ARA in ceph-ansible already disabled. 2025-06-22 19:43:59.943227 | orchestrator | ARA in kolla-ansible already disabled. 2025-06-22 19:44:00.106339 | orchestrator | ARA in osism-ansible already disabled. 2025-06-22 19:44:00.262399 | orchestrator | ARA in osism-kubernetes already disabled. 2025-06-22 19:44:00.262624 | orchestrator | + osism apply gather-facts 2025-06-22 19:44:02.029063 | orchestrator | Registering Redlock._acquired_script 2025-06-22 19:44:02.029165 | orchestrator | Registering Redlock._extend_script 2025-06-22 19:44:02.029180 | orchestrator | Registering Redlock._release_script 2025-06-22 19:44:02.089580 | orchestrator | 2025-06-22 19:44:02 | INFO  | Task 1a04a7c1-152f-4feb-add5-648d4d3c768d (gather-facts) was prepared for execution. 2025-06-22 19:44:02.089722 | orchestrator | 2025-06-22 19:44:02 | INFO  | It takes a moment until task 1a04a7c1-152f-4feb-add5-648d4d3c768d (gather-facts) has been started and output is visible here. 
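
Editor's note: the `set -x` trace above shows a health gate on the ceph-ansible, kolla-ansible and osism-ansible containers after the manager restart. The individual commands (docker inspect of the health status, the attempt counter, the 5-second sleep) are taken from the trace; the loop construct and the failure message on timeout are assumptions, since only the expanded commands are logged. A reconstruction of the helper:

    # Reconstructed from the set -x trace; loop shape and error handling assumed.
    wait_for_container_healthy() {
        local max_attempts="$1"
        local name="$2"
        local attempt_num=1

        # Poll the health status Docker reports for the named container.
        until [[ "$(/usr/bin/docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
            # Give up once the maximum number of attempts has been reached.
            if (( attempt_num++ == max_attempts )); then
                echo "Container $name did not become healthy in time" >&2
                return 1
            fi
            sleep 5
        done
    }

    # Usage as seen in the log: up to 60 attempts, 5 seconds apart, per container.
    wait_for_container_healthy 60 ceph-ansible
    wait_for_container_healthy 60 kolla-ansible
    wait_for_container_healthy 60 osism-ansible
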
2025-06-22 19:44:06.199257 | orchestrator | 2025-06-22 19:44:06.202499 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-06-22 19:44:06.204353 | orchestrator | 2025-06-22 19:44:06.206383 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-06-22 19:44:06.206599 | orchestrator | Sunday 22 June 2025 19:44:06 +0000 (0:00:00.216) 0:00:00.216 *********** 2025-06-22 19:44:12.197768 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:44:12.198870 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:44:12.202954 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:44:12.204191 | orchestrator | ok: [testbed-manager] 2025-06-22 19:44:12.207083 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:44:12.208506 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:44:12.210498 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:44:12.211760 | orchestrator | 2025-06-22 19:44:12.213166 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-06-22 19:44:12.214844 | orchestrator | 2025-06-22 19:44:12.216847 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-06-22 19:44:12.218097 | orchestrator | Sunday 22 June 2025 19:44:12 +0000 (0:00:06.002) 0:00:06.218 *********** 2025-06-22 19:44:12.372967 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:44:12.450466 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:44:12.526793 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:44:12.602619 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:44:12.680121 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:44:12.724223 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:44:12.726378 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:44:12.728506 | orchestrator | 2025-06-22 19:44:12.729705 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:44:12.730658 | orchestrator | 2025-06-22 19:44:12 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-22 19:44:12.731471 | orchestrator | 2025-06-22 19:44:12 | INFO  | Please wait and do not abort execution. 
2025-06-22 19:44:12.732944 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-22 19:44:12.734166 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-22 19:44:12.735188 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-22 19:44:12.736378 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-22 19:44:12.737423 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-22 19:44:12.738694 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-22 19:44:12.739333 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-22 19:44:12.740264 | orchestrator | 2025-06-22 19:44:12.740883 | orchestrator | 2025-06-22 19:44:12.741415 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:44:12.741909 | orchestrator | Sunday 22 June 2025 19:44:12 +0000 (0:00:00.526) 0:00:06.745 *********** 2025-06-22 19:44:12.742314 | orchestrator | =============================================================================== 2025-06-22 19:44:12.742784 | orchestrator | Gathers facts about hosts ----------------------------------------------- 6.00s 2025-06-22 19:44:12.743412 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.53s 2025-06-22 19:44:13.340467 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2025-06-22 19:44:13.361069 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2025-06-22 19:44:13.375178 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2025-06-22 19:44:13.395835 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2025-06-22 19:44:13.410269 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2025-06-22 19:44:13.421325 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2025-06-22 19:44:13.431919 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2025-06-22 19:44:13.446407 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2025-06-22 19:44:13.460061 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2025-06-22 19:44:13.476834 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2025-06-22 19:44:13.492532 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2025-06-22 19:44:13.507889 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2025-06-22 19:44:13.523765 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh 
/usr/local/bin/upgrade-infrastructure 2025-06-22 19:44:13.535955 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2025-06-22 19:44:13.547803 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2025-06-22 19:44:13.561381 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2025-06-22 19:44:13.581059 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2025-06-22 19:44:13.593850 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2025-06-22 19:44:13.606149 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2025-06-22 19:44:13.616831 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2025-06-22 19:44:13.629127 | orchestrator | + [[ false == \t\r\u\e ]] 2025-06-22 19:44:13.824808 | orchestrator | ok: Runtime: 0:20:14.536352 2025-06-22 19:44:13.938124 | 2025-06-22 19:44:13.938273 | TASK [Deploy services] 2025-06-22 19:44:14.478757 | orchestrator | skipping: Conditional result was False 2025-06-22 19:44:14.496073 | 2025-06-22 19:44:14.496242 | TASK [Deploy in a nutshell] 2025-06-22 19:44:15.148605 | orchestrator | + set -e 2025-06-22 19:44:15.150143 | orchestrator | 2025-06-22 19:44:15.150169 | orchestrator | # PULL IMAGES 2025-06-22 19:44:15.150175 | orchestrator | 2025-06-22 19:44:15.150182 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-06-22 19:44:15.150190 | orchestrator | ++ export INTERACTIVE=false 2025-06-22 19:44:15.150196 | orchestrator | ++ INTERACTIVE=false 2025-06-22 19:44:15.150215 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-06-22 19:44:15.150224 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-06-22 19:44:15.150229 | orchestrator | + source /opt/manager-vars.sh 2025-06-22 19:44:15.150233 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-06-22 19:44:15.150240 | orchestrator | ++ NUMBER_OF_NODES=6 2025-06-22 19:44:15.150244 | orchestrator | ++ export CEPH_VERSION=reef 2025-06-22 19:44:15.150251 | orchestrator | ++ CEPH_VERSION=reef 2025-06-22 19:44:15.150255 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-06-22 19:44:15.150262 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-06-22 19:44:15.150266 | orchestrator | ++ export MANAGER_VERSION=9.1.0 2025-06-22 19:44:15.150271 | orchestrator | ++ MANAGER_VERSION=9.1.0 2025-06-22 19:44:15.150275 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-06-22 19:44:15.150280 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-06-22 19:44:15.150284 | orchestrator | ++ export ARA=false 2025-06-22 19:44:15.150287 | orchestrator | ++ ARA=false 2025-06-22 19:44:15.150291 | orchestrator | ++ export DEPLOY_MODE=manager 2025-06-22 19:44:15.150295 | orchestrator | ++ DEPLOY_MODE=manager 2025-06-22 19:44:15.150299 | orchestrator | ++ export TEMPEST=false 2025-06-22 19:44:15.150303 | orchestrator | ++ TEMPEST=false 2025-06-22 19:44:15.150306 | orchestrator | ++ export IS_ZUUL=true 2025-06-22 19:44:15.150310 | orchestrator | ++ IS_ZUUL=true 2025-06-22 19:44:15.150314 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.14 2025-06-22 19:44:15.150318 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.14 2025-06-22 
19:44:15.150322 | orchestrator | ++ export EXTERNAL_API=false 2025-06-22 19:44:15.150338 | orchestrator | ++ EXTERNAL_API=false 2025-06-22 19:44:15.150342 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-06-22 19:44:15.150346 | orchestrator | ++ IMAGE_USER=ubuntu 2025-06-22 19:44:15.150350 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-06-22 19:44:15.150353 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-06-22 19:44:15.150357 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-06-22 19:44:15.150364 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-06-22 19:44:15.150368 | orchestrator | + echo 2025-06-22 19:44:15.150372 | orchestrator | + echo '# PULL IMAGES' 2025-06-22 19:44:15.150376 | orchestrator | + echo 2025-06-22 19:44:15.150384 | orchestrator | ++ semver 9.1.0 7.0.0 2025-06-22 19:44:15.209923 | orchestrator | + [[ 1 -ge 0 ]] 2025-06-22 19:44:15.209974 | orchestrator | + osism apply -r 2 -e custom pull-images 2025-06-22 19:44:16.657686 | orchestrator | 2025-06-22 19:44:16 | INFO  | Trying to run play pull-images in environment custom 2025-06-22 19:44:16.662074 | orchestrator | Registering Redlock._acquired_script 2025-06-22 19:44:16.662092 | orchestrator | Registering Redlock._extend_script 2025-06-22 19:44:16.662097 | orchestrator | Registering Redlock._release_script 2025-06-22 19:44:16.724624 | orchestrator | 2025-06-22 19:44:16 | INFO  | Task 3a168d7a-a568-45c2-a3bc-55b2abdb6914 (pull-images) was prepared for execution. 2025-06-22 19:44:16.724690 | orchestrator | 2025-06-22 19:44:16 | INFO  | It takes a moment until task 3a168d7a-a568-45c2-a3bc-55b2abdb6914 (pull-images) has been started and output is visible here. 2025-06-22 19:44:20.375756 | orchestrator | 2025-06-22 19:44:20.376632 | orchestrator | PLAY [Pull images] ************************************************************* 2025-06-22 19:44:20.381428 | orchestrator | 2025-06-22 19:44:20.381468 | orchestrator | TASK [Pull keystone image] ***************************************************** 2025-06-22 19:44:20.381477 | orchestrator | Sunday 22 June 2025 19:44:20 +0000 (0:00:00.149) 0:00:00.149 *********** 2025-06-22 19:45:28.683452 | orchestrator | changed: [testbed-manager] 2025-06-22 19:45:28.683563 | orchestrator | 2025-06-22 19:45:28.683590 | orchestrator | TASK [Pull other images] ******************************************************* 2025-06-22 19:45:28.683969 | orchestrator | Sunday 22 June 2025 19:45:28 +0000 (0:01:08.308) 0:01:08.457 *********** 2025-06-22 19:46:20.510592 | orchestrator | changed: [testbed-manager] => (item=aodh) 2025-06-22 19:46:20.510706 | orchestrator | changed: [testbed-manager] => (item=barbican) 2025-06-22 19:46:20.510720 | orchestrator | changed: [testbed-manager] => (item=ceilometer) 2025-06-22 19:46:20.511017 | orchestrator | changed: [testbed-manager] => (item=cinder) 2025-06-22 19:46:20.512194 | orchestrator | changed: [testbed-manager] => (item=common) 2025-06-22 19:46:20.513231 | orchestrator | changed: [testbed-manager] => (item=designate) 2025-06-22 19:46:20.514396 | orchestrator | changed: [testbed-manager] => (item=glance) 2025-06-22 19:46:20.514962 | orchestrator | changed: [testbed-manager] => (item=grafana) 2025-06-22 19:46:20.515673 | orchestrator | changed: [testbed-manager] => (item=horizon) 2025-06-22 19:46:20.516418 | orchestrator | changed: [testbed-manager] => (item=ironic) 2025-06-22 19:46:20.517456 | orchestrator | changed: [testbed-manager] => (item=loadbalancer) 2025-06-22 19:46:20.518750 | orchestrator | changed: [testbed-manager] => (item=magnum) 
2025-06-22 19:46:20.520020 | orchestrator | changed: [testbed-manager] => (item=mariadb) 2025-06-22 19:46:20.521000 | orchestrator | changed: [testbed-manager] => (item=memcached) 2025-06-22 19:46:20.521792 | orchestrator | changed: [testbed-manager] => (item=neutron) 2025-06-22 19:46:20.525126 | orchestrator | changed: [testbed-manager] => (item=nova) 2025-06-22 19:46:20.526151 | orchestrator | changed: [testbed-manager] => (item=octavia) 2025-06-22 19:46:20.527167 | orchestrator | changed: [testbed-manager] => (item=opensearch) 2025-06-22 19:46:20.527847 | orchestrator | changed: [testbed-manager] => (item=openvswitch) 2025-06-22 19:46:20.528850 | orchestrator | changed: [testbed-manager] => (item=ovn) 2025-06-22 19:46:20.529581 | orchestrator | changed: [testbed-manager] => (item=placement) 2025-06-22 19:46:20.530367 | orchestrator | changed: [testbed-manager] => (item=rabbitmq) 2025-06-22 19:46:20.531127 | orchestrator | changed: [testbed-manager] => (item=redis) 2025-06-22 19:46:20.531519 | orchestrator | changed: [testbed-manager] => (item=skyline) 2025-06-22 19:46:20.531883 | orchestrator | 2025-06-22 19:46:20.533343 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:46:20.533365 | orchestrator | 2025-06-22 19:46:20 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-22 19:46:20.533380 | orchestrator | 2025-06-22 19:46:20 | INFO  | Please wait and do not abort execution. 2025-06-22 19:46:20.534453 | orchestrator | testbed-manager : ok=2  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:46:20.535057 | orchestrator | 2025-06-22 19:46:20.535714 | orchestrator | 2025-06-22 19:46:20.536518 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:46:20.536951 | orchestrator | Sunday 22 June 2025 19:46:20 +0000 (0:00:51.824) 0:02:00.282 *********** 2025-06-22 19:46:20.537702 | orchestrator | =============================================================================== 2025-06-22 19:46:20.538351 | orchestrator | Pull keystone image ---------------------------------------------------- 68.31s 2025-06-22 19:46:20.538814 | orchestrator | Pull other images ------------------------------------------------------ 51.82s 2025-06-22 19:46:22.859456 | orchestrator | 2025-06-22 19:46:22 | INFO  | Trying to run play wipe-partitions in environment custom 2025-06-22 19:46:22.864002 | orchestrator | Registering Redlock._acquired_script 2025-06-22 19:46:22.864033 | orchestrator | Registering Redlock._extend_script 2025-06-22 19:46:22.864221 | orchestrator | Registering Redlock._release_script 2025-06-22 19:46:22.930736 | orchestrator | 2025-06-22 19:46:22 | INFO  | Task 04806daf-0fd4-44a0-953c-0394641d653a (wipe-partitions) was prepared for execution. 2025-06-22 19:46:22.930803 | orchestrator | 2025-06-22 19:46:22 | INFO  | It takes a moment until task 04806daf-0fd4-44a0-953c-0394641d653a (wipe-partitions) has been started and output is visible here. 
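
Editor's note: the image pre-pull above is gated on the manager version, as the `semver 9.1.0 7.0.0` comparison in the trace shows. A sketch of that gate, reconstructed from the `set -x` output; the exact semantics of the `semver` helper (it is assumed to print a signed comparison result, and the trace only shows it printing 1) and of the `-r` flag (assumed to be a retry count) are not confirmed by the log:

    # Variable names come from /opt/manager-vars.sh as echoed in the trace.
    source /opt/manager-vars.sh          # provides MANAGER_VERSION=9.1.0 among others

    # Only run the pull-images play on sufficiently new managers.
    if [[ $(semver "$MANAGER_VERSION" 7.0.0) -ge 0 ]]; then
        # -e custom selects the "custom" environment (see the INFO line above);
        # -r 2 is assumed to be a retry count for the play.
        osism apply -r 2 -e custom pull-images
    fi
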
2025-06-22 19:46:27.156398 | orchestrator | 2025-06-22 19:46:27.157943 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2025-06-22 19:46:27.158249 | orchestrator | 2025-06-22 19:46:27.160985 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2025-06-22 19:46:27.161172 | orchestrator | Sunday 22 June 2025 19:46:27 +0000 (0:00:00.146) 0:00:00.146 *********** 2025-06-22 19:46:27.744395 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:46:27.744521 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:46:27.744536 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:46:27.744781 | orchestrator | 2025-06-22 19:46:27.745108 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2025-06-22 19:46:27.745464 | orchestrator | Sunday 22 June 2025 19:46:27 +0000 (0:00:00.587) 0:00:00.733 *********** 2025-06-22 19:46:27.912144 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:46:28.014337 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:46:28.016220 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:46:28.016252 | orchestrator | 2025-06-22 19:46:28.016652 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2025-06-22 19:46:28.016753 | orchestrator | Sunday 22 June 2025 19:46:28 +0000 (0:00:00.266) 0:00:01.000 *********** 2025-06-22 19:46:28.782286 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:46:28.784688 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:46:28.785130 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:46:28.785642 | orchestrator | 2025-06-22 19:46:28.785938 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2025-06-22 19:46:28.786451 | orchestrator | Sunday 22 June 2025 19:46:28 +0000 (0:00:00.771) 0:00:01.771 *********** 2025-06-22 19:46:28.988340 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:46:29.081050 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:46:29.081231 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:46:29.081586 | orchestrator | 2025-06-22 19:46:29.081989 | orchestrator | TASK [Check device availability] *********************************************** 2025-06-22 19:46:29.085666 | orchestrator | Sunday 22 June 2025 19:46:29 +0000 (0:00:00.299) 0:00:02.070 *********** 2025-06-22 19:46:30.314676 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-06-22 19:46:30.314772 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-06-22 19:46:30.316820 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-06-22 19:46:30.316889 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-06-22 19:46:30.317547 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-06-22 19:46:30.320424 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-06-22 19:46:30.320675 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-06-22 19:46:30.321092 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-06-22 19:46:30.321559 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-06-22 19:46:30.324778 | orchestrator | 2025-06-22 19:46:30.325046 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2025-06-22 19:46:30.325385 | orchestrator | Sunday 22 June 2025 19:46:30 +0000 (0:00:01.235) 0:00:03.306 *********** 2025-06-22 19:46:31.778953 | 
orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2025-06-22 19:46:31.780589 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2025-06-22 19:46:31.780793 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2025-06-22 19:46:31.781109 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2025-06-22 19:46:31.784661 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc) 2025-06-22 19:46:31.785020 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2025-06-22 19:46:31.785342 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2025-06-22 19:46:31.786150 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2025-06-22 19:46:31.786488 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2025-06-22 19:46:31.789876 | orchestrator | 2025-06-22 19:46:31.790270 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2025-06-22 19:46:31.790631 | orchestrator | Sunday 22 June 2025 19:46:31 +0000 (0:00:01.459) 0:00:04.765 *********** 2025-06-22 19:46:34.157501 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-06-22 19:46:34.157588 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-06-22 19:46:34.158132 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-06-22 19:46:34.158669 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-06-22 19:46:34.162485 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-06-22 19:46:34.163084 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-06-22 19:46:34.163679 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-06-22 19:46:34.164133 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-06-22 19:46:34.164559 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-06-22 19:46:34.165102 | orchestrator | 2025-06-22 19:46:34.165626 | orchestrator | TASK [Reload udev rules] ******************************************************* 2025-06-22 19:46:34.166143 | orchestrator | Sunday 22 June 2025 19:46:34 +0000 (0:00:02.383) 0:00:07.148 *********** 2025-06-22 19:46:34.785069 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:46:34.785172 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:46:34.785186 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:46:34.785198 | orchestrator | 2025-06-22 19:46:34.785210 | orchestrator | TASK [Request device events from the kernel] *********************************** 2025-06-22 19:46:34.785223 | orchestrator | Sunday 22 June 2025 19:46:34 +0000 (0:00:00.619) 0:00:07.768 *********** 2025-06-22 19:46:35.463237 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:46:35.463342 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:46:35.463724 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:46:35.464520 | orchestrator | 2025-06-22 19:46:35.467838 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:46:35.467963 | orchestrator | 2025-06-22 19:46:35 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-22 19:46:35.468572 | orchestrator | 2025-06-22 19:46:35 | INFO  | Please wait and do not abort execution. 
2025-06-22 19:46:35.469328 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:46:35.469767 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:46:35.470535 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:46:35.471830 | orchestrator | 2025-06-22 19:46:35.472122 | orchestrator | 2025-06-22 19:46:35.472691 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:46:35.473156 | orchestrator | Sunday 22 June 2025 19:46:35 +0000 (0:00:00.683) 0:00:08.451 *********** 2025-06-22 19:46:35.473565 | orchestrator | =============================================================================== 2025-06-22 19:46:35.473991 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.38s 2025-06-22 19:46:35.474661 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.46s 2025-06-22 19:46:35.477819 | orchestrator | Check device availability ----------------------------------------------- 1.24s 2025-06-22 19:46:35.480672 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.77s 2025-06-22 19:46:35.480908 | orchestrator | Request device events from the kernel ----------------------------------- 0.68s 2025-06-22 19:46:35.482405 | orchestrator | Reload udev rules ------------------------------------------------------- 0.62s 2025-06-22 19:46:35.484511 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.59s 2025-06-22 19:46:35.484904 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.30s 2025-06-22 19:46:35.485316 | orchestrator | Remove all rook related logical devices --------------------------------- 0.27s 2025-06-22 19:46:37.981136 | orchestrator | Registering Redlock._acquired_script 2025-06-22 19:46:37.981386 | orchestrator | Registering Redlock._extend_script 2025-06-22 19:46:37.981409 | orchestrator | Registering Redlock._release_script 2025-06-22 19:46:38.035355 | orchestrator | 2025-06-22 19:46:38 | INFO  | Task 06b756cb-9cfc-4062-aab8-1ca8611727b9 (facts) was prepared for execution. 2025-06-22 19:46:38.035398 | orchestrator | 2025-06-22 19:46:38 | INFO  | It takes a moment until task 06b756cb-9cfc-4062-aab8-1ca8611727b9 (facts) has been started and output is visible here. 
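
Editor's note: the wipe-partitions play above prepares the Ceph OSD disks on testbed-node-3/4/5 (/dev/sdb, /dev/sdc, /dev/sdd) before deployment. Expressed as plain shell, and going only by the task names in the recap, the per-node steps are roughly the following; the concrete flags (wipefs -a, the dd block size, the udevadm subcommands) are assumptions, since the play's module arguments are not shown in the log:

    # Approximate shell equivalent of the wipe-partitions tasks for one node.
    for dev in /dev/sdb /dev/sdc /dev/sdd; do
        wipefs -a "$dev"                           # "Wipe partitions with wipefs"
        dd if=/dev/zero of="$dev" bs=1M count=32   # "Overwrite first 32M with zeros"
    done

    udevadm control --reload-rules                 # "Reload udev rules"
    udevadm trigger                                # "Request device events from the kernel"
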
2025-06-22 19:46:42.199589 | orchestrator | 2025-06-22 19:46:42.199698 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-06-22 19:46:42.199714 | orchestrator | 2025-06-22 19:46:42.199834 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-06-22 19:46:42.200370 | orchestrator | Sunday 22 June 2025 19:46:42 +0000 (0:00:00.264) 0:00:00.264 *********** 2025-06-22 19:46:43.326934 | orchestrator | ok: [testbed-manager] 2025-06-22 19:46:43.331708 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:46:43.332673 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:46:43.334207 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:46:43.335024 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:46:43.338434 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:46:43.338466 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:46:43.338481 | orchestrator | 2025-06-22 19:46:43.338501 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-06-22 19:46:43.338920 | orchestrator | Sunday 22 June 2025 19:46:43 +0000 (0:00:01.126) 0:00:01.391 *********** 2025-06-22 19:46:43.505794 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:46:43.590331 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:46:43.675140 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:46:43.754601 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:46:43.833584 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:46:44.599541 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:46:44.602153 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:46:44.605151 | orchestrator | 2025-06-22 19:46:44.611041 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-06-22 19:46:44.611237 | orchestrator | 2025-06-22 19:46:44.611706 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-06-22 19:46:44.612358 | orchestrator | Sunday 22 June 2025 19:46:44 +0000 (0:00:01.277) 0:00:02.668 *********** 2025-06-22 19:46:49.920378 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:46:49.921684 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:46:49.924330 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:46:49.924547 | orchestrator | ok: [testbed-manager] 2025-06-22 19:46:49.925372 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:46:49.927511 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:46:49.927720 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:46:49.927931 | orchestrator | 2025-06-22 19:46:49.930487 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-06-22 19:46:49.933668 | orchestrator | 2025-06-22 19:46:49.934110 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-06-22 19:46:49.934471 | orchestrator | Sunday 22 June 2025 19:46:49 +0000 (0:00:05.322) 0:00:07.991 *********** 2025-06-22 19:46:50.083881 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:46:50.165663 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:46:50.257352 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:46:50.339090 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:46:50.418102 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:46:50.478369 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:46:50.480726 | orchestrator | skipping: 
[testbed-node-5] 2025-06-22 19:46:50.480770 | orchestrator | 2025-06-22 19:46:50.480785 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:46:50.480822 | orchestrator | 2025-06-22 19:46:50 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-22 19:46:50.480837 | orchestrator | 2025-06-22 19:46:50 | INFO  | Please wait and do not abort execution. 2025-06-22 19:46:50.481862 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:46:50.484127 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:46:50.486643 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:46:50.486687 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:46:50.486700 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:46:50.486712 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:46:50.488637 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:46:50.489385 | orchestrator | 2025-06-22 19:46:50.491453 | orchestrator | 2025-06-22 19:46:50.494893 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:46:50.495936 | orchestrator | Sunday 22 June 2025 19:46:50 +0000 (0:00:00.556) 0:00:08.547 *********** 2025-06-22 19:46:50.497713 | orchestrator | =============================================================================== 2025-06-22 19:46:50.498869 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.32s 2025-06-22 19:46:50.500032 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.28s 2025-06-22 19:46:50.501392 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.13s 2025-06-22 19:46:50.501948 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.56s 2025-06-22 19:46:52.904405 | orchestrator | 2025-06-22 19:46:52 | INFO  | Task 7afcc67a-3eb9-4889-a3ed-50e48418450e (ceph-configure-lvm-volumes) was prepared for execution. 2025-06-22 19:46:52.904561 | orchestrator | 2025-06-22 19:46:52 | INFO  | It takes a moment until task 7afcc67a-3eb9-4889-a3ed-50e48418450e (ceph-configure-lvm-volumes) has been started and output is visible here. 
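
Editor's note: the ceph-configure-lvm-volumes task whose output follows enumerates each node's block devices together with their stable /dev/disk/by-id aliases (the scsi-*QEMU_QEMU_HARDDISK_* names below), so the LVM configuration can reference disks by persistent IDs rather than by sdX names. Purely as an illustration of where those aliases come from — these commands are not part of the play:

    # Show the stable symlinks udev maintains for one disk ...
    udevadm info --query=symlink --name=/dev/sdb
    # ... or list them for all disks at once.
    ls -l /dev/disk/by-id/
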
2025-06-22 19:46:57.230265 | orchestrator | 2025-06-22 19:46:57.231731 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-06-22 19:46:57.233431 | orchestrator | 2025-06-22 19:46:57.234328 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-06-22 19:46:57.235393 | orchestrator | Sunday 22 June 2025 19:46:57 +0000 (0:00:00.336) 0:00:00.336 *********** 2025-06-22 19:46:57.486311 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-22 19:46:57.488448 | orchestrator | 2025-06-22 19:46:57.489909 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-06-22 19:46:57.490638 | orchestrator | Sunday 22 June 2025 19:46:57 +0000 (0:00:00.255) 0:00:00.591 *********** 2025-06-22 19:46:57.695926 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:46:57.696069 | orchestrator | 2025-06-22 19:46:57.696491 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:46:57.697880 | orchestrator | Sunday 22 June 2025 19:46:57 +0000 (0:00:00.212) 0:00:00.804 *********** 2025-06-22 19:46:58.045024 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-06-22 19:46:58.045622 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-06-22 19:46:58.046406 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-06-22 19:46:58.047584 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-06-22 19:46:58.047849 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-06-22 19:46:58.048650 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-06-22 19:46:58.048679 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-06-22 19:46:58.049058 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-06-22 19:46:58.049513 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-06-22 19:46:58.050147 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-06-22 19:46:58.050554 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-06-22 19:46:58.051201 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-06-22 19:46:58.051484 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-06-22 19:46:58.051960 | orchestrator | 2025-06-22 19:46:58.052526 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:46:58.052808 | orchestrator | Sunday 22 June 2025 19:46:58 +0000 (0:00:00.350) 0:00:01.154 *********** 2025-06-22 19:46:58.446207 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:46:58.446578 | orchestrator | 2025-06-22 19:46:58.447542 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:46:58.447570 | orchestrator | Sunday 22 June 2025 19:46:58 +0000 (0:00:00.401) 0:00:01.556 *********** 2025-06-22 19:46:58.607890 | orchestrator | skipping: [testbed-node-3] 2025-06-22 
19:46:58.608962 | orchestrator | 2025-06-22 19:46:58.609816 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:46:58.613553 | orchestrator | Sunday 22 June 2025 19:46:58 +0000 (0:00:00.161) 0:00:01.718 *********** 2025-06-22 19:46:58.800685 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:46:58.802137 | orchestrator | 2025-06-22 19:46:58.803430 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:46:58.804125 | orchestrator | Sunday 22 June 2025 19:46:58 +0000 (0:00:00.192) 0:00:01.910 *********** 2025-06-22 19:46:58.993084 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:46:58.996756 | orchestrator | 2025-06-22 19:46:58.997123 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:46:58.997348 | orchestrator | Sunday 22 June 2025 19:46:58 +0000 (0:00:00.189) 0:00:02.100 *********** 2025-06-22 19:46:59.240425 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:46:59.246577 | orchestrator | 2025-06-22 19:46:59.247469 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:46:59.250003 | orchestrator | Sunday 22 June 2025 19:46:59 +0000 (0:00:00.247) 0:00:02.347 *********** 2025-06-22 19:46:59.407745 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:46:59.409865 | orchestrator | 2025-06-22 19:46:59.411511 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:46:59.413192 | orchestrator | Sunday 22 June 2025 19:46:59 +0000 (0:00:00.170) 0:00:02.517 *********** 2025-06-22 19:46:59.612919 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:46:59.614578 | orchestrator | 2025-06-22 19:46:59.614611 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:46:59.615401 | orchestrator | Sunday 22 June 2025 19:46:59 +0000 (0:00:00.204) 0:00:02.722 *********** 2025-06-22 19:46:59.817734 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:46:59.819185 | orchestrator | 2025-06-22 19:46:59.821445 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:46:59.821462 | orchestrator | Sunday 22 June 2025 19:46:59 +0000 (0:00:00.204) 0:00:02.926 *********** 2025-06-22 19:47:00.271093 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_2156dda8-7e6f-4624-a0c0-e6117c9c49b9) 2025-06-22 19:47:00.271195 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_2156dda8-7e6f-4624-a0c0-e6117c9c49b9) 2025-06-22 19:47:00.271211 | orchestrator | 2025-06-22 19:47:00.271585 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:47:00.271613 | orchestrator | Sunday 22 June 2025 19:47:00 +0000 (0:00:00.452) 0:00:03.379 *********** 2025-06-22 19:47:00.726156 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_78e15a4e-0b6b-4de0-bd2a-417fc55af8a3) 2025-06-22 19:47:00.726404 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_78e15a4e-0b6b-4de0-bd2a-417fc55af8a3) 2025-06-22 19:47:00.728418 | orchestrator | 2025-06-22 19:47:00.729006 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:47:00.730115 | orchestrator | Sunday 22 June 2025 19:47:00 +0000 (0:00:00.455) 0:00:03.834 *********** 2025-06-22 
19:47:01.240579 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_0d04e2ba-3abe-44e6-a0ea-4a597e46ae81) 2025-06-22 19:47:01.241708 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_0d04e2ba-3abe-44e6-a0ea-4a597e46ae81) 2025-06-22 19:47:01.242543 | orchestrator | 2025-06-22 19:47:01.244218 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:47:01.244709 | orchestrator | Sunday 22 June 2025 19:47:01 +0000 (0:00:00.514) 0:00:04.349 *********** 2025-06-22 19:47:01.838456 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_67ec265c-9b93-46b0-85f4-348a71cc884e) 2025-06-22 19:47:01.839945 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_67ec265c-9b93-46b0-85f4-348a71cc884e) 2025-06-22 19:47:01.840380 | orchestrator | 2025-06-22 19:47:01.841124 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:47:01.842037 | orchestrator | Sunday 22 June 2025 19:47:01 +0000 (0:00:00.597) 0:00:04.946 *********** 2025-06-22 19:47:02.416506 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-06-22 19:47:02.417778 | orchestrator | 2025-06-22 19:47:02.419102 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:47:02.420032 | orchestrator | Sunday 22 June 2025 19:47:02 +0000 (0:00:00.578) 0:00:05.524 *********** 2025-06-22 19:47:02.786012 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-06-22 19:47:02.787422 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-06-22 19:47:02.788683 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-06-22 19:47:02.790012 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-06-22 19:47:02.791073 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-06-22 19:47:02.791875 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-06-22 19:47:02.792392 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-06-22 19:47:02.792953 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-06-22 19:47:02.793422 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-06-22 19:47:02.793885 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-06-22 19:47:02.794404 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-06-22 19:47:02.794917 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-06-22 19:47:02.795647 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-06-22 19:47:02.796030 | orchestrator | 2025-06-22 19:47:02.796641 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:47:02.796820 | orchestrator | Sunday 22 June 2025 19:47:02 +0000 (0:00:00.370) 0:00:05.895 *********** 2025-06-22 19:47:02.967899 | orchestrator | skipping: [testbed-node-3] 
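
The "Add known links to the list of available block devices" tasks above report, for each disk, the /dev/disk/by-id names that resolve to it (e.g. scsi-0QEMU_QEMU_HARDDISK_<uuid> for sdb/sdc/sdd and ata-QEMU_DVD-ROM_QM00001 for sr0). As a rough illustration only, not the OSISM task implementation, the sketch below shows how such a mapping from kernel device names to by-id links can be built; the path /dev/disk/by-id and the helper name are assumptions for the example.

```python
# Minimal sketch (not the OSISM task implementation): resolve /dev/disk/by-id
# symlinks to their underlying block devices, mirroring the pairs reported by
# the "Add known links to the list of available block devices" tasks above
# (e.g. scsi-0QEMU_QEMU_HARDDISK_<uuid> -> sdb).
import os
from collections import defaultdict

BY_ID = "/dev/disk/by-id"  # assumed location of the udev-managed by-id links

def links_by_device():
    """Map each kernel device name (sda, sdb, ...) to its by-id link names."""
    mapping = defaultdict(list)
    if not os.path.isdir(BY_ID):  # e.g. minimal environments without udev
        return mapping
    for name in sorted(os.listdir(BY_ID)):
        # Each entry in by-id is a symlink; realpath gives /dev/sdb, /dev/sr0, ...
        target = os.path.realpath(os.path.join(BY_ID, name))
        mapping[os.path.basename(target)].append(name)
    return mapping

if __name__ == "__main__":
    for device, links in sorted(links_by_device().items()):
        print(device, "->", ", ".join(links))
```
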
2025-06-22 19:47:02.968843 | orchestrator | 2025-06-22 19:47:02.969446 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:47:02.969920 | orchestrator | Sunday 22 June 2025 19:47:02 +0000 (0:00:00.180) 0:00:06.076 *********** 2025-06-22 19:47:03.168411 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:47:03.170135 | orchestrator | 2025-06-22 19:47:03.171566 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:47:03.171949 | orchestrator | Sunday 22 June 2025 19:47:03 +0000 (0:00:00.201) 0:00:06.277 *********** 2025-06-22 19:47:03.354874 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:47:03.354962 | orchestrator | 2025-06-22 19:47:03.355020 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:47:03.355185 | orchestrator | Sunday 22 June 2025 19:47:03 +0000 (0:00:00.186) 0:00:06.464 *********** 2025-06-22 19:47:03.582678 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:47:03.583688 | orchestrator | 2025-06-22 19:47:03.584639 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:47:03.585045 | orchestrator | Sunday 22 June 2025 19:47:03 +0000 (0:00:00.227) 0:00:06.691 *********** 2025-06-22 19:47:03.772333 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:47:03.774057 | orchestrator | 2025-06-22 19:47:03.778313 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:47:03.778342 | orchestrator | Sunday 22 June 2025 19:47:03 +0000 (0:00:00.190) 0:00:06.881 *********** 2025-06-22 19:47:03.957086 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:47:03.957158 | orchestrator | 2025-06-22 19:47:03.957432 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:47:03.958072 | orchestrator | Sunday 22 June 2025 19:47:03 +0000 (0:00:00.185) 0:00:07.066 *********** 2025-06-22 19:47:04.143140 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:47:04.143524 | orchestrator | 2025-06-22 19:47:04.146574 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:47:04.146606 | orchestrator | Sunday 22 June 2025 19:47:04 +0000 (0:00:00.184) 0:00:07.251 *********** 2025-06-22 19:47:04.320857 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:47:04.321106 | orchestrator | 2025-06-22 19:47:04.322060 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:47:04.325789 | orchestrator | Sunday 22 June 2025 19:47:04 +0000 (0:00:00.178) 0:00:07.430 *********** 2025-06-22 19:47:05.186225 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-06-22 19:47:05.187038 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-06-22 19:47:05.188312 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-06-22 19:47:05.190214 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-06-22 19:47:05.192360 | orchestrator | 2025-06-22 19:47:05.193643 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:47:05.194573 | orchestrator | Sunday 22 June 2025 19:47:05 +0000 (0:00:00.860) 0:00:08.291 *********** 2025-06-22 19:47:05.379071 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:47:05.380191 | orchestrator | 2025-06-22 19:47:05.384592 | orchestrator | 
TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:47:05.385097 | orchestrator | Sunday 22 June 2025 19:47:05 +0000 (0:00:00.196) 0:00:08.488 *********** 2025-06-22 19:47:05.563110 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:47:05.563944 | orchestrator | 2025-06-22 19:47:05.565246 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:47:05.566652 | orchestrator | Sunday 22 June 2025 19:47:05 +0000 (0:00:00.183) 0:00:08.672 *********** 2025-06-22 19:47:05.734862 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:47:05.735410 | orchestrator | 2025-06-22 19:47:05.736501 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:47:05.738103 | orchestrator | Sunday 22 June 2025 19:47:05 +0000 (0:00:00.169) 0:00:08.841 *********** 2025-06-22 19:47:05.922639 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:47:05.922853 | orchestrator | 2025-06-22 19:47:05.926416 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-06-22 19:47:05.926477 | orchestrator | Sunday 22 June 2025 19:47:05 +0000 (0:00:00.189) 0:00:09.031 *********** 2025-06-22 19:47:06.069879 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2025-06-22 19:47:06.070082 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2025-06-22 19:47:06.070103 | orchestrator | 2025-06-22 19:47:06.070117 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-06-22 19:47:06.070129 | orchestrator | Sunday 22 June 2025 19:47:06 +0000 (0:00:00.146) 0:00:09.178 *********** 2025-06-22 19:47:06.206712 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:47:06.206799 | orchestrator | 2025-06-22 19:47:06.206833 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-06-22 19:47:06.206958 | orchestrator | Sunday 22 June 2025 19:47:06 +0000 (0:00:00.133) 0:00:09.311 *********** 2025-06-22 19:47:06.335344 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:47:06.335428 | orchestrator | 2025-06-22 19:47:06.335443 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-06-22 19:47:06.336455 | orchestrator | Sunday 22 June 2025 19:47:06 +0000 (0:00:00.130) 0:00:09.442 *********** 2025-06-22 19:47:06.458636 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:47:06.459272 | orchestrator | 2025-06-22 19:47:06.461897 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-06-22 19:47:06.462223 | orchestrator | Sunday 22 June 2025 19:47:06 +0000 (0:00:00.123) 0:00:09.566 *********** 2025-06-22 19:47:06.587198 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:47:06.587397 | orchestrator | 2025-06-22 19:47:06.588328 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-06-22 19:47:06.589032 | orchestrator | Sunday 22 June 2025 19:47:06 +0000 (0:00:00.129) 0:00:09.696 *********** 2025-06-22 19:47:06.737761 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ffee4eed-4396-59ea-b922-2a73e3bf4ca0'}}) 2025-06-22 19:47:06.738829 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a67f9737-0c9f-5177-b2d5-f4c811291d8a'}}) 2025-06-22 19:47:06.739909 | orchestrator | 
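
After the UUIDs are set for the OSD VGs/LVs, the "Generate lvm_volumes structure (block only)" task turns each ceph_osd_devices entry into an lvm_volumes item; the db/wal variants are skipped in this run, consistent with no separate DB/WAL devices being configured. The naming pattern is visible in the "Print configuration data" output further below (data = "osd-block-<uuid>", data_vg = "ceph-<uuid>"). The following is a minimal sketch of that mapping in Python, not the actual Ansible/Jinja2 task; it reuses the node-3 UUIDs from this log for illustration.

```python
# Minimal sketch (not the actual Ansible/Jinja2 implementation): derive the
# block-only lvm_volumes entries from ceph_osd_devices, using the naming
# pattern shown in the "Print configuration data" task output below.
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "ffee4eed-4396-59ea-b922-2a73e3bf4ca0"},
    "sdc": {"osd_lvm_uuid": "a67f9737-0c9f-5177-b2d5-f4c811291d8a"},
}

def lvm_volumes_block_only(osd_devices):
    """Build one data LV / data VG pair per OSD device from its generated UUID."""
    return [
        {
            "data": f"osd-block-{cfg['osd_lvm_uuid']}",
            "data_vg": f"ceph-{cfg['osd_lvm_uuid']}",
        }
        for cfg in osd_devices.values()
    ]

if __name__ == "__main__":
    for volume in lvm_volumes_block_only(ceph_osd_devices):
        print(volume)
```
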
2025-06-22 19:47:06.741280 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-06-22 19:47:06.741317 | orchestrator | Sunday 22 June 2025 19:47:06 +0000 (0:00:00.150) 0:00:09.846 *********** 2025-06-22 19:47:06.868376 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ffee4eed-4396-59ea-b922-2a73e3bf4ca0'}})  2025-06-22 19:47:06.869786 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a67f9737-0c9f-5177-b2d5-f4c811291d8a'}})  2025-06-22 19:47:06.871249 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:47:06.872140 | orchestrator | 2025-06-22 19:47:06.873300 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-06-22 19:47:06.874374 | orchestrator | Sunday 22 June 2025 19:47:06 +0000 (0:00:00.130) 0:00:09.977 *********** 2025-06-22 19:47:07.146579 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ffee4eed-4396-59ea-b922-2a73e3bf4ca0'}})  2025-06-22 19:47:07.148131 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a67f9737-0c9f-5177-b2d5-f4c811291d8a'}})  2025-06-22 19:47:07.149337 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:47:07.150917 | orchestrator | 2025-06-22 19:47:07.153733 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-06-22 19:47:07.155149 | orchestrator | Sunday 22 June 2025 19:47:07 +0000 (0:00:00.278) 0:00:10.256 *********** 2025-06-22 19:47:07.297009 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ffee4eed-4396-59ea-b922-2a73e3bf4ca0'}})  2025-06-22 19:47:07.299381 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a67f9737-0c9f-5177-b2d5-f4c811291d8a'}})  2025-06-22 19:47:07.299929 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:47:07.300914 | orchestrator | 2025-06-22 19:47:07.301764 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-06-22 19:47:07.302603 | orchestrator | Sunday 22 June 2025 19:47:07 +0000 (0:00:00.149) 0:00:10.405 *********** 2025-06-22 19:47:07.418683 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:47:07.418899 | orchestrator | 2025-06-22 19:47:07.418921 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-06-22 19:47:07.420343 | orchestrator | Sunday 22 June 2025 19:47:07 +0000 (0:00:00.119) 0:00:10.525 *********** 2025-06-22 19:47:07.538153 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:47:07.538253 | orchestrator | 2025-06-22 19:47:07.538587 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-06-22 19:47:07.538846 | orchestrator | Sunday 22 June 2025 19:47:07 +0000 (0:00:00.120) 0:00:10.645 *********** 2025-06-22 19:47:07.668386 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:47:07.671499 | orchestrator | 2025-06-22 19:47:07.671532 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-06-22 19:47:07.672570 | orchestrator | Sunday 22 June 2025 19:47:07 +0000 (0:00:00.131) 0:00:10.777 *********** 2025-06-22 19:47:07.795859 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:47:07.796043 | orchestrator | 2025-06-22 19:47:07.797120 | orchestrator | TASK [Set DB+WAL devices config data] 
****************************************** 2025-06-22 19:47:07.800572 | orchestrator | Sunday 22 June 2025 19:47:07 +0000 (0:00:00.127) 0:00:10.904 *********** 2025-06-22 19:47:07.924458 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:47:07.924548 | orchestrator | 2025-06-22 19:47:07.926293 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-06-22 19:47:07.926344 | orchestrator | Sunday 22 June 2025 19:47:07 +0000 (0:00:00.126) 0:00:11.031 *********** 2025-06-22 19:47:08.028457 | orchestrator | ok: [testbed-node-3] => { 2025-06-22 19:47:08.029783 | orchestrator |  "ceph_osd_devices": { 2025-06-22 19:47:08.030886 | orchestrator |  "sdb": { 2025-06-22 19:47:08.032865 | orchestrator |  "osd_lvm_uuid": "ffee4eed-4396-59ea-b922-2a73e3bf4ca0" 2025-06-22 19:47:08.033076 | orchestrator |  }, 2025-06-22 19:47:08.033851 | orchestrator |  "sdc": { 2025-06-22 19:47:08.034773 | orchestrator |  "osd_lvm_uuid": "a67f9737-0c9f-5177-b2d5-f4c811291d8a" 2025-06-22 19:47:08.035861 | orchestrator |  } 2025-06-22 19:47:08.036841 | orchestrator |  } 2025-06-22 19:47:08.038065 | orchestrator | } 2025-06-22 19:47:08.038598 | orchestrator | 2025-06-22 19:47:08.039368 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-06-22 19:47:08.039697 | orchestrator | Sunday 22 June 2025 19:47:08 +0000 (0:00:00.106) 0:00:11.138 *********** 2025-06-22 19:47:08.157815 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:47:08.160675 | orchestrator | 2025-06-22 19:47:08.161463 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-06-22 19:47:08.162366 | orchestrator | Sunday 22 June 2025 19:47:08 +0000 (0:00:00.128) 0:00:11.267 *********** 2025-06-22 19:47:08.284821 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:47:08.286516 | orchestrator | 2025-06-22 19:47:08.287225 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-06-22 19:47:08.288762 | orchestrator | Sunday 22 June 2025 19:47:08 +0000 (0:00:00.126) 0:00:11.394 *********** 2025-06-22 19:47:08.407571 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:47:08.408125 | orchestrator | 2025-06-22 19:47:08.408622 | orchestrator | TASK [Print configuration data] ************************************************ 2025-06-22 19:47:08.409495 | orchestrator | Sunday 22 June 2025 19:47:08 +0000 (0:00:00.123) 0:00:11.517 *********** 2025-06-22 19:47:08.589743 | orchestrator | changed: [testbed-node-3] => { 2025-06-22 19:47:08.591052 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-06-22 19:47:08.592159 | orchestrator |  "ceph_osd_devices": { 2025-06-22 19:47:08.593444 | orchestrator |  "sdb": { 2025-06-22 19:47:08.594581 | orchestrator |  "osd_lvm_uuid": "ffee4eed-4396-59ea-b922-2a73e3bf4ca0" 2025-06-22 19:47:08.595431 | orchestrator |  }, 2025-06-22 19:47:08.596787 | orchestrator |  "sdc": { 2025-06-22 19:47:08.597838 | orchestrator |  "osd_lvm_uuid": "a67f9737-0c9f-5177-b2d5-f4c811291d8a" 2025-06-22 19:47:08.598965 | orchestrator |  } 2025-06-22 19:47:08.599827 | orchestrator |  }, 2025-06-22 19:47:08.600703 | orchestrator |  "lvm_volumes": [ 2025-06-22 19:47:08.601338 | orchestrator |  { 2025-06-22 19:47:08.601958 | orchestrator |  "data": "osd-block-ffee4eed-4396-59ea-b922-2a73e3bf4ca0", 2025-06-22 19:47:08.602782 | orchestrator |  "data_vg": "ceph-ffee4eed-4396-59ea-b922-2a73e3bf4ca0" 2025-06-22 19:47:08.603465 | orchestrator |  }, 2025-06-22 
19:47:08.604029 | orchestrator |  { 2025-06-22 19:47:08.604673 | orchestrator |  "data": "osd-block-a67f9737-0c9f-5177-b2d5-f4c811291d8a", 2025-06-22 19:47:08.605378 | orchestrator |  "data_vg": "ceph-a67f9737-0c9f-5177-b2d5-f4c811291d8a" 2025-06-22 19:47:08.605860 | orchestrator |  } 2025-06-22 19:47:08.606531 | orchestrator |  ] 2025-06-22 19:47:08.606960 | orchestrator |  } 2025-06-22 19:47:08.608076 | orchestrator | } 2025-06-22 19:47:08.608108 | orchestrator | 2025-06-22 19:47:08.608120 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-06-22 19:47:08.609330 | orchestrator | Sunday 22 June 2025 19:47:08 +0000 (0:00:00.180) 0:00:11.698 *********** 2025-06-22 19:47:10.519093 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-22 19:47:10.519188 | orchestrator | 2025-06-22 19:47:10.519204 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-06-22 19:47:10.519273 | orchestrator | 2025-06-22 19:47:10.519494 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-06-22 19:47:10.522590 | orchestrator | Sunday 22 June 2025 19:47:10 +0000 (0:00:01.930) 0:00:13.628 *********** 2025-06-22 19:47:10.759830 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-06-22 19:47:10.759936 | orchestrator | 2025-06-22 19:47:10.760700 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-06-22 19:47:10.761585 | orchestrator | Sunday 22 June 2025 19:47:10 +0000 (0:00:00.241) 0:00:13.869 *********** 2025-06-22 19:47:10.995646 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:47:10.995791 | orchestrator | 2025-06-22 19:47:10.996145 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:47:10.996777 | orchestrator | Sunday 22 June 2025 19:47:10 +0000 (0:00:00.236) 0:00:14.106 *********** 2025-06-22 19:47:11.402766 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-06-22 19:47:11.406583 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-06-22 19:47:11.408653 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-06-22 19:47:11.408977 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-06-22 19:47:11.410898 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-06-22 19:47:11.412738 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-06-22 19:47:11.412762 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-06-22 19:47:11.413849 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-06-22 19:47:11.414181 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-06-22 19:47:11.415845 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-06-22 19:47:11.416737 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-06-22 19:47:11.417754 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-06-22 19:47:11.418549 | orchestrator | 
included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-06-22 19:47:11.419349 | orchestrator | 2025-06-22 19:47:11.419859 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:47:11.420775 | orchestrator | Sunday 22 June 2025 19:47:11 +0000 (0:00:00.404) 0:00:14.510 *********** 2025-06-22 19:47:11.592700 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:47:11.594589 | orchestrator | 2025-06-22 19:47:11.594641 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:47:11.595354 | orchestrator | Sunday 22 June 2025 19:47:11 +0000 (0:00:00.190) 0:00:14.700 *********** 2025-06-22 19:47:11.801545 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:47:11.803181 | orchestrator | 2025-06-22 19:47:11.804472 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:47:11.805421 | orchestrator | Sunday 22 June 2025 19:47:11 +0000 (0:00:00.210) 0:00:14.911 *********** 2025-06-22 19:47:12.007571 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:47:12.008871 | orchestrator | 2025-06-22 19:47:12.010541 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:47:12.015330 | orchestrator | Sunday 22 June 2025 19:47:11 +0000 (0:00:00.204) 0:00:15.115 *********** 2025-06-22 19:47:12.213175 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:47:12.213279 | orchestrator | 2025-06-22 19:47:12.215249 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:47:12.216377 | orchestrator | Sunday 22 June 2025 19:47:12 +0000 (0:00:00.205) 0:00:15.321 *********** 2025-06-22 19:47:12.812735 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:47:12.812966 | orchestrator | 2025-06-22 19:47:12.816927 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:47:12.817387 | orchestrator | Sunday 22 June 2025 19:47:12 +0000 (0:00:00.596) 0:00:15.918 *********** 2025-06-22 19:47:13.013568 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:47:13.013699 | orchestrator | 2025-06-22 19:47:13.015604 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:47:13.016659 | orchestrator | Sunday 22 June 2025 19:47:13 +0000 (0:00:00.205) 0:00:16.123 *********** 2025-06-22 19:47:13.209498 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:47:13.209662 | orchestrator | 2025-06-22 19:47:13.212188 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:47:13.214624 | orchestrator | Sunday 22 June 2025 19:47:13 +0000 (0:00:00.195) 0:00:16.318 *********** 2025-06-22 19:47:13.410794 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:47:13.412078 | orchestrator | 2025-06-22 19:47:13.414919 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:47:13.415204 | orchestrator | Sunday 22 June 2025 19:47:13 +0000 (0:00:00.199) 0:00:16.518 *********** 2025-06-22 19:47:13.837856 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_81f2e499-4268-4bd5-a5ff-46d49ba2fab9) 2025-06-22 19:47:13.838117 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_81f2e499-4268-4bd5-a5ff-46d49ba2fab9) 2025-06-22 19:47:13.838814 | orchestrator | 2025-06-22 
19:47:13.838839 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:47:13.841742 | orchestrator | Sunday 22 June 2025 19:47:13 +0000 (0:00:00.426) 0:00:16.945 *********** 2025-06-22 19:47:14.276771 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_1702d6d9-f6d5-467e-9c44-3c93c3ac891d) 2025-06-22 19:47:14.281463 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_1702d6d9-f6d5-467e-9c44-3c93c3ac891d) 2025-06-22 19:47:14.282691 | orchestrator | 2025-06-22 19:47:14.284299 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:47:14.284772 | orchestrator | Sunday 22 June 2025 19:47:14 +0000 (0:00:00.440) 0:00:17.386 *********** 2025-06-22 19:47:14.687307 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_a49b6e77-acd0-4f36-887b-4e4ec75cdfa4) 2025-06-22 19:47:14.688466 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_a49b6e77-acd0-4f36-887b-4e4ec75cdfa4) 2025-06-22 19:47:14.690709 | orchestrator | 2025-06-22 19:47:14.691648 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:47:14.693687 | orchestrator | Sunday 22 June 2025 19:47:14 +0000 (0:00:00.407) 0:00:17.793 *********** 2025-06-22 19:47:15.148138 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_bbdef6ad-891d-4656-ac9b-bc24d19b561e) 2025-06-22 19:47:15.149899 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_bbdef6ad-891d-4656-ac9b-bc24d19b561e) 2025-06-22 19:47:15.149929 | orchestrator | 2025-06-22 19:47:15.151113 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:47:15.152119 | orchestrator | Sunday 22 June 2025 19:47:15 +0000 (0:00:00.461) 0:00:18.254 *********** 2025-06-22 19:47:15.498599 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-06-22 19:47:15.500423 | orchestrator | 2025-06-22 19:47:15.503681 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:47:15.503717 | orchestrator | Sunday 22 June 2025 19:47:15 +0000 (0:00:00.352) 0:00:18.607 *********** 2025-06-22 19:47:15.865722 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-06-22 19:47:15.866117 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-06-22 19:47:15.867376 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-06-22 19:47:15.869469 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-06-22 19:47:15.869481 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-06-22 19:47:15.870181 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-06-22 19:47:15.871149 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-06-22 19:47:15.871649 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-06-22 19:47:15.872302 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-06-22 19:47:15.872696 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-06-22 19:47:15.873483 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-06-22 19:47:15.873840 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-06-22 19:47:15.874742 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-06-22 19:47:15.875274 | orchestrator | 2025-06-22 19:47:15.875912 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:47:15.876457 | orchestrator | Sunday 22 June 2025 19:47:15 +0000 (0:00:00.362) 0:00:18.969 *********** 2025-06-22 19:47:16.058565 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:47:16.060150 | orchestrator | 2025-06-22 19:47:16.063570 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:47:16.063596 | orchestrator | Sunday 22 June 2025 19:47:16 +0000 (0:00:00.197) 0:00:19.166 *********** 2025-06-22 19:47:16.702240 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:47:16.706730 | orchestrator | 2025-06-22 19:47:16.707497 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:47:16.708699 | orchestrator | Sunday 22 June 2025 19:47:16 +0000 (0:00:00.642) 0:00:19.808 *********** 2025-06-22 19:47:16.904519 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:47:16.905263 | orchestrator | 2025-06-22 19:47:16.906217 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:47:16.907879 | orchestrator | Sunday 22 June 2025 19:47:16 +0000 (0:00:00.204) 0:00:20.013 *********** 2025-06-22 19:47:17.121370 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:47:17.121501 | orchestrator | 2025-06-22 19:47:17.123634 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:47:17.125474 | orchestrator | Sunday 22 June 2025 19:47:17 +0000 (0:00:00.215) 0:00:20.229 *********** 2025-06-22 19:47:17.322870 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:47:17.323869 | orchestrator | 2025-06-22 19:47:17.325070 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:47:17.325449 | orchestrator | Sunday 22 June 2025 19:47:17 +0000 (0:00:00.203) 0:00:20.432 *********** 2025-06-22 19:47:17.520133 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:47:17.521531 | orchestrator | 2025-06-22 19:47:17.522359 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:47:17.523719 | orchestrator | Sunday 22 June 2025 19:47:17 +0000 (0:00:00.197) 0:00:20.629 *********** 2025-06-22 19:47:17.739165 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:47:17.744288 | orchestrator | 2025-06-22 19:47:17.744976 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:47:17.746683 | orchestrator | Sunday 22 June 2025 19:47:17 +0000 (0:00:00.215) 0:00:20.845 *********** 2025-06-22 19:47:17.941750 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:47:17.942365 | orchestrator | 2025-06-22 19:47:17.943801 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:47:17.944164 | orchestrator | Sunday 22 June 2025 
19:47:17 +0000 (0:00:00.203) 0:00:21.048 *********** 2025-06-22 19:47:18.617074 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-06-22 19:47:18.618622 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-06-22 19:47:18.620107 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-06-22 19:47:18.620414 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-06-22 19:47:18.621743 | orchestrator | 2025-06-22 19:47:18.625065 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:47:18.626575 | orchestrator | Sunday 22 June 2025 19:47:18 +0000 (0:00:00.677) 0:00:21.726 *********** 2025-06-22 19:47:18.819119 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:47:18.819925 | orchestrator | 2025-06-22 19:47:18.824957 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:47:18.825907 | orchestrator | Sunday 22 June 2025 19:47:18 +0000 (0:00:00.200) 0:00:21.926 *********** 2025-06-22 19:47:19.059557 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:47:19.061572 | orchestrator | 2025-06-22 19:47:19.065308 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:47:19.067097 | orchestrator | Sunday 22 June 2025 19:47:19 +0000 (0:00:00.239) 0:00:22.166 *********** 2025-06-22 19:47:19.256189 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:47:19.256871 | orchestrator | 2025-06-22 19:47:19.261593 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:47:19.262769 | orchestrator | Sunday 22 June 2025 19:47:19 +0000 (0:00:00.197) 0:00:22.364 *********** 2025-06-22 19:47:19.468306 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:47:19.468407 | orchestrator | 2025-06-22 19:47:19.469202 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-06-22 19:47:19.469521 | orchestrator | Sunday 22 June 2025 19:47:19 +0000 (0:00:00.210) 0:00:22.574 *********** 2025-06-22 19:47:19.838118 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2025-06-22 19:47:19.838710 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2025-06-22 19:47:19.842876 | orchestrator | 2025-06-22 19:47:19.844319 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-06-22 19:47:19.846317 | orchestrator | Sunday 22 June 2025 19:47:19 +0000 (0:00:00.370) 0:00:22.944 *********** 2025-06-22 19:47:19.975362 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:47:19.978454 | orchestrator | 2025-06-22 19:47:19.979457 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-06-22 19:47:19.980609 | orchestrator | Sunday 22 June 2025 19:47:19 +0000 (0:00:00.137) 0:00:23.082 *********** 2025-06-22 19:47:20.111475 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:47:20.112660 | orchestrator | 2025-06-22 19:47:20.113727 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-06-22 19:47:20.117280 | orchestrator | Sunday 22 June 2025 19:47:20 +0000 (0:00:00.138) 0:00:23.220 *********** 2025-06-22 19:47:20.256598 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:47:20.258167 | orchestrator | 2025-06-22 19:47:20.258749 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-06-22 
19:47:20.259833 | orchestrator | Sunday 22 June 2025 19:47:20 +0000 (0:00:00.144) 0:00:23.365 *********** 2025-06-22 19:47:20.413463 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:47:20.414724 | orchestrator | 2025-06-22 19:47:20.416094 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-06-22 19:47:20.417129 | orchestrator | Sunday 22 June 2025 19:47:20 +0000 (0:00:00.156) 0:00:23.522 *********** 2025-06-22 19:47:20.587930 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '420ac1c2-ff56-5c56-8dd6-abe068aa03ad'}}) 2025-06-22 19:47:20.589721 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '21b37dc5-48e7-5a6c-9835-121dab35d047'}}) 2025-06-22 19:47:20.591467 | orchestrator | 2025-06-22 19:47:20.592924 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-06-22 19:47:20.593770 | orchestrator | Sunday 22 June 2025 19:47:20 +0000 (0:00:00.175) 0:00:23.697 *********** 2025-06-22 19:47:20.747348 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '420ac1c2-ff56-5c56-8dd6-abe068aa03ad'}})  2025-06-22 19:47:20.749129 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '21b37dc5-48e7-5a6c-9835-121dab35d047'}})  2025-06-22 19:47:20.752061 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:47:20.753534 | orchestrator | 2025-06-22 19:47:20.753756 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-06-22 19:47:20.754966 | orchestrator | Sunday 22 June 2025 19:47:20 +0000 (0:00:00.157) 0:00:23.854 *********** 2025-06-22 19:47:20.925395 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '420ac1c2-ff56-5c56-8dd6-abe068aa03ad'}})  2025-06-22 19:47:20.931309 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '21b37dc5-48e7-5a6c-9835-121dab35d047'}})  2025-06-22 19:47:20.931344 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:47:20.933675 | orchestrator | 2025-06-22 19:47:20.934187 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-06-22 19:47:20.934437 | orchestrator | Sunday 22 June 2025 19:47:20 +0000 (0:00:00.179) 0:00:24.034 *********** 2025-06-22 19:47:21.080652 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '420ac1c2-ff56-5c56-8dd6-abe068aa03ad'}})  2025-06-22 19:47:21.080850 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '21b37dc5-48e7-5a6c-9835-121dab35d047'}})  2025-06-22 19:47:21.081452 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:47:21.081724 | orchestrator | 2025-06-22 19:47:21.081984 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-06-22 19:47:21.082211 | orchestrator | Sunday 22 June 2025 19:47:21 +0000 (0:00:00.151) 0:00:24.185 *********** 2025-06-22 19:47:21.215951 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:47:21.216115 | orchestrator | 2025-06-22 19:47:21.216132 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-06-22 19:47:21.216857 | orchestrator | Sunday 22 June 2025 19:47:21 +0000 (0:00:00.135) 0:00:24.321 *********** 2025-06-22 19:47:21.356143 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:47:21.361808 
| orchestrator | 2025-06-22 19:47:21.365600 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-06-22 19:47:21.367535 | orchestrator | Sunday 22 June 2025 19:47:21 +0000 (0:00:00.143) 0:00:24.464 *********** 2025-06-22 19:47:21.501167 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:47:21.501761 | orchestrator | 2025-06-22 19:47:21.504330 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-06-22 19:47:21.504888 | orchestrator | Sunday 22 June 2025 19:47:21 +0000 (0:00:00.142) 0:00:24.607 *********** 2025-06-22 19:47:21.842387 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:47:21.842583 | orchestrator | 2025-06-22 19:47:21.842870 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-06-22 19:47:21.843257 | orchestrator | Sunday 22 June 2025 19:47:21 +0000 (0:00:00.344) 0:00:24.951 *********** 2025-06-22 19:47:21.983107 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:47:21.983205 | orchestrator | 2025-06-22 19:47:21.983957 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-06-22 19:47:21.984453 | orchestrator | Sunday 22 June 2025 19:47:21 +0000 (0:00:00.139) 0:00:25.091 *********** 2025-06-22 19:47:22.129854 | orchestrator | ok: [testbed-node-4] => { 2025-06-22 19:47:22.131898 | orchestrator |  "ceph_osd_devices": { 2025-06-22 19:47:22.133854 | orchestrator |  "sdb": { 2025-06-22 19:47:22.137936 | orchestrator |  "osd_lvm_uuid": "420ac1c2-ff56-5c56-8dd6-abe068aa03ad" 2025-06-22 19:47:22.138784 | orchestrator |  }, 2025-06-22 19:47:22.139534 | orchestrator |  "sdc": { 2025-06-22 19:47:22.140238 | orchestrator |  "osd_lvm_uuid": "21b37dc5-48e7-5a6c-9835-121dab35d047" 2025-06-22 19:47:22.144157 | orchestrator |  } 2025-06-22 19:47:22.144953 | orchestrator |  } 2025-06-22 19:47:22.145765 | orchestrator | } 2025-06-22 19:47:22.146145 | orchestrator | 2025-06-22 19:47:22.146997 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-06-22 19:47:22.147574 | orchestrator | Sunday 22 June 2025 19:47:22 +0000 (0:00:00.146) 0:00:25.238 *********** 2025-06-22 19:47:22.277519 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:47:22.277804 | orchestrator | 2025-06-22 19:47:22.278916 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-06-22 19:47:22.279484 | orchestrator | Sunday 22 June 2025 19:47:22 +0000 (0:00:00.148) 0:00:25.386 *********** 2025-06-22 19:47:22.410861 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:47:22.410970 | orchestrator | 2025-06-22 19:47:22.410991 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-06-22 19:47:22.411216 | orchestrator | Sunday 22 June 2025 19:47:22 +0000 (0:00:00.132) 0:00:25.518 *********** 2025-06-22 19:47:22.552324 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:47:22.553833 | orchestrator | 2025-06-22 19:47:22.557420 | orchestrator | TASK [Print configuration data] ************************************************ 2025-06-22 19:47:22.558495 | orchestrator | Sunday 22 June 2025 19:47:22 +0000 (0:00:00.141) 0:00:25.660 *********** 2025-06-22 19:47:22.765600 | orchestrator | changed: [testbed-node-4] => { 2025-06-22 19:47:22.768581 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-06-22 19:47:22.769860 | orchestrator |  "ceph_osd_devices": { 2025-06-22 
19:47:22.771443 | orchestrator |  "sdb": { 2025-06-22 19:47:22.773668 | orchestrator |  "osd_lvm_uuid": "420ac1c2-ff56-5c56-8dd6-abe068aa03ad" 2025-06-22 19:47:22.775697 | orchestrator |  }, 2025-06-22 19:47:22.778156 | orchestrator |  "sdc": { 2025-06-22 19:47:22.779497 | orchestrator |  "osd_lvm_uuid": "21b37dc5-48e7-5a6c-9835-121dab35d047" 2025-06-22 19:47:22.781187 | orchestrator |  } 2025-06-22 19:47:22.783759 | orchestrator |  }, 2025-06-22 19:47:22.785737 | orchestrator |  "lvm_volumes": [ 2025-06-22 19:47:22.787204 | orchestrator |  { 2025-06-22 19:47:22.788467 | orchestrator |  "data": "osd-block-420ac1c2-ff56-5c56-8dd6-abe068aa03ad", 2025-06-22 19:47:22.789884 | orchestrator |  "data_vg": "ceph-420ac1c2-ff56-5c56-8dd6-abe068aa03ad" 2025-06-22 19:47:22.790776 | orchestrator |  }, 2025-06-22 19:47:22.792103 | orchestrator |  { 2025-06-22 19:47:22.792349 | orchestrator |  "data": "osd-block-21b37dc5-48e7-5a6c-9835-121dab35d047", 2025-06-22 19:47:22.793400 | orchestrator |  "data_vg": "ceph-21b37dc5-48e7-5a6c-9835-121dab35d047" 2025-06-22 19:47:22.793913 | orchestrator |  } 2025-06-22 19:47:22.795169 | orchestrator |  ] 2025-06-22 19:47:22.796157 | orchestrator |  } 2025-06-22 19:47:22.796835 | orchestrator | } 2025-06-22 19:47:22.798706 | orchestrator | 2025-06-22 19:47:22.800504 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-06-22 19:47:22.801401 | orchestrator | Sunday 22 June 2025 19:47:22 +0000 (0:00:00.212) 0:00:25.872 *********** 2025-06-22 19:47:23.906475 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-06-22 19:47:23.914313 | orchestrator | 2025-06-22 19:47:23.914416 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-06-22 19:47:23.914492 | orchestrator | 2025-06-22 19:47:23.915704 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-06-22 19:47:23.916435 | orchestrator | Sunday 22 June 2025 19:47:23 +0000 (0:00:01.140) 0:00:27.013 *********** 2025-06-22 19:47:24.403518 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-06-22 19:47:24.405144 | orchestrator | 2025-06-22 19:47:24.409930 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-06-22 19:47:24.410549 | orchestrator | Sunday 22 June 2025 19:47:24 +0000 (0:00:00.497) 0:00:27.511 *********** 2025-06-22 19:47:25.113234 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:47:25.114122 | orchestrator | 2025-06-22 19:47:25.117071 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:47:25.119834 | orchestrator | Sunday 22 June 2025 19:47:25 +0000 (0:00:00.708) 0:00:28.220 *********** 2025-06-22 19:47:25.498479 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-06-22 19:47:25.501880 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-06-22 19:47:25.502811 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-06-22 19:47:25.504457 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-06-22 19:47:25.505445 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-06-22 19:47:25.506544 | orchestrator | included: /ansible/tasks/_add-device-links.yml for 
testbed-node-5 => (item=loop5) 2025-06-22 19:47:25.507636 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-06-22 19:47:25.509151 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-06-22 19:47:25.511546 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-06-22 19:47:25.511582 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-06-22 19:47:25.511589 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-06-22 19:47:25.512158 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-06-22 19:47:25.513058 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-06-22 19:47:25.513580 | orchestrator | 2025-06-22 19:47:25.514136 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:47:25.514845 | orchestrator | Sunday 22 June 2025 19:47:25 +0000 (0:00:00.385) 0:00:28.605 *********** 2025-06-22 19:47:25.750758 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:25.756978 | orchestrator | 2025-06-22 19:47:25.757063 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:47:25.757070 | orchestrator | Sunday 22 June 2025 19:47:25 +0000 (0:00:00.254) 0:00:28.859 *********** 2025-06-22 19:47:25.960871 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:25.961970 | orchestrator | 2025-06-22 19:47:25.963700 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:47:25.965076 | orchestrator | Sunday 22 June 2025 19:47:25 +0000 (0:00:00.209) 0:00:29.069 *********** 2025-06-22 19:47:26.166252 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:26.168480 | orchestrator | 2025-06-22 19:47:26.171564 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:47:26.171612 | orchestrator | Sunday 22 June 2025 19:47:26 +0000 (0:00:00.204) 0:00:29.274 *********** 2025-06-22 19:47:26.355103 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:26.355327 | orchestrator | 2025-06-22 19:47:26.356498 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:47:26.357555 | orchestrator | Sunday 22 June 2025 19:47:26 +0000 (0:00:00.188) 0:00:29.462 *********** 2025-06-22 19:47:26.583550 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:26.583645 | orchestrator | 2025-06-22 19:47:26.585185 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:47:26.585213 | orchestrator | Sunday 22 June 2025 19:47:26 +0000 (0:00:00.229) 0:00:29.691 *********** 2025-06-22 19:47:26.810692 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:26.812197 | orchestrator | 2025-06-22 19:47:26.815161 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:47:26.815339 | orchestrator | Sunday 22 June 2025 19:47:26 +0000 (0:00:00.225) 0:00:29.917 *********** 2025-06-22 19:47:26.992129 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:26.993858 | orchestrator | 2025-06-22 19:47:26.994676 | orchestrator | TASK [Add known links to the list of available block devices] 
****************** 2025-06-22 19:47:26.995882 | orchestrator | Sunday 22 June 2025 19:47:26 +0000 (0:00:00.183) 0:00:30.101 *********** 2025-06-22 19:47:27.191355 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:27.193395 | orchestrator | 2025-06-22 19:47:27.194719 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:47:27.195410 | orchestrator | Sunday 22 June 2025 19:47:27 +0000 (0:00:00.199) 0:00:30.300 *********** 2025-06-22 19:47:27.853926 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_abc8cf8e-645f-44ba-8ef9-2fedd7dd22d1) 2025-06-22 19:47:27.854299 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_abc8cf8e-645f-44ba-8ef9-2fedd7dd22d1) 2025-06-22 19:47:27.854803 | orchestrator | 2025-06-22 19:47:27.855092 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:47:27.855467 | orchestrator | Sunday 22 June 2025 19:47:27 +0000 (0:00:00.655) 0:00:30.956 *********** 2025-06-22 19:47:28.694552 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_b25991b3-37fd-407a-b13b-d136271ca727) 2025-06-22 19:47:28.695954 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_b25991b3-37fd-407a-b13b-d136271ca727) 2025-06-22 19:47:28.697762 | orchestrator | 2025-06-22 19:47:28.700096 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:47:28.700197 | orchestrator | Sunday 22 June 2025 19:47:28 +0000 (0:00:00.847) 0:00:31.803 *********** 2025-06-22 19:47:29.104217 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_71e43d47-057b-4609-853f-9ccf72c5a295) 2025-06-22 19:47:29.104375 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_71e43d47-057b-4609-853f-9ccf72c5a295) 2025-06-22 19:47:29.104771 | orchestrator | 2025-06-22 19:47:29.105408 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:47:29.105971 | orchestrator | Sunday 22 June 2025 19:47:29 +0000 (0:00:00.409) 0:00:32.212 *********** 2025-06-22 19:47:29.484725 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_61868cbd-84da-463e-9017-284301fda41a) 2025-06-22 19:47:29.486105 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_61868cbd-84da-463e-9017-284301fda41a) 2025-06-22 19:47:29.486798 | orchestrator | 2025-06-22 19:47:29.487389 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:47:29.487989 | orchestrator | Sunday 22 June 2025 19:47:29 +0000 (0:00:00.381) 0:00:32.594 *********** 2025-06-22 19:47:29.792912 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-06-22 19:47:29.792997 | orchestrator | 2025-06-22 19:47:29.793038 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:47:29.793059 | orchestrator | Sunday 22 June 2025 19:47:29 +0000 (0:00:00.306) 0:00:32.900 *********** 2025-06-22 19:47:30.131847 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-06-22 19:47:30.133263 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-06-22 19:47:30.135109 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-06-22 19:47:30.137063 | orchestrator 
| included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-06-22 19:47:30.137975 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-06-22 19:47:30.138925 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-06-22 19:47:30.140105 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-06-22 19:47:30.140916 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-06-22 19:47:30.141684 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-06-22 19:47:30.142172 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-06-22 19:47:30.142967 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-06-22 19:47:30.143718 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-06-22 19:47:30.144173 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-06-22 19:47:30.144947 | orchestrator | 2025-06-22 19:47:30.145518 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:47:30.146195 | orchestrator | Sunday 22 June 2025 19:47:30 +0000 (0:00:00.338) 0:00:33.238 *********** 2025-06-22 19:47:30.302241 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:30.302964 | orchestrator | 2025-06-22 19:47:30.303822 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:47:30.304489 | orchestrator | Sunday 22 June 2025 19:47:30 +0000 (0:00:00.172) 0:00:33.410 *********** 2025-06-22 19:47:30.484009 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:30.484936 | orchestrator | 2025-06-22 19:47:30.486302 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:47:30.487098 | orchestrator | Sunday 22 June 2025 19:47:30 +0000 (0:00:00.182) 0:00:33.593 *********** 2025-06-22 19:47:30.671618 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:30.672397 | orchestrator | 2025-06-22 19:47:30.673176 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:47:30.673944 | orchestrator | Sunday 22 June 2025 19:47:30 +0000 (0:00:00.187) 0:00:33.781 *********** 2025-06-22 19:47:30.858801 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:30.860584 | orchestrator | 2025-06-22 19:47:30.860619 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:47:30.861118 | orchestrator | Sunday 22 June 2025 19:47:30 +0000 (0:00:00.186) 0:00:33.968 *********** 2025-06-22 19:47:31.045202 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:31.045968 | orchestrator | 2025-06-22 19:47:31.047451 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:47:31.047796 | orchestrator | Sunday 22 June 2025 19:47:31 +0000 (0:00:00.186) 0:00:34.154 *********** 2025-06-22 19:47:31.523078 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:31.523267 | orchestrator | 2025-06-22 19:47:31.524069 | orchestrator | TASK [Add known partitions to the list of available block devices] 
************* 2025-06-22 19:47:31.525617 | orchestrator | Sunday 22 June 2025 19:47:31 +0000 (0:00:00.477) 0:00:34.632 *********** 2025-06-22 19:47:31.713848 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:31.714516 | orchestrator | 2025-06-22 19:47:31.715460 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:47:31.716922 | orchestrator | Sunday 22 June 2025 19:47:31 +0000 (0:00:00.192) 0:00:34.824 *********** 2025-06-22 19:47:31.922333 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:31.923397 | orchestrator | 2025-06-22 19:47:31.923667 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:47:31.924482 | orchestrator | Sunday 22 June 2025 19:47:31 +0000 (0:00:00.206) 0:00:35.030 *********** 2025-06-22 19:47:32.486931 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-06-22 19:47:32.487518 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-06-22 19:47:32.489084 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-06-22 19:47:32.490648 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-06-22 19:47:32.491133 | orchestrator | 2025-06-22 19:47:32.491602 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:47:32.491946 | orchestrator | Sunday 22 June 2025 19:47:32 +0000 (0:00:00.566) 0:00:35.596 *********** 2025-06-22 19:47:32.689492 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:32.690173 | orchestrator | 2025-06-22 19:47:32.691216 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:47:32.692606 | orchestrator | Sunday 22 June 2025 19:47:32 +0000 (0:00:00.202) 0:00:35.799 *********** 2025-06-22 19:47:32.882359 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:32.883467 | orchestrator | 2025-06-22 19:47:32.883599 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:47:32.884791 | orchestrator | Sunday 22 June 2025 19:47:32 +0000 (0:00:00.191) 0:00:35.990 *********** 2025-06-22 19:47:33.079661 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:33.080782 | orchestrator | 2025-06-22 19:47:33.081945 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:47:33.082324 | orchestrator | Sunday 22 June 2025 19:47:33 +0000 (0:00:00.198) 0:00:36.189 *********** 2025-06-22 19:47:33.322370 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:33.322918 | orchestrator | 2025-06-22 19:47:33.325207 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-06-22 19:47:33.325920 | orchestrator | Sunday 22 June 2025 19:47:33 +0000 (0:00:00.241) 0:00:36.431 *********** 2025-06-22 19:47:33.486341 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2025-06-22 19:47:33.488086 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2025-06-22 19:47:33.488907 | orchestrator | 2025-06-22 19:47:33.490129 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-06-22 19:47:33.491012 | orchestrator | Sunday 22 June 2025 19:47:33 +0000 (0:00:00.163) 0:00:36.594 *********** 2025-06-22 19:47:33.616205 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:33.620948 | orchestrator | 2025-06-22 19:47:33.621053 | orchestrator | TASK [Generate DB 
VG names] **************************************************** 2025-06-22 19:47:33.622589 | orchestrator | Sunday 22 June 2025 19:47:33 +0000 (0:00:00.127) 0:00:36.722 *********** 2025-06-22 19:47:33.732137 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:33.732469 | orchestrator | 2025-06-22 19:47:33.733049 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-06-22 19:47:33.733940 | orchestrator | Sunday 22 June 2025 19:47:33 +0000 (0:00:00.119) 0:00:36.841 *********** 2025-06-22 19:47:33.858210 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:33.860425 | orchestrator | 2025-06-22 19:47:33.860987 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-06-22 19:47:33.861592 | orchestrator | Sunday 22 June 2025 19:47:33 +0000 (0:00:00.125) 0:00:36.966 *********** 2025-06-22 19:47:34.198385 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:47:34.199708 | orchestrator | 2025-06-22 19:47:34.200609 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-06-22 19:47:34.204833 | orchestrator | Sunday 22 June 2025 19:47:34 +0000 (0:00:00.340) 0:00:37.307 *********** 2025-06-22 19:47:34.377126 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3108d6cc-64da-58c4-8e22-262ec3caa421'}}) 2025-06-22 19:47:34.378872 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '39fb6ae0-c3e6-59b9-8b54-9251bb7c5136'}}) 2025-06-22 19:47:34.380734 | orchestrator | 2025-06-22 19:47:34.380775 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-06-22 19:47:34.381466 | orchestrator | Sunday 22 June 2025 19:47:34 +0000 (0:00:00.177) 0:00:37.485 *********** 2025-06-22 19:47:34.538976 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3108d6cc-64da-58c4-8e22-262ec3caa421'}})  2025-06-22 19:47:34.540417 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '39fb6ae0-c3e6-59b9-8b54-9251bb7c5136'}})  2025-06-22 19:47:34.541106 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:34.542197 | orchestrator | 2025-06-22 19:47:34.542682 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-06-22 19:47:34.543202 | orchestrator | Sunday 22 June 2025 19:47:34 +0000 (0:00:00.159) 0:00:37.645 *********** 2025-06-22 19:47:34.711508 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3108d6cc-64da-58c4-8e22-262ec3caa421'}})  2025-06-22 19:47:34.712709 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '39fb6ae0-c3e6-59b9-8b54-9251bb7c5136'}})  2025-06-22 19:47:34.715378 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:34.715417 | orchestrator | 2025-06-22 19:47:34.715437 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-06-22 19:47:34.716155 | orchestrator | Sunday 22 June 2025 19:47:34 +0000 (0:00:00.174) 0:00:37.820 *********** 2025-06-22 19:47:34.879354 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3108d6cc-64da-58c4-8e22-262ec3caa421'}})  2025-06-22 19:47:34.880182 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '39fb6ae0-c3e6-59b9-8b54-9251bb7c5136'}})  2025-06-22 
19:47:34.880431 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:34.881392 | orchestrator | 2025-06-22 19:47:34.882355 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-06-22 19:47:34.882710 | orchestrator | Sunday 22 June 2025 19:47:34 +0000 (0:00:00.167) 0:00:37.987 *********** 2025-06-22 19:47:35.037963 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:47:35.039585 | orchestrator | 2025-06-22 19:47:35.040411 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-06-22 19:47:35.040903 | orchestrator | Sunday 22 June 2025 19:47:35 +0000 (0:00:00.156) 0:00:38.144 *********** 2025-06-22 19:47:35.204483 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:47:35.206600 | orchestrator | 2025-06-22 19:47:35.207696 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-06-22 19:47:35.208873 | orchestrator | Sunday 22 June 2025 19:47:35 +0000 (0:00:00.168) 0:00:38.313 *********** 2025-06-22 19:47:35.349177 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:35.350197 | orchestrator | 2025-06-22 19:47:35.351499 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-06-22 19:47:35.354386 | orchestrator | Sunday 22 June 2025 19:47:35 +0000 (0:00:00.142) 0:00:38.456 *********** 2025-06-22 19:47:35.468713 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:35.469380 | orchestrator | 2025-06-22 19:47:35.470565 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-06-22 19:47:35.471369 | orchestrator | Sunday 22 June 2025 19:47:35 +0000 (0:00:00.121) 0:00:38.577 *********** 2025-06-22 19:47:35.605868 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:35.606523 | orchestrator | 2025-06-22 19:47:35.607429 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-06-22 19:47:35.608664 | orchestrator | Sunday 22 June 2025 19:47:35 +0000 (0:00:00.136) 0:00:38.714 *********** 2025-06-22 19:47:35.760661 | orchestrator | ok: [testbed-node-5] => { 2025-06-22 19:47:35.763391 | orchestrator |  "ceph_osd_devices": { 2025-06-22 19:47:35.764396 | orchestrator |  "sdb": { 2025-06-22 19:47:35.765255 | orchestrator |  "osd_lvm_uuid": "3108d6cc-64da-58c4-8e22-262ec3caa421" 2025-06-22 19:47:35.766514 | orchestrator |  }, 2025-06-22 19:47:35.767090 | orchestrator |  "sdc": { 2025-06-22 19:47:35.767780 | orchestrator |  "osd_lvm_uuid": "39fb6ae0-c3e6-59b9-8b54-9251bb7c5136" 2025-06-22 19:47:35.768591 | orchestrator |  } 2025-06-22 19:47:35.769173 | orchestrator |  } 2025-06-22 19:47:35.770129 | orchestrator | } 2025-06-22 19:47:35.770447 | orchestrator | 2025-06-22 19:47:35.771018 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-06-22 19:47:35.771621 | orchestrator | Sunday 22 June 2025 19:47:35 +0000 (0:00:00.154) 0:00:38.868 *********** 2025-06-22 19:47:35.897523 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:35.897695 | orchestrator | 2025-06-22 19:47:35.897716 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-06-22 19:47:35.897859 | orchestrator | Sunday 22 June 2025 19:47:35 +0000 (0:00:00.136) 0:00:39.004 *********** 2025-06-22 19:47:36.258561 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:36.259400 | orchestrator | 2025-06-22 19:47:36.261246 | orchestrator | 
TASK [Print shared DB/WAL devices] ********************************************* 2025-06-22 19:47:36.262236 | orchestrator | Sunday 22 June 2025 19:47:36 +0000 (0:00:00.360) 0:00:39.364 *********** 2025-06-22 19:47:36.396665 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:47:36.397728 | orchestrator | 2025-06-22 19:47:36.399077 | orchestrator | TASK [Print configuration data] ************************************************ 2025-06-22 19:47:36.400168 | orchestrator | Sunday 22 June 2025 19:47:36 +0000 (0:00:00.140) 0:00:39.505 *********** 2025-06-22 19:47:36.602841 | orchestrator | changed: [testbed-node-5] => { 2025-06-22 19:47:36.603984 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-06-22 19:47:36.604981 | orchestrator |  "ceph_osd_devices": { 2025-06-22 19:47:36.606171 | orchestrator |  "sdb": { 2025-06-22 19:47:36.607248 | orchestrator |  "osd_lvm_uuid": "3108d6cc-64da-58c4-8e22-262ec3caa421" 2025-06-22 19:47:36.608420 | orchestrator |  }, 2025-06-22 19:47:36.609626 | orchestrator |  "sdc": { 2025-06-22 19:47:36.609771 | orchestrator |  "osd_lvm_uuid": "39fb6ae0-c3e6-59b9-8b54-9251bb7c5136" 2025-06-22 19:47:36.610512 | orchestrator |  } 2025-06-22 19:47:36.610756 | orchestrator |  }, 2025-06-22 19:47:36.611460 | orchestrator |  "lvm_volumes": [ 2025-06-22 19:47:36.611887 | orchestrator |  { 2025-06-22 19:47:36.612421 | orchestrator |  "data": "osd-block-3108d6cc-64da-58c4-8e22-262ec3caa421", 2025-06-22 19:47:36.612958 | orchestrator |  "data_vg": "ceph-3108d6cc-64da-58c4-8e22-262ec3caa421" 2025-06-22 19:47:36.613423 | orchestrator |  }, 2025-06-22 19:47:36.613867 | orchestrator |  { 2025-06-22 19:47:36.614290 | orchestrator |  "data": "osd-block-39fb6ae0-c3e6-59b9-8b54-9251bb7c5136", 2025-06-22 19:47:36.614782 | orchestrator |  "data_vg": "ceph-39fb6ae0-c3e6-59b9-8b54-9251bb7c5136" 2025-06-22 19:47:36.615074 | orchestrator |  } 2025-06-22 19:47:36.615509 | orchestrator |  ] 2025-06-22 19:47:36.615864 | orchestrator |  } 2025-06-22 19:47:36.616256 | orchestrator | } 2025-06-22 19:47:36.616699 | orchestrator | 2025-06-22 19:47:36.617174 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-06-22 19:47:36.617444 | orchestrator | Sunday 22 June 2025 19:47:36 +0000 (0:00:00.205) 0:00:39.711 *********** 2025-06-22 19:47:37.612307 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-06-22 19:47:37.612658 | orchestrator | 2025-06-22 19:47:37.615145 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:47:37.615205 | orchestrator | 2025-06-22 19:47:37 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-22 19:47:37.615221 | orchestrator | 2025-06-22 19:47:37 | INFO  | Please wait and do not abort execution. 
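The configuration data printed above for testbed-node-5 is what the "Write configuration file" handler persists on the manager: each device listed in ceph_osd_devices carries a generated osd_lvm_uuid, and the derived lvm_volumes entries name a logical volume osd-block-<uuid> inside a volume group ceph-<uuid>. All four UUIDs in this log carry the version-5 marker, which suggests they are derived deterministically from host/device names so that reruns keep the same VG/LV names; the exact derivation is not visible in this output. As a minimal sketch of the written variables (file name, path, and surrounding layout are assumptions; the values are the ones printed above):

  # Sketch of the host vars written for testbed-node-5; values are taken from
  # the log above, the file location is assumed.
  ceph_osd_devices:
    sdb:
      osd_lvm_uuid: 3108d6cc-64da-58c4-8e22-262ec3caa421
    sdc:
      osd_lvm_uuid: 39fb6ae0-c3e6-59b9-8b54-9251bb7c5136
  lvm_volumes:
    - data: osd-block-3108d6cc-64da-58c4-8e22-262ec3caa421
      data_vg: ceph-3108d6cc-64da-58c4-8e22-262ec3caa421
    - data: osd-block-39fb6ae0-c3e6-59b9-8b54-9251bb7c5136
      data_vg: ceph-39fb6ae0-c3e6-59b9-8b54-9251bb7c5136

This lvm_volumes shape (data/data_vg pairs, no db or wal entries) is the block-only layout that the ceph-create-lvm-devices play further down turns into actual LVM objects.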
2025-06-22 19:47:37.615820 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-06-22 19:47:37.617079 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-06-22 19:47:37.618101 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-06-22 19:47:37.618704 | orchestrator | 2025-06-22 19:47:37.619769 | orchestrator | 2025-06-22 19:47:37.621599 | orchestrator | 2025-06-22 19:47:37.622216 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:47:37.623073 | orchestrator | Sunday 22 June 2025 19:47:37 +0000 (0:00:01.009) 0:00:40.720 *********** 2025-06-22 19:47:37.624082 | orchestrator | =============================================================================== 2025-06-22 19:47:37.624805 | orchestrator | Write configuration file ------------------------------------------------ 4.08s 2025-06-22 19:47:37.625442 | orchestrator | Get initial list of available block devices ----------------------------- 1.16s 2025-06-22 19:47:37.626233 | orchestrator | Add known links to the list of available block devices ------------------ 1.14s 2025-06-22 19:47:37.626718 | orchestrator | Add known partitions to the list of available block devices ------------- 1.07s 2025-06-22 19:47:37.627289 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.99s 2025-06-22 19:47:37.627852 | orchestrator | Add known partitions to the list of available block devices ------------- 0.86s 2025-06-22 19:47:37.628227 | orchestrator | Add known links to the list of available block devices ------------------ 0.85s 2025-06-22 19:47:37.628802 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.68s 2025-06-22 19:47:37.629294 | orchestrator | Add known partitions to the list of available block devices ------------- 0.68s 2025-06-22 19:47:37.629654 | orchestrator | Add known links to the list of available block devices ------------------ 0.66s 2025-06-22 19:47:37.630265 | orchestrator | Add known partitions to the list of available block devices ------------- 0.64s 2025-06-22 19:47:37.630737 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.63s 2025-06-22 19:47:37.631319 | orchestrator | Define lvm_volumes structures ------------------------------------------- 0.63s 2025-06-22 19:47:37.631662 | orchestrator | Print DB devices -------------------------------------------------------- 0.62s 2025-06-22 19:47:37.632086 | orchestrator | Print configuration data ------------------------------------------------ 0.60s 2025-06-22 19:47:37.632533 | orchestrator | Add known links to the list of available block devices ------------------ 0.60s 2025-06-22 19:47:37.633388 | orchestrator | Add known links to the list of available block devices ------------------ 0.60s 2025-06-22 19:47:37.633750 | orchestrator | Set WAL devices config data --------------------------------------------- 0.59s 2025-06-22 19:47:37.634252 | orchestrator | Add known links to the list of available block devices ------------------ 0.58s 2025-06-22 19:47:37.634433 | orchestrator | Add known partitions to the list of available block devices ------------- 0.57s 2025-06-22 19:47:49.842968 | orchestrator | Registering Redlock._acquired_script 2025-06-22 19:47:49.843093 | orchestrator | Registering Redlock._extend_script 2025-06-22 
19:47:49.843110 | orchestrator | Registering Redlock._release_script 2025-06-22 19:47:49.892019 | orchestrator | 2025-06-22 19:47:49 | INFO  | Task c136216f-f29f-43a9-91cb-cde29ef6fe9a (sync inventory) is running in background. Output coming soon. 2025-06-22 19:48:07.934396 | orchestrator | 2025-06-22 19:47:51 | INFO  | Starting group_vars file reorganization 2025-06-22 19:48:07.934513 | orchestrator | 2025-06-22 19:47:51 | INFO  | Moved 0 file(s) to their respective directories 2025-06-22 19:48:07.934540 | orchestrator | 2025-06-22 19:47:51 | INFO  | Group_vars file reorganization completed 2025-06-22 19:48:07.934563 | orchestrator | 2025-06-22 19:47:52 | INFO  | Starting variable preparation from inventory 2025-06-22 19:48:07.934583 | orchestrator | 2025-06-22 19:47:54 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts 2025-06-22 19:48:07.934602 | orchestrator | 2025-06-22 19:47:54 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons 2025-06-22 19:48:07.934635 | orchestrator | 2025-06-22 19:47:54 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid 2025-06-22 19:48:07.934647 | orchestrator | 2025-06-22 19:47:54 | INFO  | 3 file(s) written, 6 host(s) processed 2025-06-22 19:48:07.934658 | orchestrator | 2025-06-22 19:47:54 | INFO  | Variable preparation completed: 2025-06-22 19:48:07.934669 | orchestrator | 2025-06-22 19:47:55 | INFO  | Starting inventory overwrite handling 2025-06-22 19:48:07.934679 | orchestrator | 2025-06-22 19:47:55 | INFO  | Handling group overwrites in 99-overwrite 2025-06-22 19:48:07.934690 | orchestrator | 2025-06-22 19:47:55 | INFO  | Removing group frr:children from 60-generic 2025-06-22 19:48:07.934701 | orchestrator | 2025-06-22 19:47:55 | INFO  | Removing group storage:children from 50-kolla 2025-06-22 19:48:07.934711 | orchestrator | 2025-06-22 19:47:55 | INFO  | Removing group netbird:children from 50-infrastruture 2025-06-22 19:48:07.934731 | orchestrator | 2025-06-22 19:47:55 | INFO  | Removing group ceph-rgw from 50-ceph 2025-06-22 19:48:07.934742 | orchestrator | 2025-06-22 19:47:55 | INFO  | Removing group ceph-mds from 50-ceph 2025-06-22 19:48:07.934753 | orchestrator | 2025-06-22 19:47:55 | INFO  | Handling group overwrites in 20-roles 2025-06-22 19:48:07.934764 | orchestrator | 2025-06-22 19:47:55 | INFO  | Removing group k3s_node from 50-infrastruture 2025-06-22 19:48:07.934774 | orchestrator | 2025-06-22 19:47:55 | INFO  | Removed 6 group(s) in total 2025-06-22 19:48:07.934785 | orchestrator | 2025-06-22 19:47:55 | INFO  | Inventory overwrite handling completed 2025-06-22 19:48:07.934796 | orchestrator | 2025-06-22 19:47:56 | INFO  | Starting merge of inventory files 2025-06-22 19:48:07.934806 | orchestrator | 2025-06-22 19:47:56 | INFO  | Inventory files merged successfully 2025-06-22 19:48:07.934817 | orchestrator | 2025-06-22 19:48:00 | INFO  | Generating ClusterShell configuration from Ansible inventory 2025-06-22 19:48:07.934828 | orchestrator | 2025-06-22 19:48:06 | INFO  | Successfully wrote ClusterShell configuration 2025-06-22 19:48:07.934838 | orchestrator | [master 0db114a] 2025-06-22-19-48 2025-06-22 19:48:07.934850 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-) 2025-06-22 19:48:09.720610 | orchestrator | 2025-06-22 19:48:09 | INFO  | Task 48310f5c-35aa-4466-97d9-87d124ca2771 (ceph-create-lvm-devices) was prepared for execution. 
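The ceph-create-lvm-devices play that starts below consumes the lvm_volumes list prepared in the previous play and materialises it on each node as LVM volume groups and logical volumes (the "Create block VGs" and "Create block LVs" tasks). The playbook source is not part of this log; per device, the work roughly corresponds to the following sketch, where the module choice, parameters, and the bare /dev/sdb physical volume are illustrative assumptions (the names match testbed-node-3's sdb entry from the log):

  # Illustrative sketch only: one VG per OSD device plus a single block LV
  # spanning it, following the ceph-<uuid>/osd-block-<uuid> naming seen in
  # the log. The real tasks may reference the device via the /dev/disk/by-id
  # links collected by the "Add known links" tasks.
  - name: Create block VG for one OSD device
    community.general.lvg:
      vg: ceph-ffee4eed-4396-59ea-b922-2a73e3bf4ca0
      pvs: /dev/sdb

  - name: Create block LV inside that VG
    community.general.lvol:
      vg: ceph-ffee4eed-4396-59ea-b922-2a73e3bf4ca0
      lv: osd-block-ffee4eed-4396-59ea-b922-2a73e3bf4ca0
      size: 100%FREE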
2025-06-22 19:48:09.720693 | orchestrator | 2025-06-22 19:48:09 | INFO  | It takes a moment until task 48310f5c-35aa-4466-97d9-87d124ca2771 (ceph-create-lvm-devices) has been started and output is visible here. 2025-06-22 19:48:14.603540 | orchestrator | 2025-06-22 19:48:14.604766 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-06-22 19:48:14.605512 | orchestrator | 2025-06-22 19:48:14.605739 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-06-22 19:48:14.606105 | orchestrator | Sunday 22 June 2025 19:48:14 +0000 (0:00:00.283) 0:00:00.283 *********** 2025-06-22 19:48:14.846176 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-22 19:48:14.846812 | orchestrator | 2025-06-22 19:48:14.847563 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-06-22 19:48:14.848387 | orchestrator | Sunday 22 June 2025 19:48:14 +0000 (0:00:00.245) 0:00:00.528 *********** 2025-06-22 19:48:15.080018 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:48:15.082950 | orchestrator | 2025-06-22 19:48:15.082984 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:48:15.082998 | orchestrator | Sunday 22 June 2025 19:48:15 +0000 (0:00:00.233) 0:00:00.762 *********** 2025-06-22 19:48:15.443461 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-06-22 19:48:15.444552 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-06-22 19:48:15.445338 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-06-22 19:48:15.445877 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-06-22 19:48:15.446270 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-06-22 19:48:15.447491 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-06-22 19:48:15.453723 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-06-22 19:48:15.453751 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-06-22 19:48:15.453763 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-06-22 19:48:15.453774 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-06-22 19:48:15.453785 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-06-22 19:48:15.453796 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-06-22 19:48:15.453808 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-06-22 19:48:15.453819 | orchestrator | 2025-06-22 19:48:15.454810 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:48:15.455728 | orchestrator | Sunday 22 June 2025 19:48:15 +0000 (0:00:00.362) 0:00:01.125 *********** 2025-06-22 19:48:15.828830 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:48:15.829740 | orchestrator | 2025-06-22 19:48:15.830733 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 
2025-06-22 19:48:15.831497 | orchestrator | Sunday 22 June 2025 19:48:15 +0000 (0:00:00.384) 0:00:01.510 *********** 2025-06-22 19:48:16.026855 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:48:16.028200 | orchestrator | 2025-06-22 19:48:16.029113 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:48:16.029135 | orchestrator | Sunday 22 June 2025 19:48:16 +0000 (0:00:00.198) 0:00:01.708 *********** 2025-06-22 19:48:16.206149 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:48:16.207593 | orchestrator | 2025-06-22 19:48:16.208055 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:48:16.208892 | orchestrator | Sunday 22 June 2025 19:48:16 +0000 (0:00:00.179) 0:00:01.888 *********** 2025-06-22 19:48:16.383360 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:48:16.383859 | orchestrator | 2025-06-22 19:48:16.385451 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:48:16.386790 | orchestrator | Sunday 22 June 2025 19:48:16 +0000 (0:00:00.177) 0:00:02.065 *********** 2025-06-22 19:48:16.575185 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:48:16.575984 | orchestrator | 2025-06-22 19:48:16.577954 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:48:16.578475 | orchestrator | Sunday 22 June 2025 19:48:16 +0000 (0:00:00.191) 0:00:02.256 *********** 2025-06-22 19:48:16.803141 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:48:16.803275 | orchestrator | 2025-06-22 19:48:16.804171 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:48:16.805222 | orchestrator | Sunday 22 June 2025 19:48:16 +0000 (0:00:00.226) 0:00:02.483 *********** 2025-06-22 19:48:16.989252 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:48:16.990822 | orchestrator | 2025-06-22 19:48:16.990963 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:48:16.990984 | orchestrator | Sunday 22 June 2025 19:48:16 +0000 (0:00:00.189) 0:00:02.672 *********** 2025-06-22 19:48:17.186375 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:48:17.186893 | orchestrator | 2025-06-22 19:48:17.188200 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:48:17.188946 | orchestrator | Sunday 22 June 2025 19:48:17 +0000 (0:00:00.195) 0:00:02.868 *********** 2025-06-22 19:48:17.565448 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_2156dda8-7e6f-4624-a0c0-e6117c9c49b9) 2025-06-22 19:48:17.566162 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_2156dda8-7e6f-4624-a0c0-e6117c9c49b9) 2025-06-22 19:48:17.567347 | orchestrator | 2025-06-22 19:48:17.568662 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:48:17.569558 | orchestrator | Sunday 22 June 2025 19:48:17 +0000 (0:00:00.378) 0:00:03.246 *********** 2025-06-22 19:48:17.954426 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_78e15a4e-0b6b-4de0-bd2a-417fc55af8a3) 2025-06-22 19:48:17.954604 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_78e15a4e-0b6b-4de0-bd2a-417fc55af8a3) 2025-06-22 19:48:17.955575 | orchestrator | 2025-06-22 19:48:17.956484 | orchestrator | TASK [Add known 
links to the list of available block devices] ****************** 2025-06-22 19:48:17.957202 | orchestrator | Sunday 22 June 2025 19:48:17 +0000 (0:00:00.387) 0:00:03.634 *********** 2025-06-22 19:48:18.487928 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_0d04e2ba-3abe-44e6-a0ea-4a597e46ae81) 2025-06-22 19:48:18.488665 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_0d04e2ba-3abe-44e6-a0ea-4a597e46ae81) 2025-06-22 19:48:18.489610 | orchestrator | 2025-06-22 19:48:18.490603 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:48:18.491523 | orchestrator | Sunday 22 June 2025 19:48:18 +0000 (0:00:00.534) 0:00:04.169 *********** 2025-06-22 19:48:19.054703 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_67ec265c-9b93-46b0-85f4-348a71cc884e) 2025-06-22 19:48:19.055452 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_67ec265c-9b93-46b0-85f4-348a71cc884e) 2025-06-22 19:48:19.057558 | orchestrator | 2025-06-22 19:48:19.060063 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:48:19.061872 | orchestrator | Sunday 22 June 2025 19:48:19 +0000 (0:00:00.564) 0:00:04.734 *********** 2025-06-22 19:48:19.771752 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-06-22 19:48:19.772843 | orchestrator | 2025-06-22 19:48:19.774375 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:48:19.775260 | orchestrator | Sunday 22 June 2025 19:48:19 +0000 (0:00:00.718) 0:00:05.452 *********** 2025-06-22 19:48:20.216349 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-06-22 19:48:20.216876 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-06-22 19:48:20.217979 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-06-22 19:48:20.219074 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-06-22 19:48:20.219762 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-06-22 19:48:20.220463 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-06-22 19:48:20.220950 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-06-22 19:48:20.221658 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-06-22 19:48:20.222328 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-06-22 19:48:20.222720 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-06-22 19:48:20.223273 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-06-22 19:48:20.223756 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-06-22 19:48:20.224300 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-06-22 19:48:20.224756 | orchestrator | 2025-06-22 19:48:20.225161 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 
2025-06-22 19:48:20.225638 | orchestrator | Sunday 22 June 2025 19:48:20 +0000 (0:00:00.445) 0:00:05.898 *********** 2025-06-22 19:48:20.403550 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:48:20.403751 | orchestrator | 2025-06-22 19:48:20.404857 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:48:20.405601 | orchestrator | Sunday 22 June 2025 19:48:20 +0000 (0:00:00.188) 0:00:06.086 *********** 2025-06-22 19:48:20.597442 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:48:20.597663 | orchestrator | 2025-06-22 19:48:20.598563 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:48:20.599344 | orchestrator | Sunday 22 June 2025 19:48:20 +0000 (0:00:00.191) 0:00:06.277 *********** 2025-06-22 19:48:20.781512 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:48:20.782366 | orchestrator | 2025-06-22 19:48:20.783195 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:48:20.784570 | orchestrator | Sunday 22 June 2025 19:48:20 +0000 (0:00:00.184) 0:00:06.461 *********** 2025-06-22 19:48:20.961969 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:48:20.962171 | orchestrator | 2025-06-22 19:48:20.962355 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:48:20.963259 | orchestrator | Sunday 22 June 2025 19:48:20 +0000 (0:00:00.182) 0:00:06.644 *********** 2025-06-22 19:48:21.150907 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:48:21.151535 | orchestrator | 2025-06-22 19:48:21.153032 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:48:21.153915 | orchestrator | Sunday 22 June 2025 19:48:21 +0000 (0:00:00.188) 0:00:06.833 *********** 2025-06-22 19:48:21.329196 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:48:21.330588 | orchestrator | 2025-06-22 19:48:21.331314 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:48:21.331455 | orchestrator | Sunday 22 June 2025 19:48:21 +0000 (0:00:00.178) 0:00:07.011 *********** 2025-06-22 19:48:21.511475 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:48:21.511554 | orchestrator | 2025-06-22 19:48:21.512154 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:48:21.512602 | orchestrator | Sunday 22 June 2025 19:48:21 +0000 (0:00:00.181) 0:00:07.192 *********** 2025-06-22 19:48:21.687972 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:48:21.688695 | orchestrator | 2025-06-22 19:48:21.689275 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:48:21.690073 | orchestrator | Sunday 22 June 2025 19:48:21 +0000 (0:00:00.177) 0:00:07.370 *********** 2025-06-22 19:48:22.599768 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-06-22 19:48:22.600793 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-06-22 19:48:22.602462 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-06-22 19:48:22.603155 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-06-22 19:48:22.603788 | orchestrator | 2025-06-22 19:48:22.604287 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:48:22.604976 | orchestrator | Sunday 22 June 2025 19:48:22 +0000 
(0:00:00.910) 0:00:08.281 *********** 2025-06-22 19:48:22.810751 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:48:22.811197 | orchestrator | 2025-06-22 19:48:22.812311 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:48:22.813417 | orchestrator | Sunday 22 June 2025 19:48:22 +0000 (0:00:00.210) 0:00:08.491 *********** 2025-06-22 19:48:22.995648 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:48:22.995748 | orchestrator | 2025-06-22 19:48:22.996082 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:48:22.996891 | orchestrator | Sunday 22 June 2025 19:48:22 +0000 (0:00:00.184) 0:00:08.676 *********** 2025-06-22 19:48:23.173215 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:48:23.173386 | orchestrator | 2025-06-22 19:48:23.174215 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:48:23.174990 | orchestrator | Sunday 22 June 2025 19:48:23 +0000 (0:00:00.178) 0:00:08.854 *********** 2025-06-22 19:48:23.334136 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:48:23.334612 | orchestrator | 2025-06-22 19:48:23.335881 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-06-22 19:48:23.336866 | orchestrator | Sunday 22 June 2025 19:48:23 +0000 (0:00:00.161) 0:00:09.016 *********** 2025-06-22 19:48:23.470467 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:48:23.470939 | orchestrator | 2025-06-22 19:48:23.471793 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-06-22 19:48:23.472144 | orchestrator | Sunday 22 June 2025 19:48:23 +0000 (0:00:00.134) 0:00:09.150 *********** 2025-06-22 19:48:23.636708 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'ffee4eed-4396-59ea-b922-2a73e3bf4ca0'}}) 2025-06-22 19:48:23.637392 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a67f9737-0c9f-5177-b2d5-f4c811291d8a'}}) 2025-06-22 19:48:23.638949 | orchestrator | 2025-06-22 19:48:23.639410 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-06-22 19:48:23.639703 | orchestrator | Sunday 22 June 2025 19:48:23 +0000 (0:00:00.168) 0:00:09.319 *********** 2025-06-22 19:48:25.559312 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-ffee4eed-4396-59ea-b922-2a73e3bf4ca0', 'data_vg': 'ceph-ffee4eed-4396-59ea-b922-2a73e3bf4ca0'}) 2025-06-22 19:48:25.560428 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-a67f9737-0c9f-5177-b2d5-f4c811291d8a', 'data_vg': 'ceph-a67f9737-0c9f-5177-b2d5-f4c811291d8a'}) 2025-06-22 19:48:25.561305 | orchestrator | 2025-06-22 19:48:25.562480 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-06-22 19:48:25.563410 | orchestrator | Sunday 22 June 2025 19:48:25 +0000 (0:00:01.922) 0:00:11.241 *********** 2025-06-22 19:48:25.706251 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ffee4eed-4396-59ea-b922-2a73e3bf4ca0', 'data_vg': 'ceph-ffee4eed-4396-59ea-b922-2a73e3bf4ca0'})  2025-06-22 19:48:25.706323 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a67f9737-0c9f-5177-b2d5-f4c811291d8a', 'data_vg': 'ceph-a67f9737-0c9f-5177-b2d5-f4c811291d8a'})  2025-06-22 19:48:25.707120 | orchestrator | skipping: 
[testbed-node-3] 2025-06-22 19:48:25.707289 | orchestrator | 2025-06-22 19:48:25.708404 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-06-22 19:48:25.708430 | orchestrator | Sunday 22 June 2025 19:48:25 +0000 (0:00:00.147) 0:00:11.388 *********** 2025-06-22 19:48:27.129806 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-ffee4eed-4396-59ea-b922-2a73e3bf4ca0', 'data_vg': 'ceph-ffee4eed-4396-59ea-b922-2a73e3bf4ca0'}) 2025-06-22 19:48:27.129878 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-a67f9737-0c9f-5177-b2d5-f4c811291d8a', 'data_vg': 'ceph-a67f9737-0c9f-5177-b2d5-f4c811291d8a'}) 2025-06-22 19:48:27.130746 | orchestrator | 2025-06-22 19:48:27.132257 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-06-22 19:48:27.133003 | orchestrator | Sunday 22 June 2025 19:48:27 +0000 (0:00:01.421) 0:00:12.810 *********** 2025-06-22 19:48:27.267463 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ffee4eed-4396-59ea-b922-2a73e3bf4ca0', 'data_vg': 'ceph-ffee4eed-4396-59ea-b922-2a73e3bf4ca0'})  2025-06-22 19:48:27.268582 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a67f9737-0c9f-5177-b2d5-f4c811291d8a', 'data_vg': 'ceph-a67f9737-0c9f-5177-b2d5-f4c811291d8a'})  2025-06-22 19:48:27.269532 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:48:27.270421 | orchestrator | 2025-06-22 19:48:27.271384 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-06-22 19:48:27.272532 | orchestrator | Sunday 22 June 2025 19:48:27 +0000 (0:00:00.140) 0:00:12.950 *********** 2025-06-22 19:48:27.408166 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:48:27.408282 | orchestrator | 2025-06-22 19:48:27.408357 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-06-22 19:48:27.408588 | orchestrator | Sunday 22 June 2025 19:48:27 +0000 (0:00:00.140) 0:00:13.090 *********** 2025-06-22 19:48:27.699832 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ffee4eed-4396-59ea-b922-2a73e3bf4ca0', 'data_vg': 'ceph-ffee4eed-4396-59ea-b922-2a73e3bf4ca0'})  2025-06-22 19:48:27.700087 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a67f9737-0c9f-5177-b2d5-f4c811291d8a', 'data_vg': 'ceph-a67f9737-0c9f-5177-b2d5-f4c811291d8a'})  2025-06-22 19:48:27.701518 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:48:27.702147 | orchestrator | 2025-06-22 19:48:27.702616 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-06-22 19:48:27.703134 | orchestrator | Sunday 22 June 2025 19:48:27 +0000 (0:00:00.289) 0:00:13.380 *********** 2025-06-22 19:48:27.832464 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:48:27.833494 | orchestrator | 2025-06-22 19:48:27.835285 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-06-22 19:48:27.835313 | orchestrator | Sunday 22 June 2025 19:48:27 +0000 (0:00:00.134) 0:00:13.515 *********** 2025-06-22 19:48:27.971030 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ffee4eed-4396-59ea-b922-2a73e3bf4ca0', 'data_vg': 'ceph-ffee4eed-4396-59ea-b922-2a73e3bf4ca0'})  2025-06-22 19:48:27.971155 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a67f9737-0c9f-5177-b2d5-f4c811291d8a', 'data_vg': 
'ceph-a67f9737-0c9f-5177-b2d5-f4c811291d8a'})  2025-06-22 19:48:27.971762 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:48:27.972885 | orchestrator | 2025-06-22 19:48:27.973727 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-06-22 19:48:27.973918 | orchestrator | Sunday 22 June 2025 19:48:27 +0000 (0:00:00.137) 0:00:13.652 *********** 2025-06-22 19:48:28.095478 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:48:28.096817 | orchestrator | 2025-06-22 19:48:28.098519 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-06-22 19:48:28.098893 | orchestrator | Sunday 22 June 2025 19:48:28 +0000 (0:00:00.124) 0:00:13.777 *********** 2025-06-22 19:48:28.236238 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ffee4eed-4396-59ea-b922-2a73e3bf4ca0', 'data_vg': 'ceph-ffee4eed-4396-59ea-b922-2a73e3bf4ca0'})  2025-06-22 19:48:28.236896 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a67f9737-0c9f-5177-b2d5-f4c811291d8a', 'data_vg': 'ceph-a67f9737-0c9f-5177-b2d5-f4c811291d8a'})  2025-06-22 19:48:28.238243 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:48:28.239187 | orchestrator | 2025-06-22 19:48:28.240183 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-06-22 19:48:28.240923 | orchestrator | Sunday 22 June 2025 19:48:28 +0000 (0:00:00.140) 0:00:13.918 *********** 2025-06-22 19:48:28.378296 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:48:28.379778 | orchestrator | 2025-06-22 19:48:28.379822 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-06-22 19:48:28.380558 | orchestrator | Sunday 22 June 2025 19:48:28 +0000 (0:00:00.141) 0:00:14.060 *********** 2025-06-22 19:48:28.528384 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ffee4eed-4396-59ea-b922-2a73e3bf4ca0', 'data_vg': 'ceph-ffee4eed-4396-59ea-b922-2a73e3bf4ca0'})  2025-06-22 19:48:28.529190 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a67f9737-0c9f-5177-b2d5-f4c811291d8a', 'data_vg': 'ceph-a67f9737-0c9f-5177-b2d5-f4c811291d8a'})  2025-06-22 19:48:28.530621 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:48:28.531992 | orchestrator | 2025-06-22 19:48:28.532967 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-06-22 19:48:28.533674 | orchestrator | Sunday 22 June 2025 19:48:28 +0000 (0:00:00.150) 0:00:14.210 *********** 2025-06-22 19:48:28.674065 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ffee4eed-4396-59ea-b922-2a73e3bf4ca0', 'data_vg': 'ceph-ffee4eed-4396-59ea-b922-2a73e3bf4ca0'})  2025-06-22 19:48:28.674324 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a67f9737-0c9f-5177-b2d5-f4c811291d8a', 'data_vg': 'ceph-a67f9737-0c9f-5177-b2d5-f4c811291d8a'})  2025-06-22 19:48:28.675689 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:48:28.677140 | orchestrator | 2025-06-22 19:48:28.678233 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-06-22 19:48:28.679283 | orchestrator | Sunday 22 June 2025 19:48:28 +0000 (0:00:00.143) 0:00:14.354 *********** 2025-06-22 19:48:28.813988 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ffee4eed-4396-59ea-b922-2a73e3bf4ca0', 'data_vg': 
'ceph-ffee4eed-4396-59ea-b922-2a73e3bf4ca0'})  2025-06-22 19:48:28.814796 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a67f9737-0c9f-5177-b2d5-f4c811291d8a', 'data_vg': 'ceph-a67f9737-0c9f-5177-b2d5-f4c811291d8a'})  2025-06-22 19:48:28.815797 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:48:28.816843 | orchestrator | 2025-06-22 19:48:28.816936 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-06-22 19:48:28.817684 | orchestrator | Sunday 22 June 2025 19:48:28 +0000 (0:00:00.140) 0:00:14.495 *********** 2025-06-22 19:48:28.947144 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:48:28.947595 | orchestrator | 2025-06-22 19:48:28.948679 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-06-22 19:48:28.949261 | orchestrator | Sunday 22 June 2025 19:48:28 +0000 (0:00:00.133) 0:00:14.629 *********** 2025-06-22 19:48:29.076647 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:48:29.077147 | orchestrator | 2025-06-22 19:48:29.077319 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-06-22 19:48:29.078810 | orchestrator | Sunday 22 June 2025 19:48:29 +0000 (0:00:00.128) 0:00:14.758 *********** 2025-06-22 19:48:29.194803 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:48:29.195944 | orchestrator | 2025-06-22 19:48:29.196437 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-06-22 19:48:29.197227 | orchestrator | Sunday 22 June 2025 19:48:29 +0000 (0:00:00.118) 0:00:14.877 *********** 2025-06-22 19:48:29.476089 | orchestrator | ok: [testbed-node-3] => { 2025-06-22 19:48:29.476630 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-06-22 19:48:29.478184 | orchestrator | } 2025-06-22 19:48:29.481057 | orchestrator | 2025-06-22 19:48:29.481741 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-06-22 19:48:29.483229 | orchestrator | Sunday 22 June 2025 19:48:29 +0000 (0:00:00.279) 0:00:15.156 *********** 2025-06-22 19:48:29.608187 | orchestrator | ok: [testbed-node-3] => { 2025-06-22 19:48:29.608266 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-06-22 19:48:29.609016 | orchestrator | } 2025-06-22 19:48:29.609966 | orchestrator | 2025-06-22 19:48:29.611059 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-06-22 19:48:29.611505 | orchestrator | Sunday 22 June 2025 19:48:29 +0000 (0:00:00.132) 0:00:15.289 *********** 2025-06-22 19:48:29.739599 | orchestrator | ok: [testbed-node-3] => { 2025-06-22 19:48:29.740008 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-06-22 19:48:29.740867 | orchestrator | } 2025-06-22 19:48:29.741453 | orchestrator | 2025-06-22 19:48:29.742258 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-06-22 19:48:29.742922 | orchestrator | Sunday 22 June 2025 19:48:29 +0000 (0:00:00.131) 0:00:15.421 *********** 2025-06-22 19:48:30.369903 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:48:30.370640 | orchestrator | 2025-06-22 19:48:30.371973 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-06-22 19:48:30.372906 | orchestrator | Sunday 22 June 2025 19:48:30 +0000 (0:00:00.630) 0:00:16.052 *********** 2025-06-22 19:48:30.880330 | orchestrator | ok: [testbed-node-3] 2025-06-22 
19:48:30.880994 | orchestrator | 2025-06-22 19:48:30.881925 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-06-22 19:48:30.882438 | orchestrator | Sunday 22 June 2025 19:48:30 +0000 (0:00:00.509) 0:00:16.561 *********** 2025-06-22 19:48:31.385788 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:48:31.386907 | orchestrator | 2025-06-22 19:48:31.387548 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-06-22 19:48:31.388968 | orchestrator | Sunday 22 June 2025 19:48:31 +0000 (0:00:00.505) 0:00:17.067 *********** 2025-06-22 19:48:31.518295 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:48:31.518379 | orchestrator | 2025-06-22 19:48:31.518394 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-06-22 19:48:31.518557 | orchestrator | Sunday 22 June 2025 19:48:31 +0000 (0:00:00.133) 0:00:17.200 *********** 2025-06-22 19:48:31.627589 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:48:31.628073 | orchestrator | 2025-06-22 19:48:31.629057 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-06-22 19:48:31.630089 | orchestrator | Sunday 22 June 2025 19:48:31 +0000 (0:00:00.109) 0:00:17.310 *********** 2025-06-22 19:48:31.736994 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:48:31.739218 | orchestrator | 2025-06-22 19:48:31.739252 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-06-22 19:48:31.739697 | orchestrator | Sunday 22 June 2025 19:48:31 +0000 (0:00:00.108) 0:00:17.418 *********** 2025-06-22 19:48:31.854219 | orchestrator | ok: [testbed-node-3] => { 2025-06-22 19:48:31.855314 | orchestrator |  "vgs_report": { 2025-06-22 19:48:31.856561 | orchestrator |  "vg": [] 2025-06-22 19:48:31.857553 | orchestrator |  } 2025-06-22 19:48:31.858350 | orchestrator | } 2025-06-22 19:48:31.859263 | orchestrator | 2025-06-22 19:48:31.859777 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-06-22 19:48:31.860424 | orchestrator | Sunday 22 June 2025 19:48:31 +0000 (0:00:00.118) 0:00:17.536 *********** 2025-06-22 19:48:31.983021 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:48:31.984754 | orchestrator | 2025-06-22 19:48:31.985783 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-06-22 19:48:31.986572 | orchestrator | Sunday 22 June 2025 19:48:31 +0000 (0:00:00.128) 0:00:17.665 *********** 2025-06-22 19:48:32.115697 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:48:32.116658 | orchestrator | 2025-06-22 19:48:32.117599 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-06-22 19:48:32.118127 | orchestrator | Sunday 22 June 2025 19:48:32 +0000 (0:00:00.131) 0:00:17.796 *********** 2025-06-22 19:48:32.462984 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:48:32.463289 | orchestrator | 2025-06-22 19:48:32.463798 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-06-22 19:48:32.464511 | orchestrator | Sunday 22 June 2025 19:48:32 +0000 (0:00:00.347) 0:00:18.144 *********** 2025-06-22 19:48:32.603973 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:48:32.604502 | orchestrator | 2025-06-22 19:48:32.605520 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] 
*********************** 2025-06-22 19:48:32.606664 | orchestrator | Sunday 22 June 2025 19:48:32 +0000 (0:00:00.141) 0:00:18.286 *********** 2025-06-22 19:48:32.752658 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:48:32.753146 | orchestrator | 2025-06-22 19:48:32.754240 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-06-22 19:48:32.755152 | orchestrator | Sunday 22 June 2025 19:48:32 +0000 (0:00:00.146) 0:00:18.432 *********** 2025-06-22 19:48:32.927413 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:48:32.928282 | orchestrator | 2025-06-22 19:48:32.929369 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-06-22 19:48:32.930403 | orchestrator | Sunday 22 June 2025 19:48:32 +0000 (0:00:00.175) 0:00:18.608 *********** 2025-06-22 19:48:33.079209 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:48:33.079812 | orchestrator | 2025-06-22 19:48:33.080582 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-06-22 19:48:33.081161 | orchestrator | Sunday 22 June 2025 19:48:33 +0000 (0:00:00.152) 0:00:18.761 *********** 2025-06-22 19:48:33.222422 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:48:33.222935 | orchestrator | 2025-06-22 19:48:33.224191 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-06-22 19:48:33.225404 | orchestrator | Sunday 22 June 2025 19:48:33 +0000 (0:00:00.142) 0:00:18.904 *********** 2025-06-22 19:48:33.368482 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:48:33.369026 | orchestrator | 2025-06-22 19:48:33.369968 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-06-22 19:48:33.371535 | orchestrator | Sunday 22 June 2025 19:48:33 +0000 (0:00:00.145) 0:00:19.049 *********** 2025-06-22 19:48:33.512375 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:48:33.512990 | orchestrator | 2025-06-22 19:48:33.515054 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-06-22 19:48:33.515676 | orchestrator | Sunday 22 June 2025 19:48:33 +0000 (0:00:00.143) 0:00:19.192 *********** 2025-06-22 19:48:33.657200 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:48:33.658822 | orchestrator | 2025-06-22 19:48:33.659302 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-06-22 19:48:33.660249 | orchestrator | Sunday 22 June 2025 19:48:33 +0000 (0:00:00.145) 0:00:19.338 *********** 2025-06-22 19:48:33.801596 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:48:33.804691 | orchestrator | 2025-06-22 19:48:33.804790 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-06-22 19:48:33.804805 | orchestrator | Sunday 22 June 2025 19:48:33 +0000 (0:00:00.137) 0:00:19.476 *********** 2025-06-22 19:48:33.930319 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:48:33.930643 | orchestrator | 2025-06-22 19:48:33.932299 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-06-22 19:48:33.933951 | orchestrator | Sunday 22 June 2025 19:48:33 +0000 (0:00:00.135) 0:00:19.611 *********** 2025-06-22 19:48:34.066490 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:48:34.067512 | orchestrator | 2025-06-22 19:48:34.068392 | orchestrator | TASK [Create DB LVs for ceph_db_devices] 
*************************************** 2025-06-22 19:48:34.069761 | orchestrator | Sunday 22 June 2025 19:48:34 +0000 (0:00:00.136) 0:00:19.748 *********** 2025-06-22 19:48:34.230544 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ffee4eed-4396-59ea-b922-2a73e3bf4ca0', 'data_vg': 'ceph-ffee4eed-4396-59ea-b922-2a73e3bf4ca0'})  2025-06-22 19:48:34.231349 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a67f9737-0c9f-5177-b2d5-f4c811291d8a', 'data_vg': 'ceph-a67f9737-0c9f-5177-b2d5-f4c811291d8a'})  2025-06-22 19:48:34.232325 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:48:34.233030 | orchestrator | 2025-06-22 19:48:34.234645 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-06-22 19:48:34.235302 | orchestrator | Sunday 22 June 2025 19:48:34 +0000 (0:00:00.164) 0:00:19.912 *********** 2025-06-22 19:48:34.614951 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ffee4eed-4396-59ea-b922-2a73e3bf4ca0', 'data_vg': 'ceph-ffee4eed-4396-59ea-b922-2a73e3bf4ca0'})  2025-06-22 19:48:34.616544 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a67f9737-0c9f-5177-b2d5-f4c811291d8a', 'data_vg': 'ceph-a67f9737-0c9f-5177-b2d5-f4c811291d8a'})  2025-06-22 19:48:34.618820 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:48:34.618857 | orchestrator | 2025-06-22 19:48:34.619053 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-06-22 19:48:34.620176 | orchestrator | Sunday 22 June 2025 19:48:34 +0000 (0:00:00.384) 0:00:20.296 *********** 2025-06-22 19:48:34.803322 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ffee4eed-4396-59ea-b922-2a73e3bf4ca0', 'data_vg': 'ceph-ffee4eed-4396-59ea-b922-2a73e3bf4ca0'})  2025-06-22 19:48:34.804838 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a67f9737-0c9f-5177-b2d5-f4c811291d8a', 'data_vg': 'ceph-a67f9737-0c9f-5177-b2d5-f4c811291d8a'})  2025-06-22 19:48:34.806917 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:48:34.807668 | orchestrator | 2025-06-22 19:48:34.808711 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-06-22 19:48:34.809628 | orchestrator | Sunday 22 June 2025 19:48:34 +0000 (0:00:00.184) 0:00:20.481 *********** 2025-06-22 19:48:34.956608 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ffee4eed-4396-59ea-b922-2a73e3bf4ca0', 'data_vg': 'ceph-ffee4eed-4396-59ea-b922-2a73e3bf4ca0'})  2025-06-22 19:48:34.957918 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a67f9737-0c9f-5177-b2d5-f4c811291d8a', 'data_vg': 'ceph-a67f9737-0c9f-5177-b2d5-f4c811291d8a'})  2025-06-22 19:48:34.958127 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:48:34.959701 | orchestrator | 2025-06-22 19:48:34.960713 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-06-22 19:48:34.961521 | orchestrator | Sunday 22 June 2025 19:48:34 +0000 (0:00:00.157) 0:00:20.638 *********** 2025-06-22 19:48:35.128412 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ffee4eed-4396-59ea-b922-2a73e3bf4ca0', 'data_vg': 'ceph-ffee4eed-4396-59ea-b922-2a73e3bf4ca0'})  2025-06-22 19:48:35.128760 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a67f9737-0c9f-5177-b2d5-f4c811291d8a', 'data_vg': 'ceph-a67f9737-0c9f-5177-b2d5-f4c811291d8a'})  
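All DB- and WAL-related creation tasks in this play are skipped because the layout here is block-only: no ceph_db_devices, ceph_wal_devices, or ceph_db_wal_devices are defined, so only the block VGs and LVs created above exist. The "Get list of Ceph LVs/PVs with associated VGs" tasks that follow read the LVM state back as JSON and feed it into the _lvs_cmd_output/_pvs_cmd_output combination step, so that any LV declared in lvm_volumes but missing on disk can be detected. The exact command line is not shown in this log; a minimal equivalent, written as an Ansible task purely for illustration (the command, field selection, and any ceph-* filtering are assumptions), would be:

  # Illustrative sketch: report LV/VG pairs as JSON, the same shape as the
  # lvm_report printed below ("lv": [{"lv_name": ..., "vg_name": ...}]).
  - name: Get list of Ceph LVs with associated VGs
    ansible.builtin.command: lvs --reportformat json -o lv_name,vg_name
    register: _lvs_cmd_output
    changed_when: false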
2025-06-22 19:48:35.129841 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:48:35.131630 | orchestrator | 2025-06-22 19:48:35.132315 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-06-22 19:48:35.133359 | orchestrator | Sunday 22 June 2025 19:48:35 +0000 (0:00:00.171) 0:00:20.809 *********** 2025-06-22 19:48:35.284094 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ffee4eed-4396-59ea-b922-2a73e3bf4ca0', 'data_vg': 'ceph-ffee4eed-4396-59ea-b922-2a73e3bf4ca0'})  2025-06-22 19:48:35.285740 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a67f9737-0c9f-5177-b2d5-f4c811291d8a', 'data_vg': 'ceph-a67f9737-0c9f-5177-b2d5-f4c811291d8a'})  2025-06-22 19:48:35.286502 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:48:35.288583 | orchestrator | 2025-06-22 19:48:35.288888 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-06-22 19:48:35.289604 | orchestrator | Sunday 22 June 2025 19:48:35 +0000 (0:00:00.155) 0:00:20.964 *********** 2025-06-22 19:48:35.461477 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ffee4eed-4396-59ea-b922-2a73e3bf4ca0', 'data_vg': 'ceph-ffee4eed-4396-59ea-b922-2a73e3bf4ca0'})  2025-06-22 19:48:35.461773 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a67f9737-0c9f-5177-b2d5-f4c811291d8a', 'data_vg': 'ceph-a67f9737-0c9f-5177-b2d5-f4c811291d8a'})  2025-06-22 19:48:35.463030 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:48:35.464208 | orchestrator | 2025-06-22 19:48:35.465362 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-06-22 19:48:35.465761 | orchestrator | Sunday 22 June 2025 19:48:35 +0000 (0:00:00.176) 0:00:21.141 *********** 2025-06-22 19:48:35.618611 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ffee4eed-4396-59ea-b922-2a73e3bf4ca0', 'data_vg': 'ceph-ffee4eed-4396-59ea-b922-2a73e3bf4ca0'})  2025-06-22 19:48:35.618743 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a67f9737-0c9f-5177-b2d5-f4c811291d8a', 'data_vg': 'ceph-a67f9737-0c9f-5177-b2d5-f4c811291d8a'})  2025-06-22 19:48:35.619890 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:48:35.620294 | orchestrator | 2025-06-22 19:48:35.621200 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-06-22 19:48:35.621781 | orchestrator | Sunday 22 June 2025 19:48:35 +0000 (0:00:00.159) 0:00:21.300 *********** 2025-06-22 19:48:36.173182 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:48:36.174723 | orchestrator | 2025-06-22 19:48:36.175900 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-06-22 19:48:36.177175 | orchestrator | Sunday 22 June 2025 19:48:36 +0000 (0:00:00.552) 0:00:21.853 *********** 2025-06-22 19:48:36.728866 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:48:36.729590 | orchestrator | 2025-06-22 19:48:36.730513 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-06-22 19:48:36.731241 | orchestrator | Sunday 22 June 2025 19:48:36 +0000 (0:00:00.557) 0:00:22.411 *********** 2025-06-22 19:48:36.875364 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:48:36.877979 | orchestrator | 2025-06-22 19:48:36.878074 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 
2025-06-22 19:48:36.878091 | orchestrator | Sunday 22 June 2025 19:48:36 +0000 (0:00:00.143) 0:00:22.554 *********** 2025-06-22 19:48:37.053736 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-a67f9737-0c9f-5177-b2d5-f4c811291d8a', 'vg_name': 'ceph-a67f9737-0c9f-5177-b2d5-f4c811291d8a'}) 2025-06-22 19:48:37.054685 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-ffee4eed-4396-59ea-b922-2a73e3bf4ca0', 'vg_name': 'ceph-ffee4eed-4396-59ea-b922-2a73e3bf4ca0'}) 2025-06-22 19:48:37.055195 | orchestrator | 2025-06-22 19:48:37.056079 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-06-22 19:48:37.057049 | orchestrator | Sunday 22 June 2025 19:48:37 +0000 (0:00:00.180) 0:00:22.734 *********** 2025-06-22 19:48:37.218449 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ffee4eed-4396-59ea-b922-2a73e3bf4ca0', 'data_vg': 'ceph-ffee4eed-4396-59ea-b922-2a73e3bf4ca0'})  2025-06-22 19:48:37.220298 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a67f9737-0c9f-5177-b2d5-f4c811291d8a', 'data_vg': 'ceph-a67f9737-0c9f-5177-b2d5-f4c811291d8a'})  2025-06-22 19:48:37.221204 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:48:37.221969 | orchestrator | 2025-06-22 19:48:37.222970 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-06-22 19:48:37.223799 | orchestrator | Sunday 22 June 2025 19:48:37 +0000 (0:00:00.164) 0:00:22.899 *********** 2025-06-22 19:48:37.501652 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ffee4eed-4396-59ea-b922-2a73e3bf4ca0', 'data_vg': 'ceph-ffee4eed-4396-59ea-b922-2a73e3bf4ca0'})  2025-06-22 19:48:37.503128 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a67f9737-0c9f-5177-b2d5-f4c811291d8a', 'data_vg': 'ceph-a67f9737-0c9f-5177-b2d5-f4c811291d8a'})  2025-06-22 19:48:37.504688 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:48:37.504814 | orchestrator | 2025-06-22 19:48:37.505991 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-06-22 19:48:37.506964 | orchestrator | Sunday 22 June 2025 19:48:37 +0000 (0:00:00.284) 0:00:23.184 *********** 2025-06-22 19:48:37.649962 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-ffee4eed-4396-59ea-b922-2a73e3bf4ca0', 'data_vg': 'ceph-ffee4eed-4396-59ea-b922-2a73e3bf4ca0'})  2025-06-22 19:48:37.650158 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a67f9737-0c9f-5177-b2d5-f4c811291d8a', 'data_vg': 'ceph-a67f9737-0c9f-5177-b2d5-f4c811291d8a'})  2025-06-22 19:48:37.650323 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:48:37.650861 | orchestrator | 2025-06-22 19:48:37.651812 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-06-22 19:48:37.652469 | orchestrator | Sunday 22 June 2025 19:48:37 +0000 (0:00:00.146) 0:00:23.330 *********** 2025-06-22 19:48:37.930673 | orchestrator | ok: [testbed-node-3] => { 2025-06-22 19:48:37.931360 | orchestrator |  "lvm_report": { 2025-06-22 19:48:37.932354 | orchestrator |  "lv": [ 2025-06-22 19:48:37.933442 | orchestrator |  { 2025-06-22 19:48:37.934524 | orchestrator |  "lv_name": "osd-block-a67f9737-0c9f-5177-b2d5-f4c811291d8a", 2025-06-22 19:48:37.935728 | orchestrator |  "vg_name": "ceph-a67f9737-0c9f-5177-b2d5-f4c811291d8a" 2025-06-22 19:48:37.936551 | orchestrator |  }, 2025-06-22 19:48:37.937611 
| orchestrator |  { 2025-06-22 19:48:37.938292 | orchestrator |  "lv_name": "osd-block-ffee4eed-4396-59ea-b922-2a73e3bf4ca0", 2025-06-22 19:48:37.939350 | orchestrator |  "vg_name": "ceph-ffee4eed-4396-59ea-b922-2a73e3bf4ca0" 2025-06-22 19:48:37.939798 | orchestrator |  } 2025-06-22 19:48:37.940707 | orchestrator |  ], 2025-06-22 19:48:37.940891 | orchestrator |  "pv": [ 2025-06-22 19:48:37.941757 | orchestrator |  { 2025-06-22 19:48:37.942430 | orchestrator |  "pv_name": "/dev/sdb", 2025-06-22 19:48:37.943044 | orchestrator |  "vg_name": "ceph-ffee4eed-4396-59ea-b922-2a73e3bf4ca0" 2025-06-22 19:48:37.943555 | orchestrator |  }, 2025-06-22 19:48:37.944041 | orchestrator |  { 2025-06-22 19:48:37.944919 | orchestrator |  "pv_name": "/dev/sdc", 2025-06-22 19:48:37.945254 | orchestrator |  "vg_name": "ceph-a67f9737-0c9f-5177-b2d5-f4c811291d8a" 2025-06-22 19:48:37.945862 | orchestrator |  } 2025-06-22 19:48:37.946757 | orchestrator |  ] 2025-06-22 19:48:37.947487 | orchestrator |  } 2025-06-22 19:48:37.948204 | orchestrator | } 2025-06-22 19:48:37.948666 | orchestrator | 2025-06-22 19:48:37.949011 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-06-22 19:48:37.949375 | orchestrator | 2025-06-22 19:48:37.949805 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-06-22 19:48:37.950265 | orchestrator | Sunday 22 June 2025 19:48:37 +0000 (0:00:00.280) 0:00:23.610 *********** 2025-06-22 19:48:38.151074 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-06-22 19:48:38.151732 | orchestrator | 2025-06-22 19:48:38.152435 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-06-22 19:48:38.154009 | orchestrator | Sunday 22 June 2025 19:48:38 +0000 (0:00:00.221) 0:00:23.832 *********** 2025-06-22 19:48:38.361364 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:48:38.361450 | orchestrator | 2025-06-22 19:48:38.362414 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:48:38.363144 | orchestrator | Sunday 22 June 2025 19:48:38 +0000 (0:00:00.210) 0:00:24.042 *********** 2025-06-22 19:48:38.732057 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-06-22 19:48:38.733058 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-06-22 19:48:38.734356 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-06-22 19:48:38.735773 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-06-22 19:48:38.737202 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-06-22 19:48:38.738212 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-06-22 19:48:38.739249 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-06-22 19:48:38.740153 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-06-22 19:48:38.740953 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-06-22 19:48:38.742297 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-06-22 19:48:38.742972 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-06-22 19:48:38.743820 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-06-22 19:48:38.744050 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-06-22 19:48:38.744633 | orchestrator | 2025-06-22 19:48:38.745128 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:48:38.745696 | orchestrator | Sunday 22 June 2025 19:48:38 +0000 (0:00:00.372) 0:00:24.414 *********** 2025-06-22 19:48:38.927895 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:48:38.927996 | orchestrator | 2025-06-22 19:48:38.928541 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:48:38.929442 | orchestrator | Sunday 22 June 2025 19:48:38 +0000 (0:00:00.193) 0:00:24.608 *********** 2025-06-22 19:48:39.100058 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:48:39.103782 | orchestrator | 2025-06-22 19:48:39.105058 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:48:39.105565 | orchestrator | Sunday 22 June 2025 19:48:39 +0000 (0:00:00.174) 0:00:24.782 *********** 2025-06-22 19:48:39.273759 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:48:39.275291 | orchestrator | 2025-06-22 19:48:39.278470 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:48:39.278958 | orchestrator | Sunday 22 June 2025 19:48:39 +0000 (0:00:00.174) 0:00:24.956 *********** 2025-06-22 19:48:39.750203 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:48:39.750790 | orchestrator | 2025-06-22 19:48:39.751874 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:48:39.752404 | orchestrator | Sunday 22 June 2025 19:48:39 +0000 (0:00:00.475) 0:00:25.432 *********** 2025-06-22 19:48:39.946158 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:48:39.948358 | orchestrator | 2025-06-22 19:48:39.948398 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:48:39.948847 | orchestrator | Sunday 22 June 2025 19:48:39 +0000 (0:00:00.194) 0:00:25.627 *********** 2025-06-22 19:48:40.132952 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:48:40.133413 | orchestrator | 2025-06-22 19:48:40.134095 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:48:40.134836 | orchestrator | Sunday 22 June 2025 19:48:40 +0000 (0:00:00.187) 0:00:25.814 *********** 2025-06-22 19:48:40.314392 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:48:40.315905 | orchestrator | 2025-06-22 19:48:40.316682 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:48:40.317794 | orchestrator | Sunday 22 June 2025 19:48:40 +0000 (0:00:00.181) 0:00:25.996 *********** 2025-06-22 19:48:40.496590 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:48:40.496760 | orchestrator | 2025-06-22 19:48:40.497655 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:48:40.498413 | orchestrator | Sunday 22 June 2025 19:48:40 +0000 (0:00:00.182) 0:00:26.178 *********** 2025-06-22 19:48:40.875558 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-0QEMU_QEMU_HARDDISK_81f2e499-4268-4bd5-a5ff-46d49ba2fab9) 2025-06-22 19:48:40.875738 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_81f2e499-4268-4bd5-a5ff-46d49ba2fab9) 2025-06-22 19:48:40.876599 | orchestrator | 2025-06-22 19:48:40.877296 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:48:40.877917 | orchestrator | Sunday 22 June 2025 19:48:40 +0000 (0:00:00.377) 0:00:26.556 *********** 2025-06-22 19:48:41.263308 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_1702d6d9-f6d5-467e-9c44-3c93c3ac891d) 2025-06-22 19:48:41.264260 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_1702d6d9-f6d5-467e-9c44-3c93c3ac891d) 2025-06-22 19:48:41.265280 | orchestrator | 2025-06-22 19:48:41.265893 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:48:41.266516 | orchestrator | Sunday 22 June 2025 19:48:41 +0000 (0:00:00.389) 0:00:26.945 *********** 2025-06-22 19:48:41.687031 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_a49b6e77-acd0-4f36-887b-4e4ec75cdfa4) 2025-06-22 19:48:41.687561 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_a49b6e77-acd0-4f36-887b-4e4ec75cdfa4) 2025-06-22 19:48:41.687900 | orchestrator | 2025-06-22 19:48:41.688399 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:48:41.688808 | orchestrator | Sunday 22 June 2025 19:48:41 +0000 (0:00:00.424) 0:00:27.370 *********** 2025-06-22 19:48:42.084207 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_bbdef6ad-891d-4656-ac9b-bc24d19b561e) 2025-06-22 19:48:42.086244 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_bbdef6ad-891d-4656-ac9b-bc24d19b561e) 2025-06-22 19:48:42.088096 | orchestrator | 2025-06-22 19:48:42.089224 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:48:42.089890 | orchestrator | Sunday 22 June 2025 19:48:42 +0000 (0:00:00.393) 0:00:27.764 *********** 2025-06-22 19:48:42.391998 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-06-22 19:48:42.392202 | orchestrator | 2025-06-22 19:48:42.392487 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:48:42.393245 | orchestrator | Sunday 22 June 2025 19:48:42 +0000 (0:00:00.306) 0:00:28.071 *********** 2025-06-22 19:48:42.912317 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-06-22 19:48:42.912440 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-06-22 19:48:42.912543 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-06-22 19:48:42.913082 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-06-22 19:48:42.913282 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-06-22 19:48:42.914204 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-06-22 19:48:42.914577 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-06-22 19:48:42.914923 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-06-22 19:48:42.915554 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-06-22 19:48:42.915966 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-06-22 19:48:42.916545 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-06-22 19:48:42.916758 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-06-22 19:48:42.916915 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-06-22 19:48:42.917408 | orchestrator | 2025-06-22 19:48:42.917668 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:48:42.918506 | orchestrator | Sunday 22 June 2025 19:48:42 +0000 (0:00:00.519) 0:00:28.590 *********** 2025-06-22 19:48:43.091971 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:48:43.092054 | orchestrator | 2025-06-22 19:48:43.093795 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:48:43.094495 | orchestrator | Sunday 22 June 2025 19:48:43 +0000 (0:00:00.182) 0:00:28.773 *********** 2025-06-22 19:48:43.284555 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:48:43.284704 | orchestrator | 2025-06-22 19:48:43.286106 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:48:43.286600 | orchestrator | Sunday 22 June 2025 19:48:43 +0000 (0:00:00.193) 0:00:28.966 *********** 2025-06-22 19:48:43.505294 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:48:43.506938 | orchestrator | 2025-06-22 19:48:43.507292 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:48:43.508910 | orchestrator | Sunday 22 June 2025 19:48:43 +0000 (0:00:00.221) 0:00:29.188 *********** 2025-06-22 19:48:43.691017 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:48:43.691454 | orchestrator | 2025-06-22 19:48:43.692715 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:48:43.694288 | orchestrator | Sunday 22 June 2025 19:48:43 +0000 (0:00:00.185) 0:00:29.373 *********** 2025-06-22 19:48:43.871157 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:48:43.871241 | orchestrator | 2025-06-22 19:48:43.875187 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:48:43.875552 | orchestrator | Sunday 22 June 2025 19:48:43 +0000 (0:00:00.176) 0:00:29.549 *********** 2025-06-22 19:48:44.065393 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:48:44.066235 | orchestrator | 2025-06-22 19:48:44.068942 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:48:44.071243 | orchestrator | Sunday 22 June 2025 19:48:44 +0000 (0:00:00.194) 0:00:29.744 *********** 2025-06-22 19:48:44.269761 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:48:44.270867 | orchestrator | 2025-06-22 19:48:44.271831 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:48:44.273038 | orchestrator | Sunday 22 June 2025 19:48:44 +0000 (0:00:00.208) 0:00:29.952 *********** 2025-06-22 19:48:44.465777 | orchestrator | 
skipping: [testbed-node-4] 2025-06-22 19:48:44.467332 | orchestrator | 2025-06-22 19:48:44.468241 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:48:44.469303 | orchestrator | Sunday 22 June 2025 19:48:44 +0000 (0:00:00.193) 0:00:30.145 *********** 2025-06-22 19:48:45.308729 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-06-22 19:48:45.309433 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-06-22 19:48:45.310440 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-06-22 19:48:45.311362 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-06-22 19:48:45.312003 | orchestrator | 2025-06-22 19:48:45.312799 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:48:45.313501 | orchestrator | Sunday 22 June 2025 19:48:45 +0000 (0:00:00.842) 0:00:30.988 *********** 2025-06-22 19:48:45.494337 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:48:45.494509 | orchestrator | 2025-06-22 19:48:45.495620 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:48:45.499107 | orchestrator | Sunday 22 June 2025 19:48:45 +0000 (0:00:00.188) 0:00:31.176 *********** 2025-06-22 19:48:45.699539 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:48:45.699631 | orchestrator | 2025-06-22 19:48:45.700200 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:48:45.700987 | orchestrator | Sunday 22 June 2025 19:48:45 +0000 (0:00:00.204) 0:00:31.381 *********** 2025-06-22 19:48:46.223177 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:48:46.224575 | orchestrator | 2025-06-22 19:48:46.225307 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:48:46.225876 | orchestrator | Sunday 22 June 2025 19:48:46 +0000 (0:00:00.521) 0:00:31.902 *********** 2025-06-22 19:48:46.419100 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:48:46.419790 | orchestrator | 2025-06-22 19:48:46.420650 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-06-22 19:48:46.421557 | orchestrator | Sunday 22 June 2025 19:48:46 +0000 (0:00:00.197) 0:00:32.100 *********** 2025-06-22 19:48:46.564274 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:48:46.564936 | orchestrator | 2025-06-22 19:48:46.565773 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-06-22 19:48:46.566419 | orchestrator | Sunday 22 June 2025 19:48:46 +0000 (0:00:00.143) 0:00:32.244 *********** 2025-06-22 19:48:46.734000 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '420ac1c2-ff56-5c56-8dd6-abe068aa03ad'}}) 2025-06-22 19:48:46.734712 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '21b37dc5-48e7-5a6c-9835-121dab35d047'}}) 2025-06-22 19:48:46.734830 | orchestrator | 2025-06-22 19:48:46.735077 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-06-22 19:48:46.735729 | orchestrator | Sunday 22 June 2025 19:48:46 +0000 (0:00:00.173) 0:00:32.417 *********** 2025-06-22 19:48:48.658982 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-420ac1c2-ff56-5c56-8dd6-abe068aa03ad', 'data_vg': 'ceph-420ac1c2-ff56-5c56-8dd6-abe068aa03ad'}) 2025-06-22 19:48:48.660006 | orchestrator | changed: 
[testbed-node-4] => (item={'data': 'osd-block-21b37dc5-48e7-5a6c-9835-121dab35d047', 'data_vg': 'ceph-21b37dc5-48e7-5a6c-9835-121dab35d047'}) 2025-06-22 19:48:48.661320 | orchestrator | 2025-06-22 19:48:48.663352 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-06-22 19:48:48.664152 | orchestrator | Sunday 22 June 2025 19:48:48 +0000 (0:00:01.921) 0:00:34.338 *********** 2025-06-22 19:48:48.819811 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-420ac1c2-ff56-5c56-8dd6-abe068aa03ad', 'data_vg': 'ceph-420ac1c2-ff56-5c56-8dd6-abe068aa03ad'})  2025-06-22 19:48:48.819892 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-21b37dc5-48e7-5a6c-9835-121dab35d047', 'data_vg': 'ceph-21b37dc5-48e7-5a6c-9835-121dab35d047'})  2025-06-22 19:48:48.820627 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:48:48.821289 | orchestrator | 2025-06-22 19:48:48.822190 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-06-22 19:48:48.822789 | orchestrator | Sunday 22 June 2025 19:48:48 +0000 (0:00:00.159) 0:00:34.498 *********** 2025-06-22 19:48:50.132436 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-420ac1c2-ff56-5c56-8dd6-abe068aa03ad', 'data_vg': 'ceph-420ac1c2-ff56-5c56-8dd6-abe068aa03ad'}) 2025-06-22 19:48:50.133589 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-21b37dc5-48e7-5a6c-9835-121dab35d047', 'data_vg': 'ceph-21b37dc5-48e7-5a6c-9835-121dab35d047'}) 2025-06-22 19:48:50.134741 | orchestrator | 2025-06-22 19:48:50.135910 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-06-22 19:48:50.137227 | orchestrator | Sunday 22 June 2025 19:48:50 +0000 (0:00:01.313) 0:00:35.812 *********** 2025-06-22 19:48:50.304447 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-420ac1c2-ff56-5c56-8dd6-abe068aa03ad', 'data_vg': 'ceph-420ac1c2-ff56-5c56-8dd6-abe068aa03ad'})  2025-06-22 19:48:50.305008 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-21b37dc5-48e7-5a6c-9835-121dab35d047', 'data_vg': 'ceph-21b37dc5-48e7-5a6c-9835-121dab35d047'})  2025-06-22 19:48:50.306157 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:48:50.307204 | orchestrator | 2025-06-22 19:48:50.308064 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-06-22 19:48:50.308848 | orchestrator | Sunday 22 June 2025 19:48:50 +0000 (0:00:00.171) 0:00:35.983 *********** 2025-06-22 19:48:50.440109 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:48:50.440860 | orchestrator | 2025-06-22 19:48:50.441926 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-06-22 19:48:50.442765 | orchestrator | Sunday 22 June 2025 19:48:50 +0000 (0:00:00.138) 0:00:36.121 *********** 2025-06-22 19:48:50.596310 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-420ac1c2-ff56-5c56-8dd6-abe068aa03ad', 'data_vg': 'ceph-420ac1c2-ff56-5c56-8dd6-abe068aa03ad'})  2025-06-22 19:48:50.597479 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-21b37dc5-48e7-5a6c-9835-121dab35d047', 'data_vg': 'ceph-21b37dc5-48e7-5a6c-9835-121dab35d047'})  2025-06-22 19:48:50.599639 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:48:50.600443 | orchestrator | 2025-06-22 19:48:50.601605 | orchestrator | TASK [Create WAL VGs] 
********************************************************** 2025-06-22 19:48:50.602620 | orchestrator | Sunday 22 June 2025 19:48:50 +0000 (0:00:00.156) 0:00:36.278 *********** 2025-06-22 19:48:50.748909 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:48:50.750957 | orchestrator | 2025-06-22 19:48:50.753321 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-06-22 19:48:50.754541 | orchestrator | Sunday 22 June 2025 19:48:50 +0000 (0:00:00.152) 0:00:36.430 *********** 2025-06-22 19:48:50.884523 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-420ac1c2-ff56-5c56-8dd6-abe068aa03ad', 'data_vg': 'ceph-420ac1c2-ff56-5c56-8dd6-abe068aa03ad'})  2025-06-22 19:48:50.885870 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-21b37dc5-48e7-5a6c-9835-121dab35d047', 'data_vg': 'ceph-21b37dc5-48e7-5a6c-9835-121dab35d047'})  2025-06-22 19:48:50.887101 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:48:50.887975 | orchestrator | 2025-06-22 19:48:50.888763 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-06-22 19:48:50.889422 | orchestrator | Sunday 22 June 2025 19:48:50 +0000 (0:00:00.132) 0:00:36.563 *********** 2025-06-22 19:48:51.157933 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:48:51.159240 | orchestrator | 2025-06-22 19:48:51.159801 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-06-22 19:48:51.161306 | orchestrator | Sunday 22 June 2025 19:48:51 +0000 (0:00:00.276) 0:00:36.839 *********** 2025-06-22 19:48:51.280812 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-420ac1c2-ff56-5c56-8dd6-abe068aa03ad', 'data_vg': 'ceph-420ac1c2-ff56-5c56-8dd6-abe068aa03ad'})  2025-06-22 19:48:51.281682 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-21b37dc5-48e7-5a6c-9835-121dab35d047', 'data_vg': 'ceph-21b37dc5-48e7-5a6c-9835-121dab35d047'})  2025-06-22 19:48:51.283877 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:48:51.283926 | orchestrator | 2025-06-22 19:48:51.283940 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-06-22 19:48:51.283953 | orchestrator | Sunday 22 June 2025 19:48:51 +0000 (0:00:00.123) 0:00:36.963 *********** 2025-06-22 19:48:51.408908 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:48:51.409822 | orchestrator | 2025-06-22 19:48:51.410555 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-06-22 19:48:51.411928 | orchestrator | Sunday 22 June 2025 19:48:51 +0000 (0:00:00.127) 0:00:37.091 *********** 2025-06-22 19:48:51.562637 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-420ac1c2-ff56-5c56-8dd6-abe068aa03ad', 'data_vg': 'ceph-420ac1c2-ff56-5c56-8dd6-abe068aa03ad'})  2025-06-22 19:48:51.563765 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-21b37dc5-48e7-5a6c-9835-121dab35d047', 'data_vg': 'ceph-21b37dc5-48e7-5a6c-9835-121dab35d047'})  2025-06-22 19:48:51.565215 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:48:51.566199 | orchestrator | 2025-06-22 19:48:51.567396 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-06-22 19:48:51.568072 | orchestrator | Sunday 22 June 2025 19:48:51 +0000 (0:00:00.152) 0:00:37.243 *********** 2025-06-22 19:48:51.730251 | orchestrator | skipping: 
[testbed-node-4] => (item={'data': 'osd-block-420ac1c2-ff56-5c56-8dd6-abe068aa03ad', 'data_vg': 'ceph-420ac1c2-ff56-5c56-8dd6-abe068aa03ad'})  2025-06-22 19:48:51.731049 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-21b37dc5-48e7-5a6c-9835-121dab35d047', 'data_vg': 'ceph-21b37dc5-48e7-5a6c-9835-121dab35d047'})  2025-06-22 19:48:51.732821 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:48:51.735150 | orchestrator | 2025-06-22 19:48:51.736209 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-06-22 19:48:51.737229 | orchestrator | Sunday 22 June 2025 19:48:51 +0000 (0:00:00.167) 0:00:37.411 *********** 2025-06-22 19:48:51.899344 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-420ac1c2-ff56-5c56-8dd6-abe068aa03ad', 'data_vg': 'ceph-420ac1c2-ff56-5c56-8dd6-abe068aa03ad'})  2025-06-22 19:48:51.901313 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-21b37dc5-48e7-5a6c-9835-121dab35d047', 'data_vg': 'ceph-21b37dc5-48e7-5a6c-9835-121dab35d047'})  2025-06-22 19:48:51.901912 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:48:51.902869 | orchestrator | 2025-06-22 19:48:51.904512 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-06-22 19:48:51.904852 | orchestrator | Sunday 22 June 2025 19:48:51 +0000 (0:00:00.168) 0:00:37.579 *********** 2025-06-22 19:48:52.047036 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:48:52.047804 | orchestrator | 2025-06-22 19:48:52.048722 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-06-22 19:48:52.049840 | orchestrator | Sunday 22 June 2025 19:48:52 +0000 (0:00:00.146) 0:00:37.726 *********** 2025-06-22 19:48:52.193742 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:48:52.193929 | orchestrator | 2025-06-22 19:48:52.195254 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-06-22 19:48:52.197459 | orchestrator | Sunday 22 June 2025 19:48:52 +0000 (0:00:00.146) 0:00:37.873 *********** 2025-06-22 19:48:52.342137 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:48:52.342468 | orchestrator | 2025-06-22 19:48:52.343909 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-06-22 19:48:52.344421 | orchestrator | Sunday 22 June 2025 19:48:52 +0000 (0:00:00.150) 0:00:38.024 *********** 2025-06-22 19:48:52.518779 | orchestrator | ok: [testbed-node-4] => { 2025-06-22 19:48:52.520808 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-06-22 19:48:52.520894 | orchestrator | } 2025-06-22 19:48:52.522393 | orchestrator | 2025-06-22 19:48:52.526576 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-06-22 19:48:52.527101 | orchestrator | Sunday 22 June 2025 19:48:52 +0000 (0:00:00.174) 0:00:38.198 *********** 2025-06-22 19:48:52.666451 | orchestrator | ok: [testbed-node-4] => { 2025-06-22 19:48:52.667670 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-06-22 19:48:52.668935 | orchestrator | } 2025-06-22 19:48:52.670689 | orchestrator | 2025-06-22 19:48:52.670767 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-06-22 19:48:52.671378 | orchestrator | Sunday 22 June 2025 19:48:52 +0000 (0:00:00.149) 0:00:38.348 *********** 2025-06-22 19:48:52.830251 | orchestrator | ok: [testbed-node-4] => { 
2025-06-22 19:48:52.831585 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-06-22 19:48:52.835022 | orchestrator | } 2025-06-22 19:48:52.835824 | orchestrator | 2025-06-22 19:48:52.836542 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-06-22 19:48:52.837584 | orchestrator | Sunday 22 June 2025 19:48:52 +0000 (0:00:00.162) 0:00:38.511 *********** 2025-06-22 19:48:53.601938 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:48:53.602295 | orchestrator | 2025-06-22 19:48:53.603527 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-06-22 19:48:53.604328 | orchestrator | Sunday 22 June 2025 19:48:53 +0000 (0:00:00.771) 0:00:39.283 *********** 2025-06-22 19:48:54.110055 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:48:54.110295 | orchestrator | 2025-06-22 19:48:54.111640 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-06-22 19:48:54.112166 | orchestrator | Sunday 22 June 2025 19:48:54 +0000 (0:00:00.506) 0:00:39.789 *********** 2025-06-22 19:48:54.645477 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:48:54.646790 | orchestrator | 2025-06-22 19:48:54.647939 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-06-22 19:48:54.648818 | orchestrator | Sunday 22 June 2025 19:48:54 +0000 (0:00:00.535) 0:00:40.324 *********** 2025-06-22 19:48:54.801386 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:48:54.802708 | orchestrator | 2025-06-22 19:48:54.804185 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-06-22 19:48:54.805208 | orchestrator | Sunday 22 June 2025 19:48:54 +0000 (0:00:00.158) 0:00:40.482 *********** 2025-06-22 19:48:54.922389 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:48:54.923330 | orchestrator | 2025-06-22 19:48:54.924639 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-06-22 19:48:54.925826 | orchestrator | Sunday 22 June 2025 19:48:54 +0000 (0:00:00.120) 0:00:40.603 *********** 2025-06-22 19:48:55.045488 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:48:55.046225 | orchestrator | 2025-06-22 19:48:55.047305 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-06-22 19:48:55.048241 | orchestrator | Sunday 22 June 2025 19:48:55 +0000 (0:00:00.123) 0:00:40.726 *********** 2025-06-22 19:48:55.193334 | orchestrator | ok: [testbed-node-4] => { 2025-06-22 19:48:55.194107 | orchestrator |  "vgs_report": { 2025-06-22 19:48:55.194698 | orchestrator |  "vg": [] 2025-06-22 19:48:55.196741 | orchestrator |  } 2025-06-22 19:48:55.197367 | orchestrator | } 2025-06-22 19:48:55.197872 | orchestrator | 2025-06-22 19:48:55.198837 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-06-22 19:48:55.199172 | orchestrator | Sunday 22 June 2025 19:48:55 +0000 (0:00:00.147) 0:00:40.874 *********** 2025-06-22 19:48:55.360592 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:48:55.361265 | orchestrator | 2025-06-22 19:48:55.363098 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-06-22 19:48:55.364579 | orchestrator | Sunday 22 June 2025 19:48:55 +0000 (0:00:00.167) 0:00:41.041 *********** 2025-06-22 19:48:55.525937 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:48:55.526682 | 
orchestrator | 2025-06-22 19:48:55.529330 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-06-22 19:48:55.533433 | orchestrator | Sunday 22 June 2025 19:48:55 +0000 (0:00:00.164) 0:00:41.206 *********** 2025-06-22 19:48:55.671569 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:48:55.671667 | orchestrator | 2025-06-22 19:48:55.672709 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-06-22 19:48:55.673546 | orchestrator | Sunday 22 June 2025 19:48:55 +0000 (0:00:00.145) 0:00:41.352 *********** 2025-06-22 19:48:55.825702 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:48:55.825857 | orchestrator | 2025-06-22 19:48:55.827162 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-06-22 19:48:55.828694 | orchestrator | Sunday 22 June 2025 19:48:55 +0000 (0:00:00.154) 0:00:41.506 *********** 2025-06-22 19:48:55.995427 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:48:55.995529 | orchestrator | 2025-06-22 19:48:55.995546 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-06-22 19:48:55.995820 | orchestrator | Sunday 22 June 2025 19:48:55 +0000 (0:00:00.164) 0:00:41.671 *********** 2025-06-22 19:48:56.365163 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:48:56.366621 | orchestrator | 2025-06-22 19:48:56.366993 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-06-22 19:48:56.368749 | orchestrator | Sunday 22 June 2025 19:48:56 +0000 (0:00:00.375) 0:00:42.047 *********** 2025-06-22 19:48:56.525990 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:48:56.527389 | orchestrator | 2025-06-22 19:48:56.529223 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-06-22 19:48:56.531089 | orchestrator | Sunday 22 June 2025 19:48:56 +0000 (0:00:00.159) 0:00:42.206 *********** 2025-06-22 19:48:56.669457 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:48:56.670106 | orchestrator | 2025-06-22 19:48:56.671591 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-06-22 19:48:56.672682 | orchestrator | Sunday 22 June 2025 19:48:56 +0000 (0:00:00.143) 0:00:42.350 *********** 2025-06-22 19:48:56.827924 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:48:56.828016 | orchestrator | 2025-06-22 19:48:56.828026 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-06-22 19:48:56.828036 | orchestrator | Sunday 22 June 2025 19:48:56 +0000 (0:00:00.152) 0:00:42.502 *********** 2025-06-22 19:48:56.960770 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:48:56.962602 | orchestrator | 2025-06-22 19:48:56.963762 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-06-22 19:48:56.964156 | orchestrator | Sunday 22 June 2025 19:48:56 +0000 (0:00:00.138) 0:00:42.641 *********** 2025-06-22 19:48:57.110929 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:48:57.111525 | orchestrator | 2025-06-22 19:48:57.112801 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-06-22 19:48:57.114094 | orchestrator | Sunday 22 June 2025 19:48:57 +0000 (0:00:00.150) 0:00:42.791 *********** 2025-06-22 19:48:57.271291 | orchestrator | skipping: [testbed-node-4] 2025-06-22 
19:48:57.272382 | orchestrator | 2025-06-22 19:48:57.273936 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-06-22 19:48:57.274942 | orchestrator | Sunday 22 June 2025 19:48:57 +0000 (0:00:00.160) 0:00:42.952 *********** 2025-06-22 19:48:57.406163 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:48:57.406355 | orchestrator | 2025-06-22 19:48:57.407059 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-06-22 19:48:57.407862 | orchestrator | Sunday 22 June 2025 19:48:57 +0000 (0:00:00.135) 0:00:43.088 *********** 2025-06-22 19:48:57.555536 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:48:57.555749 | orchestrator | 2025-06-22 19:48:57.556594 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-06-22 19:48:57.557216 | orchestrator | Sunday 22 June 2025 19:48:57 +0000 (0:00:00.149) 0:00:43.237 *********** 2025-06-22 19:48:57.707755 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-420ac1c2-ff56-5c56-8dd6-abe068aa03ad', 'data_vg': 'ceph-420ac1c2-ff56-5c56-8dd6-abe068aa03ad'})  2025-06-22 19:48:57.707970 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-21b37dc5-48e7-5a6c-9835-121dab35d047', 'data_vg': 'ceph-21b37dc5-48e7-5a6c-9835-121dab35d047'})  2025-06-22 19:48:57.708890 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:48:57.711363 | orchestrator | 2025-06-22 19:48:57.712162 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-06-22 19:48:57.712831 | orchestrator | Sunday 22 June 2025 19:48:57 +0000 (0:00:00.151) 0:00:43.388 *********** 2025-06-22 19:48:57.862087 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-420ac1c2-ff56-5c56-8dd6-abe068aa03ad', 'data_vg': 'ceph-420ac1c2-ff56-5c56-8dd6-abe068aa03ad'})  2025-06-22 19:48:57.863029 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-21b37dc5-48e7-5a6c-9835-121dab35d047', 'data_vg': 'ceph-21b37dc5-48e7-5a6c-9835-121dab35d047'})  2025-06-22 19:48:57.864019 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:48:57.865363 | orchestrator | 2025-06-22 19:48:57.866416 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-06-22 19:48:57.866800 | orchestrator | Sunday 22 June 2025 19:48:57 +0000 (0:00:00.154) 0:00:43.543 *********** 2025-06-22 19:48:58.007327 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-420ac1c2-ff56-5c56-8dd6-abe068aa03ad', 'data_vg': 'ceph-420ac1c2-ff56-5c56-8dd6-abe068aa03ad'})  2025-06-22 19:48:58.009339 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-21b37dc5-48e7-5a6c-9835-121dab35d047', 'data_vg': 'ceph-21b37dc5-48e7-5a6c-9835-121dab35d047'})  2025-06-22 19:48:58.010292 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:48:58.011729 | orchestrator | 2025-06-22 19:48:58.013047 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-06-22 19:48:58.014432 | orchestrator | Sunday 22 June 2025 19:48:57 +0000 (0:00:00.143) 0:00:43.686 *********** 2025-06-22 19:48:58.394491 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-420ac1c2-ff56-5c56-8dd6-abe068aa03ad', 'data_vg': 'ceph-420ac1c2-ff56-5c56-8dd6-abe068aa03ad'})  2025-06-22 19:48:58.395670 | orchestrator | skipping: [testbed-node-4] => (item={'data': 
'osd-block-21b37dc5-48e7-5a6c-9835-121dab35d047', 'data_vg': 'ceph-21b37dc5-48e7-5a6c-9835-121dab35d047'})  2025-06-22 19:48:58.396619 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:48:58.397366 | orchestrator | 2025-06-22 19:48:58.398227 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-06-22 19:48:58.398956 | orchestrator | Sunday 22 June 2025 19:48:58 +0000 (0:00:00.389) 0:00:44.076 *********** 2025-06-22 19:48:58.579062 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-420ac1c2-ff56-5c56-8dd6-abe068aa03ad', 'data_vg': 'ceph-420ac1c2-ff56-5c56-8dd6-abe068aa03ad'})  2025-06-22 19:48:58.580029 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-21b37dc5-48e7-5a6c-9835-121dab35d047', 'data_vg': 'ceph-21b37dc5-48e7-5a6c-9835-121dab35d047'})  2025-06-22 19:48:58.580893 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:48:58.581613 | orchestrator | 2025-06-22 19:48:58.582601 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-06-22 19:48:58.583392 | orchestrator | Sunday 22 June 2025 19:48:58 +0000 (0:00:00.185) 0:00:44.261 *********** 2025-06-22 19:48:58.743925 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-420ac1c2-ff56-5c56-8dd6-abe068aa03ad', 'data_vg': 'ceph-420ac1c2-ff56-5c56-8dd6-abe068aa03ad'})  2025-06-22 19:48:58.744451 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-21b37dc5-48e7-5a6c-9835-121dab35d047', 'data_vg': 'ceph-21b37dc5-48e7-5a6c-9835-121dab35d047'})  2025-06-22 19:48:58.745310 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:48:58.746354 | orchestrator | 2025-06-22 19:48:58.747048 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-06-22 19:48:58.747725 | orchestrator | Sunday 22 June 2025 19:48:58 +0000 (0:00:00.161) 0:00:44.423 *********** 2025-06-22 19:48:58.909462 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-420ac1c2-ff56-5c56-8dd6-abe068aa03ad', 'data_vg': 'ceph-420ac1c2-ff56-5c56-8dd6-abe068aa03ad'})  2025-06-22 19:48:58.909929 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-21b37dc5-48e7-5a6c-9835-121dab35d047', 'data_vg': 'ceph-21b37dc5-48e7-5a6c-9835-121dab35d047'})  2025-06-22 19:48:58.911417 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:48:58.912802 | orchestrator | 2025-06-22 19:48:58.913426 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-06-22 19:48:58.914342 | orchestrator | Sunday 22 June 2025 19:48:58 +0000 (0:00:00.165) 0:00:44.588 *********** 2025-06-22 19:48:59.068671 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-420ac1c2-ff56-5c56-8dd6-abe068aa03ad', 'data_vg': 'ceph-420ac1c2-ff56-5c56-8dd6-abe068aa03ad'})  2025-06-22 19:48:59.068816 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-21b37dc5-48e7-5a6c-9835-121dab35d047', 'data_vg': 'ceph-21b37dc5-48e7-5a6c-9835-121dab35d047'})  2025-06-22 19:48:59.070179 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:48:59.070991 | orchestrator | 2025-06-22 19:48:59.071954 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-06-22 19:48:59.072447 | orchestrator | Sunday 22 June 2025 19:48:59 +0000 (0:00:00.156) 0:00:44.745 *********** 2025-06-22 19:48:59.608807 | orchestrator | ok: [testbed-node-4] 2025-06-22 
19:48:59.609609 | orchestrator | 2025-06-22 19:48:59.609996 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-06-22 19:48:59.611160 | orchestrator | Sunday 22 June 2025 19:48:59 +0000 (0:00:00.544) 0:00:45.290 *********** 2025-06-22 19:49:00.178872 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:49:00.179064 | orchestrator | 2025-06-22 19:49:00.179820 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-06-22 19:49:00.180064 | orchestrator | Sunday 22 June 2025 19:49:00 +0000 (0:00:00.568) 0:00:45.858 *********** 2025-06-22 19:49:00.329180 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:49:00.329285 | orchestrator | 2025-06-22 19:49:00.329310 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-06-22 19:49:00.329760 | orchestrator | Sunday 22 June 2025 19:49:00 +0000 (0:00:00.152) 0:00:46.011 *********** 2025-06-22 19:49:00.516886 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-21b37dc5-48e7-5a6c-9835-121dab35d047', 'vg_name': 'ceph-21b37dc5-48e7-5a6c-9835-121dab35d047'}) 2025-06-22 19:49:00.518011 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-420ac1c2-ff56-5c56-8dd6-abe068aa03ad', 'vg_name': 'ceph-420ac1c2-ff56-5c56-8dd6-abe068aa03ad'}) 2025-06-22 19:49:00.518912 | orchestrator | 2025-06-22 19:49:00.520404 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-06-22 19:49:00.521221 | orchestrator | Sunday 22 June 2025 19:49:00 +0000 (0:00:00.186) 0:00:46.197 *********** 2025-06-22 19:49:00.689995 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-420ac1c2-ff56-5c56-8dd6-abe068aa03ad', 'data_vg': 'ceph-420ac1c2-ff56-5c56-8dd6-abe068aa03ad'})  2025-06-22 19:49:00.691075 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-21b37dc5-48e7-5a6c-9835-121dab35d047', 'data_vg': 'ceph-21b37dc5-48e7-5a6c-9835-121dab35d047'})  2025-06-22 19:49:00.693437 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:49:00.694501 | orchestrator | 2025-06-22 19:49:00.695338 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-06-22 19:49:00.695964 | orchestrator | Sunday 22 June 2025 19:49:00 +0000 (0:00:00.173) 0:00:46.371 *********** 2025-06-22 19:49:00.861940 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-420ac1c2-ff56-5c56-8dd6-abe068aa03ad', 'data_vg': 'ceph-420ac1c2-ff56-5c56-8dd6-abe068aa03ad'})  2025-06-22 19:49:00.862782 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-21b37dc5-48e7-5a6c-9835-121dab35d047', 'data_vg': 'ceph-21b37dc5-48e7-5a6c-9835-121dab35d047'})  2025-06-22 19:49:00.863464 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:49:00.865743 | orchestrator | 2025-06-22 19:49:00.865777 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-06-22 19:49:00.865792 | orchestrator | Sunday 22 June 2025 19:49:00 +0000 (0:00:00.172) 0:00:46.543 *********** 2025-06-22 19:49:01.020096 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-420ac1c2-ff56-5c56-8dd6-abe068aa03ad', 'data_vg': 'ceph-420ac1c2-ff56-5c56-8dd6-abe068aa03ad'})  2025-06-22 19:49:01.020318 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-21b37dc5-48e7-5a6c-9835-121dab35d047', 'data_vg': 'ceph-21b37dc5-48e7-5a6c-9835-121dab35d047'})  
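The three "Fail if ... LV defined in lvm_volumes is missing" tasks act as a post-check: every entry of lvm_volumes must resolve to a VG/LV pair that actually exists in the collected LVM report, and they are skipped here because both OSD block LVs were just created. A minimal sketch of that validation, assuming a helper list _vg_lv_names built from the report; the real task logic and variable names may differ:

- name: Create list of VG/LV names (sketch)
  ansible.builtin.set_fact:
    _vg_lv_names: "{{ _vg_lv_names | default([]) + [item.vg_name ~ '/' ~ item.lv_name] }}"
  loop: "{{ lvm_report.lv }}"

- name: Fail if block LV defined in lvm_volumes is missing (sketch)
  ansible.builtin.fail:
    msg: "Block LV {{ item.data_vg }}/{{ item.data }} is missing"
  loop: "{{ lvm_volumes }}"
  when: (item.data_vg ~ '/' ~ item.data) not in _vg_lv_names  # false (skipped) when the LV exists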
2025-06-22 19:49:01.022253 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:49:01.023523 | orchestrator | 2025-06-22 19:49:01.024847 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-06-22 19:49:01.026350 | orchestrator | Sunday 22 June 2025 19:49:01 +0000 (0:00:00.157) 0:00:46.701 *********** 2025-06-22 19:49:01.522558 | orchestrator | ok: [testbed-node-4] => { 2025-06-22 19:49:01.522780 | orchestrator |  "lvm_report": { 2025-06-22 19:49:01.524614 | orchestrator |  "lv": [ 2025-06-22 19:49:01.530931 | orchestrator |  { 2025-06-22 19:49:01.531004 | orchestrator |  "lv_name": "osd-block-21b37dc5-48e7-5a6c-9835-121dab35d047", 2025-06-22 19:49:01.531027 | orchestrator |  "vg_name": "ceph-21b37dc5-48e7-5a6c-9835-121dab35d047" 2025-06-22 19:49:01.531048 | orchestrator |  }, 2025-06-22 19:49:01.531068 | orchestrator |  { 2025-06-22 19:49:01.531086 | orchestrator |  "lv_name": "osd-block-420ac1c2-ff56-5c56-8dd6-abe068aa03ad", 2025-06-22 19:49:01.531106 | orchestrator |  "vg_name": "ceph-420ac1c2-ff56-5c56-8dd6-abe068aa03ad" 2025-06-22 19:49:01.531245 | orchestrator |  } 2025-06-22 19:49:01.533250 | orchestrator |  ], 2025-06-22 19:49:01.536789 | orchestrator |  "pv": [ 2025-06-22 19:49:01.538081 | orchestrator |  { 2025-06-22 19:49:01.538975 | orchestrator |  "pv_name": "/dev/sdb", 2025-06-22 19:49:01.540026 | orchestrator |  "vg_name": "ceph-420ac1c2-ff56-5c56-8dd6-abe068aa03ad" 2025-06-22 19:49:01.540501 | orchestrator |  }, 2025-06-22 19:49:01.541521 | orchestrator |  { 2025-06-22 19:49:01.541970 | orchestrator |  "pv_name": "/dev/sdc", 2025-06-22 19:49:01.542880 | orchestrator |  "vg_name": "ceph-21b37dc5-48e7-5a6c-9835-121dab35d047" 2025-06-22 19:49:01.543701 | orchestrator |  } 2025-06-22 19:49:01.544448 | orchestrator |  ] 2025-06-22 19:49:01.545266 | orchestrator |  } 2025-06-22 19:49:01.546071 | orchestrator | } 2025-06-22 19:49:01.546544 | orchestrator | 2025-06-22 19:49:01.547357 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-06-22 19:49:01.548238 | orchestrator | 2025-06-22 19:49:01.548894 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-06-22 19:49:01.549521 | orchestrator | Sunday 22 June 2025 19:49:01 +0000 (0:00:00.501) 0:00:47.202 *********** 2025-06-22 19:49:01.764121 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-06-22 19:49:01.764427 | orchestrator | 2025-06-22 19:49:01.765449 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-06-22 19:49:01.766449 | orchestrator | Sunday 22 June 2025 19:49:01 +0000 (0:00:00.241) 0:00:47.444 *********** 2025-06-22 19:49:02.055740 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:49:02.055892 | orchestrator | 2025-06-22 19:49:02.056856 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:49:02.057889 | orchestrator | Sunday 22 June 2025 19:49:02 +0000 (0:00:00.290) 0:00:47.735 *********** 2025-06-22 19:49:02.560942 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-06-22 19:49:02.561877 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-06-22 19:49:02.563403 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-06-22 19:49:02.564370 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-06-22 19:49:02.565367 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-06-22 19:49:02.566474 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-06-22 19:49:02.567517 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-06-22 19:49:02.568298 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-06-22 19:49:02.569372 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-06-22 19:49:02.570162 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-06-22 19:49:02.570798 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-06-22 19:49:02.571328 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-06-22 19:49:02.572612 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-06-22 19:49:02.573620 | orchestrator | 2025-06-22 19:49:02.574176 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:49:02.574840 | orchestrator | Sunday 22 June 2025 19:49:02 +0000 (0:00:00.506) 0:00:48.241 *********** 2025-06-22 19:49:02.769737 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:49:02.770872 | orchestrator | 2025-06-22 19:49:02.771376 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:49:02.772720 | orchestrator | Sunday 22 June 2025 19:49:02 +0000 (0:00:00.209) 0:00:48.451 *********** 2025-06-22 19:49:02.968541 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:49:02.969216 | orchestrator | 2025-06-22 19:49:02.970245 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:49:02.971194 | orchestrator | Sunday 22 June 2025 19:49:02 +0000 (0:00:00.198) 0:00:48.650 *********** 2025-06-22 19:49:03.167028 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:49:03.167328 | orchestrator | 2025-06-22 19:49:03.168817 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:49:03.168986 | orchestrator | Sunday 22 June 2025 19:49:03 +0000 (0:00:00.197) 0:00:48.848 *********** 2025-06-22 19:49:03.361979 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:49:03.362518 | orchestrator | 2025-06-22 19:49:03.363614 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:49:03.363971 | orchestrator | Sunday 22 June 2025 19:49:03 +0000 (0:00:00.195) 0:00:49.043 *********** 2025-06-22 19:49:03.547212 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:49:03.548088 | orchestrator | 2025-06-22 19:49:03.549043 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:49:03.550588 | orchestrator | Sunday 22 June 2025 19:49:03 +0000 (0:00:00.185) 0:00:49.229 *********** 2025-06-22 19:49:04.204475 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:49:04.206440 | orchestrator | 2025-06-22 19:49:04.206485 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:49:04.207440 | orchestrator | 
Sunday 22 June 2025 19:49:04 +0000 (0:00:00.654) 0:00:49.884 *********** 2025-06-22 19:49:04.454755 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:49:04.455462 | orchestrator | 2025-06-22 19:49:04.456336 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:49:04.457379 | orchestrator | Sunday 22 June 2025 19:49:04 +0000 (0:00:00.251) 0:00:50.135 *********** 2025-06-22 19:49:04.661064 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:49:04.662302 | orchestrator | 2025-06-22 19:49:04.663338 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:49:04.664808 | orchestrator | Sunday 22 June 2025 19:49:04 +0000 (0:00:00.206) 0:00:50.342 *********** 2025-06-22 19:49:05.073357 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_abc8cf8e-645f-44ba-8ef9-2fedd7dd22d1) 2025-06-22 19:49:05.073471 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_abc8cf8e-645f-44ba-8ef9-2fedd7dd22d1) 2025-06-22 19:49:05.073886 | orchestrator | 2025-06-22 19:49:05.074933 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:49:05.075845 | orchestrator | Sunday 22 June 2025 19:49:05 +0000 (0:00:00.410) 0:00:50.753 *********** 2025-06-22 19:49:05.532557 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_b25991b3-37fd-407a-b13b-d136271ca727) 2025-06-22 19:49:05.537321 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_b25991b3-37fd-407a-b13b-d136271ca727) 2025-06-22 19:49:05.537930 | orchestrator | 2025-06-22 19:49:05.538230 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:49:05.538922 | orchestrator | Sunday 22 June 2025 19:49:05 +0000 (0:00:00.460) 0:00:51.214 *********** 2025-06-22 19:49:05.979575 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_71e43d47-057b-4609-853f-9ccf72c5a295) 2025-06-22 19:49:05.980546 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_71e43d47-057b-4609-853f-9ccf72c5a295) 2025-06-22 19:49:05.981623 | orchestrator | 2025-06-22 19:49:05.982643 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:49:05.983429 | orchestrator | Sunday 22 June 2025 19:49:05 +0000 (0:00:00.446) 0:00:51.660 *********** 2025-06-22 19:49:06.421572 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_61868cbd-84da-463e-9017-284301fda41a) 2025-06-22 19:49:06.421964 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_61868cbd-84da-463e-9017-284301fda41a) 2025-06-22 19:49:06.424491 | orchestrator | 2025-06-22 19:49:06.425439 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-06-22 19:49:06.425950 | orchestrator | Sunday 22 June 2025 19:49:06 +0000 (0:00:00.441) 0:00:52.101 *********** 2025-06-22 19:49:06.763129 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-06-22 19:49:06.764377 | orchestrator | 2025-06-22 19:49:06.764710 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:49:06.766682 | orchestrator | Sunday 22 June 2025 19:49:06 +0000 (0:00:00.342) 0:00:52.444 *********** 2025-06-22 19:49:07.155330 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-06-22 
19:49:07.156346 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-06-22 19:49:07.157546 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-06-22 19:49:07.158508 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-06-22 19:49:07.159225 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-06-22 19:49:07.160837 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-06-22 19:49:07.161527 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-06-22 19:49:07.163036 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-06-22 19:49:07.163584 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-06-22 19:49:07.166401 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-06-22 19:49:07.167479 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-06-22 19:49:07.167509 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-06-22 19:49:07.167970 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-06-22 19:49:07.168226 | orchestrator | 2025-06-22 19:49:07.168731 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:49:07.169109 | orchestrator | Sunday 22 June 2025 19:49:07 +0000 (0:00:00.391) 0:00:52.835 *********** 2025-06-22 19:49:07.347243 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:49:07.347908 | orchestrator | 2025-06-22 19:49:07.349048 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:49:07.350208 | orchestrator | Sunday 22 June 2025 19:49:07 +0000 (0:00:00.192) 0:00:53.028 *********** 2025-06-22 19:49:07.553456 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:49:07.554103 | orchestrator | 2025-06-22 19:49:07.555193 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:49:07.555949 | orchestrator | Sunday 22 June 2025 19:49:07 +0000 (0:00:00.206) 0:00:53.235 *********** 2025-06-22 19:49:08.177662 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:49:08.177766 | orchestrator | 2025-06-22 19:49:08.179437 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:49:08.180424 | orchestrator | Sunday 22 June 2025 19:49:08 +0000 (0:00:00.621) 0:00:53.857 *********** 2025-06-22 19:49:08.374729 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:49:08.374886 | orchestrator | 2025-06-22 19:49:08.375839 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:49:08.376281 | orchestrator | Sunday 22 June 2025 19:49:08 +0000 (0:00:00.198) 0:00:54.056 *********** 2025-06-22 19:49:08.571965 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:49:08.572061 | orchestrator | 2025-06-22 19:49:08.573238 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:49:08.573757 | orchestrator | 
Sunday 22 June 2025 19:49:08 +0000 (0:00:00.197) 0:00:54.253 *********** 2025-06-22 19:49:08.779060 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:49:08.779334 | orchestrator | 2025-06-22 19:49:08.781359 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:49:08.782496 | orchestrator | Sunday 22 June 2025 19:49:08 +0000 (0:00:00.206) 0:00:54.460 *********** 2025-06-22 19:49:08.998729 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:49:08.999182 | orchestrator | 2025-06-22 19:49:09.000337 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:49:09.001732 | orchestrator | Sunday 22 June 2025 19:49:08 +0000 (0:00:00.220) 0:00:54.680 *********** 2025-06-22 19:49:09.228424 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:49:09.229573 | orchestrator | 2025-06-22 19:49:09.230673 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:49:09.231722 | orchestrator | Sunday 22 June 2025 19:49:09 +0000 (0:00:00.228) 0:00:54.909 *********** 2025-06-22 19:49:09.841026 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-06-22 19:49:09.841125 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-06-22 19:49:09.841643 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-06-22 19:49:09.842685 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-06-22 19:49:09.843631 | orchestrator | 2025-06-22 19:49:09.844371 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:49:09.844829 | orchestrator | Sunday 22 June 2025 19:49:09 +0000 (0:00:00.613) 0:00:55.522 *********** 2025-06-22 19:49:10.037910 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:49:10.038244 | orchestrator | 2025-06-22 19:49:10.039450 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:49:10.039992 | orchestrator | Sunday 22 June 2025 19:49:10 +0000 (0:00:00.197) 0:00:55.719 *********** 2025-06-22 19:49:10.247824 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:49:10.248880 | orchestrator | 2025-06-22 19:49:10.250635 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:49:10.251582 | orchestrator | Sunday 22 June 2025 19:49:10 +0000 (0:00:00.207) 0:00:55.927 *********** 2025-06-22 19:49:10.446457 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:49:10.446923 | orchestrator | 2025-06-22 19:49:10.449001 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-06-22 19:49:10.450527 | orchestrator | Sunday 22 June 2025 19:49:10 +0000 (0:00:00.200) 0:00:56.128 *********** 2025-06-22 19:49:10.652599 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:49:10.652820 | orchestrator | 2025-06-22 19:49:10.654745 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-06-22 19:49:10.655632 | orchestrator | Sunday 22 June 2025 19:49:10 +0000 (0:00:00.205) 0:00:56.334 *********** 2025-06-22 19:49:11.002572 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:49:11.003600 | orchestrator | 2025-06-22 19:49:11.005740 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-06-22 19:49:11.006446 | orchestrator | Sunday 22 June 2025 19:49:10 +0000 (0:00:00.349) 0:00:56.683 *********** 2025-06-22 
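The task heading just above (its per-item results follow below) derives one volume group and one logical volume per OSD from ceph_osd_devices: the VG is named ceph-<osd_lvm_uuid> and sits on the raw device, and the block LV inside it is named osd-block-<osd_lvm_uuid>. A short sketch of that naming scheme using the UUIDs from this run; the vgcreate/lvcreate commands and the 100%FREE sizing are assumptions for illustration only, the actual creation happens in the 'Create block VGs' and 'Create block LVs' tasks below:

ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "3108d6cc-64da-58c4-8e22-262ec3caa421"},
    "sdc": {"osd_lvm_uuid": "39fb6ae0-c3e6-59b9-8b54-9251bb7c5136"},
}

# VG name -> physical volume, as in "Create dict of block VGs -> PVs"
block_vgs_to_pvs = {
    f"ceph-{v['osd_lvm_uuid']}": f"/dev/{dev}"
    for dev, v in ceph_osd_devices.items()
}

# Entries shaped like the loop items shown in the tasks below
lvm_volumes = [
    {"data": f"osd-block-{v['osd_lvm_uuid']}",
     "data_vg": f"ceph-{v['osd_lvm_uuid']}"}
    for v in ceph_osd_devices.values()
]

for vg, pv in block_vgs_to_pvs.items():
    print(f"vgcreate {vg} {pv}")                                      # illustrative only
for vol in lvm_volumes:
    print(f"lvcreate -l 100%FREE -n {vol['data']} {vol['data_vg']}")  # sizing flag assumed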
19:49:11.188536 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3108d6cc-64da-58c4-8e22-262ec3caa421'}}) 2025-06-22 19:49:11.189690 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '39fb6ae0-c3e6-59b9-8b54-9251bb7c5136'}}) 2025-06-22 19:49:11.190554 | orchestrator | 2025-06-22 19:49:11.191668 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-06-22 19:49:11.192537 | orchestrator | Sunday 22 June 2025 19:49:11 +0000 (0:00:00.185) 0:00:56.868 *********** 2025-06-22 19:49:13.074436 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-3108d6cc-64da-58c4-8e22-262ec3caa421', 'data_vg': 'ceph-3108d6cc-64da-58c4-8e22-262ec3caa421'}) 2025-06-22 19:49:13.074581 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-39fb6ae0-c3e6-59b9-8b54-9251bb7c5136', 'data_vg': 'ceph-39fb6ae0-c3e6-59b9-8b54-9251bb7c5136'}) 2025-06-22 19:49:13.075582 | orchestrator | 2025-06-22 19:49:13.076671 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-06-22 19:49:13.077800 | orchestrator | Sunday 22 June 2025 19:49:13 +0000 (0:00:01.886) 0:00:58.755 *********** 2025-06-22 19:49:13.238008 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3108d6cc-64da-58c4-8e22-262ec3caa421', 'data_vg': 'ceph-3108d6cc-64da-58c4-8e22-262ec3caa421'})  2025-06-22 19:49:13.238515 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-39fb6ae0-c3e6-59b9-8b54-9251bb7c5136', 'data_vg': 'ceph-39fb6ae0-c3e6-59b9-8b54-9251bb7c5136'})  2025-06-22 19:49:13.239910 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:49:13.241246 | orchestrator | 2025-06-22 19:49:13.244714 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-06-22 19:49:13.245955 | orchestrator | Sunday 22 June 2025 19:49:13 +0000 (0:00:00.164) 0:00:58.919 *********** 2025-06-22 19:49:14.692684 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-3108d6cc-64da-58c4-8e22-262ec3caa421', 'data_vg': 'ceph-3108d6cc-64da-58c4-8e22-262ec3caa421'}) 2025-06-22 19:49:14.693866 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-39fb6ae0-c3e6-59b9-8b54-9251bb7c5136', 'data_vg': 'ceph-39fb6ae0-c3e6-59b9-8b54-9251bb7c5136'}) 2025-06-22 19:49:14.696115 | orchestrator | 2025-06-22 19:49:14.696204 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-06-22 19:49:14.696773 | orchestrator | Sunday 22 June 2025 19:49:14 +0000 (0:00:01.453) 0:01:00.373 *********** 2025-06-22 19:49:14.841260 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3108d6cc-64da-58c4-8e22-262ec3caa421', 'data_vg': 'ceph-3108d6cc-64da-58c4-8e22-262ec3caa421'})  2025-06-22 19:49:14.841897 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-39fb6ae0-c3e6-59b9-8b54-9251bb7c5136', 'data_vg': 'ceph-39fb6ae0-c3e6-59b9-8b54-9251bb7c5136'})  2025-06-22 19:49:14.842829 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:49:14.844937 | orchestrator | 2025-06-22 19:49:14.844998 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-06-22 19:49:14.846077 | orchestrator | Sunday 22 June 2025 19:49:14 +0000 (0:00:00.149) 0:01:00.522 *********** 2025-06-22 19:49:14.987473 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:49:14.988113 | 
orchestrator | 2025-06-22 19:49:14.990363 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-06-22 19:49:14.990880 | orchestrator | Sunday 22 June 2025 19:49:14 +0000 (0:00:00.145) 0:01:00.668 *********** 2025-06-22 19:49:15.142761 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3108d6cc-64da-58c4-8e22-262ec3caa421', 'data_vg': 'ceph-3108d6cc-64da-58c4-8e22-262ec3caa421'})  2025-06-22 19:49:15.143277 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-39fb6ae0-c3e6-59b9-8b54-9251bb7c5136', 'data_vg': 'ceph-39fb6ae0-c3e6-59b9-8b54-9251bb7c5136'})  2025-06-22 19:49:15.144208 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:49:15.144934 | orchestrator | 2025-06-22 19:49:15.145960 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-06-22 19:49:15.146496 | orchestrator | Sunday 22 June 2025 19:49:15 +0000 (0:00:00.156) 0:01:00.825 *********** 2025-06-22 19:49:15.299340 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:49:15.300615 | orchestrator | 2025-06-22 19:49:15.301344 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-06-22 19:49:15.302827 | orchestrator | Sunday 22 June 2025 19:49:15 +0000 (0:00:00.154) 0:01:00.979 *********** 2025-06-22 19:49:15.446321 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3108d6cc-64da-58c4-8e22-262ec3caa421', 'data_vg': 'ceph-3108d6cc-64da-58c4-8e22-262ec3caa421'})  2025-06-22 19:49:15.446674 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-39fb6ae0-c3e6-59b9-8b54-9251bb7c5136', 'data_vg': 'ceph-39fb6ae0-c3e6-59b9-8b54-9251bb7c5136'})  2025-06-22 19:49:15.447904 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:49:15.449256 | orchestrator | 2025-06-22 19:49:15.449980 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-06-22 19:49:15.450727 | orchestrator | Sunday 22 June 2025 19:49:15 +0000 (0:00:00.148) 0:01:01.128 *********** 2025-06-22 19:49:15.584365 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:49:15.584778 | orchestrator | 2025-06-22 19:49:15.585652 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-06-22 19:49:15.586776 | orchestrator | Sunday 22 June 2025 19:49:15 +0000 (0:00:00.137) 0:01:01.265 *********** 2025-06-22 19:49:15.737319 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3108d6cc-64da-58c4-8e22-262ec3caa421', 'data_vg': 'ceph-3108d6cc-64da-58c4-8e22-262ec3caa421'})  2025-06-22 19:49:15.737505 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-39fb6ae0-c3e6-59b9-8b54-9251bb7c5136', 'data_vg': 'ceph-39fb6ae0-c3e6-59b9-8b54-9251bb7c5136'})  2025-06-22 19:49:15.738351 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:49:15.739182 | orchestrator | 2025-06-22 19:49:15.739920 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-06-22 19:49:15.740664 | orchestrator | Sunday 22 June 2025 19:49:15 +0000 (0:00:00.153) 0:01:01.419 *********** 2025-06-22 19:49:15.885482 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:49:15.885864 | orchestrator | 2025-06-22 19:49:15.887230 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-06-22 19:49:15.888444 | orchestrator | Sunday 22 June 2025 19:49:15 +0000 (0:00:00.147) 
0:01:01.566 *********** 2025-06-22 19:49:16.232808 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3108d6cc-64da-58c4-8e22-262ec3caa421', 'data_vg': 'ceph-3108d6cc-64da-58c4-8e22-262ec3caa421'})  2025-06-22 19:49:16.233606 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-39fb6ae0-c3e6-59b9-8b54-9251bb7c5136', 'data_vg': 'ceph-39fb6ae0-c3e6-59b9-8b54-9251bb7c5136'})  2025-06-22 19:49:16.234389 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:49:16.235735 | orchestrator | 2025-06-22 19:49:16.236873 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-06-22 19:49:16.237602 | orchestrator | Sunday 22 June 2025 19:49:16 +0000 (0:00:00.347) 0:01:01.914 *********** 2025-06-22 19:49:16.452069 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3108d6cc-64da-58c4-8e22-262ec3caa421', 'data_vg': 'ceph-3108d6cc-64da-58c4-8e22-262ec3caa421'})  2025-06-22 19:49:16.453535 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-39fb6ae0-c3e6-59b9-8b54-9251bb7c5136', 'data_vg': 'ceph-39fb6ae0-c3e6-59b9-8b54-9251bb7c5136'})  2025-06-22 19:49:16.454730 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:49:16.456601 | orchestrator | 2025-06-22 19:49:16.457889 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-06-22 19:49:16.458877 | orchestrator | Sunday 22 June 2025 19:49:16 +0000 (0:00:00.218) 0:01:02.132 *********** 2025-06-22 19:49:16.620864 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3108d6cc-64da-58c4-8e22-262ec3caa421', 'data_vg': 'ceph-3108d6cc-64da-58c4-8e22-262ec3caa421'})  2025-06-22 19:49:16.623264 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-39fb6ae0-c3e6-59b9-8b54-9251bb7c5136', 'data_vg': 'ceph-39fb6ae0-c3e6-59b9-8b54-9251bb7c5136'})  2025-06-22 19:49:16.624117 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:49:16.625387 | orchestrator | 2025-06-22 19:49:16.625912 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-06-22 19:49:16.626687 | orchestrator | Sunday 22 June 2025 19:49:16 +0000 (0:00:00.169) 0:01:02.301 *********** 2025-06-22 19:49:16.773658 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:49:16.775219 | orchestrator | 2025-06-22 19:49:16.775895 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-06-22 19:49:16.777644 | orchestrator | Sunday 22 June 2025 19:49:16 +0000 (0:00:00.153) 0:01:02.455 *********** 2025-06-22 19:49:16.900850 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:49:16.900936 | orchestrator | 2025-06-22 19:49:16.901977 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-06-22 19:49:16.903170 | orchestrator | Sunday 22 June 2025 19:49:16 +0000 (0:00:00.126) 0:01:02.581 *********** 2025-06-22 19:49:17.039609 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:49:17.039849 | orchestrator | 2025-06-22 19:49:17.042113 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-06-22 19:49:17.043046 | orchestrator | Sunday 22 June 2025 19:49:17 +0000 (0:00:00.138) 0:01:02.720 *********** 2025-06-22 19:49:17.201145 | orchestrator | ok: [testbed-node-5] => { 2025-06-22 19:49:17.201359 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-06-22 19:49:17.202954 | orchestrator | } 2025-06-22 
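The counting tasks above walk lvm_volumes and tally how many OSDs want a DB or WAL LV on each dedicated VG; since this testbed configures no ceph_db_devices or ceph_wal_devices, the dict printed above is empty, and the corresponding WAL and DB+WAL dicts printed below are empty as well. A sketch of that tally under the same assumption (variable names are illustrative):

from collections import Counter

# lvm_volumes as derived earlier for this node; neither entry sets db_vg or wal_vg
lvm_volumes = [
    {"data": "osd-block-3108d6cc-64da-58c4-8e22-262ec3caa421",
     "data_vg": "ceph-3108d6cc-64da-58c4-8e22-262ec3caa421"},
    {"data": "osd-block-39fb6ae0-c3e6-59b9-8b54-9251bb7c5136",
     "data_vg": "ceph-39fb6ae0-c3e6-59b9-8b54-9251bb7c5136"},
]

num_osds_wanted_per_db_vg = Counter(v["db_vg"] for v in lvm_volumes if "db_vg" in v)
num_osds_wanted_per_wal_vg = Counter(v["wal_vg"] for v in lvm_volumes if "wal_vg" in v)

print(dict(num_osds_wanted_per_db_vg))   # {} -- matches "_num_osds_wanted_per_db_vg" above
print(dict(num_osds_wanted_per_wal_vg))  # {}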
19:49:17.203440 | orchestrator | 2025-06-22 19:49:17.205308 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-06-22 19:49:17.205898 | orchestrator | Sunday 22 June 2025 19:49:17 +0000 (0:00:00.161) 0:01:02.882 *********** 2025-06-22 19:49:17.358613 | orchestrator | ok: [testbed-node-5] => { 2025-06-22 19:49:17.358727 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-06-22 19:49:17.358900 | orchestrator | } 2025-06-22 19:49:17.359747 | orchestrator | 2025-06-22 19:49:17.360108 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-06-22 19:49:17.360982 | orchestrator | Sunday 22 June 2025 19:49:17 +0000 (0:00:00.155) 0:01:03.038 *********** 2025-06-22 19:49:17.513407 | orchestrator | ok: [testbed-node-5] => { 2025-06-22 19:49:17.514218 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-06-22 19:49:17.515350 | orchestrator | } 2025-06-22 19:49:17.515855 | orchestrator | 2025-06-22 19:49:17.516557 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-06-22 19:49:17.518456 | orchestrator | Sunday 22 June 2025 19:49:17 +0000 (0:00:00.156) 0:01:03.194 *********** 2025-06-22 19:49:18.065940 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:49:18.066286 | orchestrator | 2025-06-22 19:49:18.067725 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-06-22 19:49:18.068958 | orchestrator | Sunday 22 June 2025 19:49:18 +0000 (0:00:00.551) 0:01:03.746 *********** 2025-06-22 19:49:18.590366 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:49:18.591105 | orchestrator | 2025-06-22 19:49:18.592199 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-06-22 19:49:18.592466 | orchestrator | Sunday 22 June 2025 19:49:18 +0000 (0:00:00.523) 0:01:04.269 *********** 2025-06-22 19:49:19.119251 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:49:19.120400 | orchestrator | 2025-06-22 19:49:19.121774 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-06-22 19:49:19.123408 | orchestrator | Sunday 22 June 2025 19:49:19 +0000 (0:00:00.529) 0:01:04.799 *********** 2025-06-22 19:49:19.482067 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:49:19.483100 | orchestrator | 2025-06-22 19:49:19.484680 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-06-22 19:49:19.485353 | orchestrator | Sunday 22 June 2025 19:49:19 +0000 (0:00:00.360) 0:01:05.160 *********** 2025-06-22 19:49:19.606743 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:49:19.607800 | orchestrator | 2025-06-22 19:49:19.607883 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-06-22 19:49:19.609013 | orchestrator | Sunday 22 June 2025 19:49:19 +0000 (0:00:00.127) 0:01:05.287 *********** 2025-06-22 19:49:19.730706 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:49:19.731567 | orchestrator | 2025-06-22 19:49:19.732387 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-06-22 19:49:19.733413 | orchestrator | Sunday 22 June 2025 19:49:19 +0000 (0:00:00.124) 0:01:05.412 *********** 2025-06-22 19:49:19.901212 | orchestrator | ok: [testbed-node-5] => { 2025-06-22 19:49:19.901716 | orchestrator |  "vgs_report": { 2025-06-22 19:49:19.903091 | orchestrator |  "vg": [] 2025-06-22 
19:49:19.904370 | orchestrator |  } 2025-06-22 19:49:19.905874 | orchestrator | } 2025-06-22 19:49:19.906823 | orchestrator | 2025-06-22 19:49:19.907721 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-06-22 19:49:19.908282 | orchestrator | Sunday 22 June 2025 19:49:19 +0000 (0:00:00.169) 0:01:05.581 *********** 2025-06-22 19:49:20.051069 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:49:20.051249 | orchestrator | 2025-06-22 19:49:20.051276 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-06-22 19:49:20.052657 | orchestrator | Sunday 22 June 2025 19:49:20 +0000 (0:00:00.149) 0:01:05.731 *********** 2025-06-22 19:49:20.215666 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:49:20.216121 | orchestrator | 2025-06-22 19:49:20.216266 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-06-22 19:49:20.217099 | orchestrator | Sunday 22 June 2025 19:49:20 +0000 (0:00:00.165) 0:01:05.896 *********** 2025-06-22 19:49:20.375229 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:49:20.376348 | orchestrator | 2025-06-22 19:49:20.377927 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-06-22 19:49:20.379278 | orchestrator | Sunday 22 June 2025 19:49:20 +0000 (0:00:00.159) 0:01:06.055 *********** 2025-06-22 19:49:20.533652 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:49:20.534778 | orchestrator | 2025-06-22 19:49:20.537003 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-06-22 19:49:20.538268 | orchestrator | Sunday 22 June 2025 19:49:20 +0000 (0:00:00.158) 0:01:06.214 *********** 2025-06-22 19:49:20.679631 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:49:20.680124 | orchestrator | 2025-06-22 19:49:20.680913 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-06-22 19:49:20.681687 | orchestrator | Sunday 22 June 2025 19:49:20 +0000 (0:00:00.145) 0:01:06.359 *********** 2025-06-22 19:49:20.817578 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:49:20.817842 | orchestrator | 2025-06-22 19:49:20.822725 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-06-22 19:49:20.823376 | orchestrator | Sunday 22 June 2025 19:49:20 +0000 (0:00:00.136) 0:01:06.496 *********** 2025-06-22 19:49:20.946703 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:49:20.947762 | orchestrator | 2025-06-22 19:49:20.948592 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-06-22 19:49:20.950228 | orchestrator | Sunday 22 June 2025 19:49:20 +0000 (0:00:00.131) 0:01:06.627 *********** 2025-06-22 19:49:21.128950 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:49:21.130295 | orchestrator | 2025-06-22 19:49:21.131037 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-06-22 19:49:21.132818 | orchestrator | Sunday 22 June 2025 19:49:21 +0000 (0:00:00.181) 0:01:06.809 *********** 2025-06-22 19:49:21.481693 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:49:21.482195 | orchestrator | 2025-06-22 19:49:21.484784 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-06-22 19:49:21.485372 | orchestrator | Sunday 22 June 2025 19:49:21 +0000 
(0:00:00.351) 0:01:07.160 *********** 2025-06-22 19:49:21.646469 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:49:21.647711 | orchestrator | 2025-06-22 19:49:21.648552 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-06-22 19:49:21.649876 | orchestrator | Sunday 22 June 2025 19:49:21 +0000 (0:00:00.165) 0:01:07.326 *********** 2025-06-22 19:49:21.785412 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:49:21.787048 | orchestrator | 2025-06-22 19:49:21.788014 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-06-22 19:49:21.788995 | orchestrator | Sunday 22 June 2025 19:49:21 +0000 (0:00:00.139) 0:01:07.466 *********** 2025-06-22 19:49:21.935733 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:49:21.936109 | orchestrator | 2025-06-22 19:49:21.936659 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-06-22 19:49:21.937064 | orchestrator | Sunday 22 June 2025 19:49:21 +0000 (0:00:00.151) 0:01:07.617 *********** 2025-06-22 19:49:22.163844 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:49:22.164931 | orchestrator | 2025-06-22 19:49:22.166313 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-06-22 19:49:22.167679 | orchestrator | Sunday 22 June 2025 19:49:22 +0000 (0:00:00.224) 0:01:07.842 *********** 2025-06-22 19:49:22.320065 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:49:22.321651 | orchestrator | 2025-06-22 19:49:22.323363 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-06-22 19:49:22.324785 | orchestrator | Sunday 22 June 2025 19:49:22 +0000 (0:00:00.159) 0:01:08.001 *********** 2025-06-22 19:49:22.506345 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3108d6cc-64da-58c4-8e22-262ec3caa421', 'data_vg': 'ceph-3108d6cc-64da-58c4-8e22-262ec3caa421'})  2025-06-22 19:49:22.507358 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-39fb6ae0-c3e6-59b9-8b54-9251bb7c5136', 'data_vg': 'ceph-39fb6ae0-c3e6-59b9-8b54-9251bb7c5136'})  2025-06-22 19:49:22.508738 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:49:22.509903 | orchestrator | 2025-06-22 19:49:22.510840 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-06-22 19:49:22.511989 | orchestrator | Sunday 22 June 2025 19:49:22 +0000 (0:00:00.185) 0:01:08.186 *********** 2025-06-22 19:49:22.671271 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3108d6cc-64da-58c4-8e22-262ec3caa421', 'data_vg': 'ceph-3108d6cc-64da-58c4-8e22-262ec3caa421'})  2025-06-22 19:49:22.671814 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-39fb6ae0-c3e6-59b9-8b54-9251bb7c5136', 'data_vg': 'ceph-39fb6ae0-c3e6-59b9-8b54-9251bb7c5136'})  2025-06-22 19:49:22.672988 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:49:22.674404 | orchestrator | 2025-06-22 19:49:22.675608 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-06-22 19:49:22.676554 | orchestrator | Sunday 22 June 2025 19:49:22 +0000 (0:00:00.165) 0:01:08.351 *********** 2025-06-22 19:49:22.851234 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3108d6cc-64da-58c4-8e22-262ec3caa421', 'data_vg': 'ceph-3108d6cc-64da-58c4-8e22-262ec3caa421'})  2025-06-22 19:49:22.852188 | 
orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-39fb6ae0-c3e6-59b9-8b54-9251bb7c5136', 'data_vg': 'ceph-39fb6ae0-c3e6-59b9-8b54-9251bb7c5136'})  2025-06-22 19:49:22.853752 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:49:22.856064 | orchestrator | 2025-06-22 19:49:22.857137 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-06-22 19:49:22.860533 | orchestrator | Sunday 22 June 2025 19:49:22 +0000 (0:00:00.179) 0:01:08.531 *********** 2025-06-22 19:49:23.000608 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3108d6cc-64da-58c4-8e22-262ec3caa421', 'data_vg': 'ceph-3108d6cc-64da-58c4-8e22-262ec3caa421'})  2025-06-22 19:49:23.002065 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-39fb6ae0-c3e6-59b9-8b54-9251bb7c5136', 'data_vg': 'ceph-39fb6ae0-c3e6-59b9-8b54-9251bb7c5136'})  2025-06-22 19:49:23.008469 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:49:23.008512 | orchestrator | 2025-06-22 19:49:23.008525 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-06-22 19:49:23.008539 | orchestrator | Sunday 22 June 2025 19:49:22 +0000 (0:00:00.149) 0:01:08.681 *********** 2025-06-22 19:49:23.168080 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3108d6cc-64da-58c4-8e22-262ec3caa421', 'data_vg': 'ceph-3108d6cc-64da-58c4-8e22-262ec3caa421'})  2025-06-22 19:49:23.170592 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-39fb6ae0-c3e6-59b9-8b54-9251bb7c5136', 'data_vg': 'ceph-39fb6ae0-c3e6-59b9-8b54-9251bb7c5136'})  2025-06-22 19:49:23.173638 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:49:23.173673 | orchestrator | 2025-06-22 19:49:23.173681 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-06-22 19:49:23.176416 | orchestrator | Sunday 22 June 2025 19:49:23 +0000 (0:00:00.167) 0:01:08.848 *********** 2025-06-22 19:49:23.318983 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3108d6cc-64da-58c4-8e22-262ec3caa421', 'data_vg': 'ceph-3108d6cc-64da-58c4-8e22-262ec3caa421'})  2025-06-22 19:49:23.320979 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-39fb6ae0-c3e6-59b9-8b54-9251bb7c5136', 'data_vg': 'ceph-39fb6ae0-c3e6-59b9-8b54-9251bb7c5136'})  2025-06-22 19:49:23.322197 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:49:23.323731 | orchestrator | 2025-06-22 19:49:23.324942 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-06-22 19:49:23.326384 | orchestrator | Sunday 22 June 2025 19:49:23 +0000 (0:00:00.151) 0:01:09.000 *********** 2025-06-22 19:49:23.702891 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3108d6cc-64da-58c4-8e22-262ec3caa421', 'data_vg': 'ceph-3108d6cc-64da-58c4-8e22-262ec3caa421'})  2025-06-22 19:49:23.704042 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-39fb6ae0-c3e6-59b9-8b54-9251bb7c5136', 'data_vg': 'ceph-39fb6ae0-c3e6-59b9-8b54-9251bb7c5136'})  2025-06-22 19:49:23.705040 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:49:23.706492 | orchestrator | 2025-06-22 19:49:23.707304 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-06-22 19:49:23.708143 | orchestrator | Sunday 22 June 2025 19:49:23 +0000 (0:00:00.381) 0:01:09.382 *********** 2025-06-22 
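The skipped size calculations and "Fail if ..." tasks above would compare the space needed for DB/WAL LVs against the free space in the target VG and enforce a minimum DB LV size of 30 GiB. A hypothetical sketch of such a check; the function name, its parameters, and the example numbers are assumptions, not the playbook's variables:

GIB = 1024 ** 3

def check_db_vg(vg_free_bytes: int, num_osds: int, db_lv_size_bytes: int) -> None:
    """Fail if the DB LVs would not fit into the VG or are below the 30 GiB minimum."""
    if db_lv_size_bytes < 30 * GIB:
        raise ValueError("DB LV size is below the 30 GiB minimum")
    needed = num_osds * db_lv_size_bytes
    if needed > vg_free_bytes:
        raise ValueError(f"DB LVs need {needed} bytes, but only "
                         f"{vg_free_bytes} bytes are available in the VG")

# Example: two OSDs with 60 GiB DB LVs do not fit into a VG with 100 GiB free.
try:
    check_db_vg(vg_free_bytes=100 * GIB, num_osds=2, db_lv_size_bytes=60 * GIB)
except ValueError as exc:
    print(exc)

In this run every such check is skipped because no dedicated DB or WAL devices are defined, as the skipping lines below show.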
19:49:23.873913 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3108d6cc-64da-58c4-8e22-262ec3caa421', 'data_vg': 'ceph-3108d6cc-64da-58c4-8e22-262ec3caa421'})  2025-06-22 19:49:23.875016 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-39fb6ae0-c3e6-59b9-8b54-9251bb7c5136', 'data_vg': 'ceph-39fb6ae0-c3e6-59b9-8b54-9251bb7c5136'})  2025-06-22 19:49:23.875995 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:49:23.877273 | orchestrator | 2025-06-22 19:49:23.878133 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-06-22 19:49:23.880135 | orchestrator | Sunday 22 June 2025 19:49:23 +0000 (0:00:00.170) 0:01:09.553 *********** 2025-06-22 19:49:24.410497 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:49:24.410604 | orchestrator | 2025-06-22 19:49:24.412370 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-06-22 19:49:24.413599 | orchestrator | Sunday 22 June 2025 19:49:24 +0000 (0:00:00.538) 0:01:10.091 *********** 2025-06-22 19:49:24.986364 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:49:24.987704 | orchestrator | 2025-06-22 19:49:24.988740 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-06-22 19:49:24.989430 | orchestrator | Sunday 22 June 2025 19:49:24 +0000 (0:00:00.574) 0:01:10.665 *********** 2025-06-22 19:49:25.156179 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:49:25.157408 | orchestrator | 2025-06-22 19:49:25.158962 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-06-22 19:49:25.160296 | orchestrator | Sunday 22 June 2025 19:49:25 +0000 (0:00:00.171) 0:01:10.837 *********** 2025-06-22 19:49:25.342615 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-3108d6cc-64da-58c4-8e22-262ec3caa421', 'vg_name': 'ceph-3108d6cc-64da-58c4-8e22-262ec3caa421'}) 2025-06-22 19:49:25.343578 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-39fb6ae0-c3e6-59b9-8b54-9251bb7c5136', 'vg_name': 'ceph-39fb6ae0-c3e6-59b9-8b54-9251bb7c5136'}) 2025-06-22 19:49:25.344615 | orchestrator | 2025-06-22 19:49:25.345648 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-06-22 19:49:25.346723 | orchestrator | Sunday 22 June 2025 19:49:25 +0000 (0:00:00.186) 0:01:11.023 *********** 2025-06-22 19:49:25.571670 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3108d6cc-64da-58c4-8e22-262ec3caa421', 'data_vg': 'ceph-3108d6cc-64da-58c4-8e22-262ec3caa421'})  2025-06-22 19:49:25.575109 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-39fb6ae0-c3e6-59b9-8b54-9251bb7c5136', 'data_vg': 'ceph-39fb6ae0-c3e6-59b9-8b54-9251bb7c5136'})  2025-06-22 19:49:25.575203 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:49:25.576232 | orchestrator | 2025-06-22 19:49:25.576910 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-06-22 19:49:25.577427 | orchestrator | Sunday 22 June 2025 19:49:25 +0000 (0:00:00.222) 0:01:11.246 *********** 2025-06-22 19:49:25.735066 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3108d6cc-64da-58c4-8e22-262ec3caa421', 'data_vg': 'ceph-3108d6cc-64da-58c4-8e22-262ec3caa421'})  2025-06-22 19:49:25.735306 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-39fb6ae0-c3e6-59b9-8b54-9251bb7c5136', 
'data_vg': 'ceph-39fb6ae0-c3e6-59b9-8b54-9251bb7c5136'})
2025-06-22 19:49:25.736102 | orchestrator | skipping: [testbed-node-5]
2025-06-22 19:49:25.737233 | orchestrator |
2025-06-22 19:49:25.738333 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-06-22 19:49:25.738924 | orchestrator | Sunday 22 June 2025 19:49:25 +0000 (0:00:00.169) 0:01:11.415 ***********
2025-06-22 19:49:25.896739 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3108d6cc-64da-58c4-8e22-262ec3caa421', 'data_vg': 'ceph-3108d6cc-64da-58c4-8e22-262ec3caa421'})
2025-06-22 19:49:25.897630 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-39fb6ae0-c3e6-59b9-8b54-9251bb7c5136', 'data_vg': 'ceph-39fb6ae0-c3e6-59b9-8b54-9251bb7c5136'})
2025-06-22 19:49:25.901882 | orchestrator | skipping: [testbed-node-5]
2025-06-22 19:49:25.901948 | orchestrator |
2025-06-22 19:49:25.901962 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-06-22 19:49:25.901975 | orchestrator | Sunday 22 June 2025 19:49:25 +0000 (0:00:00.161) 0:01:11.577 ***********
2025-06-22 19:49:26.071445 | orchestrator | ok: [testbed-node-5] => {
2025-06-22 19:49:26.072333 | orchestrator |     "lvm_report": {
2025-06-22 19:49:26.072375 | orchestrator |         "lv": [
2025-06-22 19:49:26.073587 | orchestrator |             {
2025-06-22 19:49:26.075132 | orchestrator |                 "lv_name": "osd-block-3108d6cc-64da-58c4-8e22-262ec3caa421",
2025-06-22 19:49:26.075603 | orchestrator |                 "vg_name": "ceph-3108d6cc-64da-58c4-8e22-262ec3caa421"
2025-06-22 19:49:26.077772 | orchestrator |             },
2025-06-22 19:49:26.081397 | orchestrator |             {
2025-06-22 19:49:26.081428 | orchestrator |                 "lv_name": "osd-block-39fb6ae0-c3e6-59b9-8b54-9251bb7c5136",
2025-06-22 19:49:26.081527 | orchestrator |                 "vg_name": "ceph-39fb6ae0-c3e6-59b9-8b54-9251bb7c5136"
2025-06-22 19:49:26.083699 | orchestrator |             }
2025-06-22 19:49:26.086444 | orchestrator |         ],
2025-06-22 19:49:26.086493 | orchestrator |         "pv": [
2025-06-22 19:49:26.087108 | orchestrator |             {
2025-06-22 19:49:26.088295 | orchestrator |                 "pv_name": "/dev/sdb",
2025-06-22 19:49:26.089194 | orchestrator |                 "vg_name": "ceph-3108d6cc-64da-58c4-8e22-262ec3caa421"
2025-06-22 19:49:26.089647 | orchestrator |             },
2025-06-22 19:49:26.091116 | orchestrator |             {
2025-06-22 19:49:26.091187 | orchestrator |                 "pv_name": "/dev/sdc",
2025-06-22 19:49:26.091302 | orchestrator |                 "vg_name": "ceph-39fb6ae0-c3e6-59b9-8b54-9251bb7c5136"
2025-06-22 19:49:26.091779 | orchestrator |             }
2025-06-22 19:49:26.092604 | orchestrator |         ]
2025-06-22 19:49:26.093476 | orchestrator |     }
2025-06-22 19:49:26.095701 | orchestrator | }
2025-06-22 19:49:26.096138 | orchestrator |
2025-06-22 19:49:26.097950 | orchestrator | PLAY RECAP *********************************************************************
2025-06-22 19:49:26.098393 | orchestrator | 2025-06-22 19:49:26 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-06-22 19:49:26.098529 | orchestrator | 2025-06-22 19:49:26 | INFO  | Please wait and do not abort execution.
2025-06-22 19:49:26.100762 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-06-22 19:49:26.100829 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-06-22 19:49:26.100852 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-06-22 19:49:26.100949 | orchestrator |
2025-06-22 19:49:26.104462 | orchestrator |
2025-06-22 19:49:26.107383 | orchestrator |
2025-06-22 19:49:26.109692 | orchestrator | TASKS RECAP ********************************************************************
2025-06-22 19:49:26.110600 | orchestrator | Sunday 22 June 2025 19:49:26 +0000 (0:00:00.171) 0:01:11.749 ***********
2025-06-22 19:49:26.111735 | orchestrator | ===============================================================================
2025-06-22 19:49:26.114664 | orchestrator | Create block VGs -------------------------------------------------------- 5.73s
2025-06-22 19:49:26.114748 | orchestrator | Create block LVs -------------------------------------------------------- 4.19s
2025-06-22 19:49:26.114758 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.95s
2025-06-22 19:49:26.114765 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.70s
2025-06-22 19:49:26.115696 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.64s
2025-06-22 19:49:26.117103 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.57s
2025-06-22 19:49:26.117125 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.54s
2025-06-22 19:49:26.117547 | orchestrator | Add known partitions to the list of available block devices ------------- 1.36s
2025-06-22 19:49:26.118795 | orchestrator | Add known links to the list of available block devices ------------------ 1.24s
2025-06-22 19:49:26.119200 | orchestrator | Print LVM report data --------------------------------------------------- 0.95s
2025-06-22 19:49:26.119824 | orchestrator | Add known partitions to the list of available block devices ------------- 0.91s
2025-06-22 19:49:26.120138 | orchestrator | Add known partitions to the list of available block devices ------------- 0.84s
2025-06-22 19:49:26.120461 | orchestrator | Get initial list of available block devices ----------------------------- 0.73s
2025-06-22 19:49:26.120958 | orchestrator | Create DB LVs for ceph_db_wal_devices ----------------------------------- 0.72s
2025-06-22 19:49:26.121389 | orchestrator | Add known links to the list of available block devices ------------------ 0.72s
2025-06-22 19:49:26.121870 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.71s
2025-06-22 19:49:26.122103 | orchestrator | Print 'Create DB LVs for ceph_db_devices' ------------------------------- 0.70s
2025-06-22 19:49:26.122891 | orchestrator | Print 'Create WAL LVs for ceph_wal_devices' ----------------------------- 0.70s
2025-06-22 19:49:26.122906 | orchestrator | Print size needed for LVs on ceph_wal_devices --------------------------- 0.69s
2025-06-22 19:49:26.123470 | orchestrator | Add known links to the list of available block devices ------------------ 0.65s
2025-06-22 19:49:28.522189 | orchestrator | Registering Redlock._acquired_script
2025-06-22 19:49:28.522297 | orchestrator | Registering Redlock._extend_script 2025-06-22
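The lvm_report printed above is assembled from the 'Get list of Ceph LVs with associated VGs' and 'Get list of Ceph PVs with associated VGs' tasks, which read LVM's JSON report output and merge the lv and pv sections. A rough sketch of how such a combined report could be reproduced from the command line; the exact lvs/pvs flags and selection filters are assumptions, not taken from the playbook:

import json
import subprocess

def lvm_json(cmd: list[str]) -> dict:
    """Run an LVM reporting command with JSON output and return its first report."""
    out = subprocess.run(cmd, check=True, capture_output=True, text=True).stdout
    return json.loads(out)["report"][0]

lvs = lvm_json(["lvs", "--reportformat", "json", "-o", "lv_name,vg_name",
                "--select", "lv_name=~^osd-"])
pvs = lvm_json(["pvs", "--reportformat", "json", "-o", "pv_name,vg_name",
                "--select", "vg_name=~^ceph-"])

lvm_report = {"lv": lvs["lv"], "pv": pvs["pv"]}
print(json.dumps(lvm_report, indent=4))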
19:49:28.522312 | orchestrator | Registering Redlock._release_script 2025-06-22 19:49:28.584105 | orchestrator | 2025-06-22 19:49:28 | INFO  | Task 07dbc94e-8bd7-4212-bb50-4e26d991f156 (facts) was prepared for execution. 2025-06-22 19:49:28.584236 | orchestrator | 2025-06-22 19:49:28 | INFO  | It takes a moment until task 07dbc94e-8bd7-4212-bb50-4e26d991f156 (facts) has been started and output is visible here. 2025-06-22 19:49:32.910690 | orchestrator | 2025-06-22 19:49:32.910812 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-06-22 19:49:32.911711 | orchestrator | 2025-06-22 19:49:32.913719 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-06-22 19:49:32.914703 | orchestrator | Sunday 22 June 2025 19:49:32 +0000 (0:00:00.271) 0:00:00.271 *********** 2025-06-22 19:49:34.092664 | orchestrator | ok: [testbed-manager] 2025-06-22 19:49:34.095404 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:49:34.095433 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:49:34.095594 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:49:34.096877 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:49:34.097892 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:49:34.099310 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:49:34.100380 | orchestrator | 2025-06-22 19:49:34.101784 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-06-22 19:49:34.102359 | orchestrator | Sunday 22 June 2025 19:49:34 +0000 (0:00:01.181) 0:00:01.452 *********** 2025-06-22 19:49:34.281267 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:49:34.361137 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:49:34.445068 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:49:34.529088 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:49:34.610948 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:49:35.462720 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:49:35.464424 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:49:35.464585 | orchestrator | 2025-06-22 19:49:35.465766 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-06-22 19:49:35.466968 | orchestrator | 2025-06-22 19:49:35.467892 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-06-22 19:49:35.468894 | orchestrator | Sunday 22 June 2025 19:49:35 +0000 (0:00:01.373) 0:00:02.826 *********** 2025-06-22 19:49:40.792229 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:49:40.792514 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:49:40.793293 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:49:40.794897 | orchestrator | ok: [testbed-manager] 2025-06-22 19:49:40.796247 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:49:40.797011 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:49:40.797851 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:49:40.798602 | orchestrator | 2025-06-22 19:49:40.799521 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-06-22 19:49:40.800464 | orchestrator | 2025-06-22 19:49:40.801026 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-06-22 19:49:40.802679 | orchestrator | Sunday 22 June 2025 19:49:40 +0000 (0:00:05.329) 0:00:08.155 *********** 2025-06-22 19:49:40.971929 | orchestrator | skipping: [testbed-manager] 
2025-06-22 19:49:41.056706 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:49:41.144384 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:49:41.247398 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:49:41.338115 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:49:41.387195 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:49:41.387296 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:49:41.387310 | orchestrator | 2025-06-22 19:49:41.387325 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:49:41.387444 | orchestrator | 2025-06-22 19:49:41 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-22 19:49:41.387462 | orchestrator | 2025-06-22 19:49:41 | INFO  | Please wait and do not abort execution. 2025-06-22 19:49:41.387960 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:49:41.388220 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:49:41.388667 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:49:41.389142 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:49:41.389346 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:49:41.389860 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:49:41.390236 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:49:41.390653 | orchestrator | 2025-06-22 19:49:41.390989 | orchestrator | 2025-06-22 19:49:41.391450 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:49:41.391862 | orchestrator | Sunday 22 June 2025 19:49:41 +0000 (0:00:00.597) 0:00:08.752 *********** 2025-06-22 19:49:41.392715 | orchestrator | =============================================================================== 2025-06-22 19:49:41.392737 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.33s 2025-06-22 19:49:41.393071 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.37s 2025-06-22 19:49:41.393552 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.18s 2025-06-22 19:49:41.394095 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.60s 2025-06-22 19:49:42.022729 | orchestrator | 2025-06-22 19:49:42.025433 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Sun Jun 22 19:49:42 UTC 2025 2025-06-22 19:49:42.025510 | orchestrator | 2025-06-22 19:49:43.846262 | orchestrator | 2025-06-22 19:49:43 | INFO  | Collection nutshell is prepared for execution 2025-06-22 19:49:43.846365 | orchestrator | 2025-06-22 19:49:43 | INFO  | D [0] - dotfiles 2025-06-22 19:49:43.850787 | orchestrator | Registering Redlock._acquired_script 2025-06-22 19:49:43.850822 | orchestrator | Registering Redlock._extend_script 2025-06-22 19:49:43.850834 | orchestrator | Registering Redlock._release_script 2025-06-22 19:49:43.855703 | orchestrator | 2025-06-22 19:49:43 | INFO  | D [0] - homer 2025-06-22 19:49:43.855797 | orchestrator | 2025-06-22 19:49:43 | INFO  | D [0] - 
netdata 2025-06-22 19:49:43.855821 | orchestrator | 2025-06-22 19:49:43 | INFO  | D [0] - openstackclient 2025-06-22 19:49:43.855834 | orchestrator | 2025-06-22 19:49:43 | INFO  | D [0] - phpmyadmin 2025-06-22 19:49:43.855872 | orchestrator | 2025-06-22 19:49:43 | INFO  | A [0] - common 2025-06-22 19:49:43.857145 | orchestrator | 2025-06-22 19:49:43 | INFO  | A [1] -- loadbalancer 2025-06-22 19:49:43.857206 | orchestrator | 2025-06-22 19:49:43 | INFO  | D [2] --- opensearch 2025-06-22 19:49:43.857449 | orchestrator | 2025-06-22 19:49:43 | INFO  | A [2] --- mariadb-ng 2025-06-22 19:49:43.857469 | orchestrator | 2025-06-22 19:49:43 | INFO  | D [3] ---- horizon 2025-06-22 19:49:43.857481 | orchestrator | 2025-06-22 19:49:43 | INFO  | A [3] ---- keystone 2025-06-22 19:49:43.857711 | orchestrator | 2025-06-22 19:49:43 | INFO  | A [4] ----- neutron 2025-06-22 19:49:43.857733 | orchestrator | 2025-06-22 19:49:43 | INFO  | D [5] ------ wait-for-nova 2025-06-22 19:49:43.857807 | orchestrator | 2025-06-22 19:49:43 | INFO  | A [5] ------ octavia 2025-06-22 19:49:43.858355 | orchestrator | 2025-06-22 19:49:43 | INFO  | D [4] ----- barbican 2025-06-22 19:49:43.858399 | orchestrator | 2025-06-22 19:49:43 | INFO  | D [4] ----- designate 2025-06-22 19:49:43.858411 | orchestrator | 2025-06-22 19:49:43 | INFO  | D [4] ----- ironic 2025-06-22 19:49:43.858423 | orchestrator | 2025-06-22 19:49:43 | INFO  | D [4] ----- placement 2025-06-22 19:49:43.858435 | orchestrator | 2025-06-22 19:49:43 | INFO  | D [4] ----- magnum 2025-06-22 19:49:43.858836 | orchestrator | 2025-06-22 19:49:43 | INFO  | A [1] -- openvswitch 2025-06-22 19:49:43.858860 | orchestrator | 2025-06-22 19:49:43 | INFO  | D [2] --- ovn 2025-06-22 19:49:43.858872 | orchestrator | 2025-06-22 19:49:43 | INFO  | D [1] -- memcached 2025-06-22 19:49:43.858884 | orchestrator | 2025-06-22 19:49:43 | INFO  | D [1] -- redis 2025-06-22 19:49:43.859283 | orchestrator | 2025-06-22 19:49:43 | INFO  | D [1] -- rabbitmq-ng 2025-06-22 19:49:43.859309 | orchestrator | 2025-06-22 19:49:43 | INFO  | A [0] - kubernetes 2025-06-22 19:49:43.861448 | orchestrator | 2025-06-22 19:49:43 | INFO  | D [1] -- kubeconfig 2025-06-22 19:49:43.861491 | orchestrator | 2025-06-22 19:49:43 | INFO  | A [1] -- copy-kubeconfig 2025-06-22 19:49:43.861503 | orchestrator | 2025-06-22 19:49:43 | INFO  | A [0] - ceph 2025-06-22 19:49:43.862635 | orchestrator | 2025-06-22 19:49:43 | INFO  | A [1] -- ceph-pools 2025-06-22 19:49:43.862669 | orchestrator | 2025-06-22 19:49:43 | INFO  | A [2] --- copy-ceph-keys 2025-06-22 19:49:43.862844 | orchestrator | 2025-06-22 19:49:43 | INFO  | A [3] ---- cephclient 2025-06-22 19:49:43.862876 | orchestrator | 2025-06-22 19:49:43 | INFO  | D [4] ----- ceph-bootstrap-dashboard 2025-06-22 19:49:43.862889 | orchestrator | 2025-06-22 19:49:43 | INFO  | A [4] ----- wait-for-keystone 2025-06-22 19:49:43.862912 | orchestrator | 2025-06-22 19:49:43 | INFO  | D [5] ------ kolla-ceph-rgw 2025-06-22 19:49:43.862987 | orchestrator | 2025-06-22 19:49:43 | INFO  | D [5] ------ glance 2025-06-22 19:49:43.863002 | orchestrator | 2025-06-22 19:49:43 | INFO  | D [5] ------ cinder 2025-06-22 19:49:43.863326 | orchestrator | 2025-06-22 19:49:43 | INFO  | D [5] ------ nova 2025-06-22 19:49:43.863350 | orchestrator | 2025-06-22 19:49:43 | INFO  | A [4] ----- prometheus 2025-06-22 19:49:43.863422 | orchestrator | 2025-06-22 19:49:43 | INFO  | D [5] ------ grafana 2025-06-22 19:49:44.085727 | orchestrator | 2025-06-22 19:49:44 | INFO  | All tasks of the collection nutshell are 
prepared for execution 2025-06-22 19:49:44.085840 | orchestrator | 2025-06-22 19:49:44 | INFO  | Tasks are running in the background 2025-06-22 19:49:47.111018 | orchestrator | 2025-06-22 19:49:47 | INFO  | No task IDs specified, wait for all currently running tasks 2025-06-22 19:49:49.260879 | orchestrator | 2025-06-22 19:49:49 | INFO  | Task b34682d8-8ca0-491c-a666-853ab38d78df is in state STARTED 2025-06-22 19:49:49.265944 | orchestrator | 2025-06-22 19:49:49 | INFO  | Task 6cf8e00e-92c6-4a54-a626-fb6881f79e1f is in state STARTED 2025-06-22 19:49:49.267762 | orchestrator | 2025-06-22 19:49:49 | INFO  | Task 397fd4b7-d778-4ed0-8c18-1fb1c5a7afee is in state STARTED 2025-06-22 19:49:49.268788 | orchestrator | 2025-06-22 19:49:49 | INFO  | Task 320391e4-19b9-4128-9901-587a2ad13d7c is in state STARTED 2025-06-22 19:49:49.269374 | orchestrator | 2025-06-22 19:49:49 | INFO  | Task 30617089-7b0e-443a-86d8-d6bf6be7a005 is in state STARTED 2025-06-22 19:49:49.270646 | orchestrator | 2025-06-22 19:49:49 | INFO  | Task 259e6f48-b1f2-4431-8a4b-eb1e83e84d86 is in state STARTED 2025-06-22 19:49:49.273369 | orchestrator | 2025-06-22 19:49:49 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:49:49.273394 | orchestrator | 2025-06-22 19:49:49 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:49:52.329354 | orchestrator | 2025-06-22 19:49:52 | INFO  | Task b34682d8-8ca0-491c-a666-853ab38d78df is in state STARTED 2025-06-22 19:49:52.329728 | orchestrator | 2025-06-22 19:49:52 | INFO  | Task 6cf8e00e-92c6-4a54-a626-fb6881f79e1f is in state STARTED 2025-06-22 19:49:52.330697 | orchestrator | 2025-06-22 19:49:52 | INFO  | Task 397fd4b7-d778-4ed0-8c18-1fb1c5a7afee is in state STARTED 2025-06-22 19:49:52.334341 | orchestrator | 2025-06-22 19:49:52 | INFO  | Task 320391e4-19b9-4128-9901-587a2ad13d7c is in state STARTED 2025-06-22 19:49:52.343809 | orchestrator | 2025-06-22 19:49:52 | INFO  | Task 30617089-7b0e-443a-86d8-d6bf6be7a005 is in state STARTED 2025-06-22 19:49:52.346793 | orchestrator | 2025-06-22 19:49:52 | INFO  | Task 259e6f48-b1f2-4431-8a4b-eb1e83e84d86 is in state STARTED 2025-06-22 19:49:52.346869 | orchestrator | 2025-06-22 19:49:52 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:49:52.346884 | orchestrator | 2025-06-22 19:49:52 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:49:55.440712 | orchestrator | 2025-06-22 19:49:55 | INFO  | Task b34682d8-8ca0-491c-a666-853ab38d78df is in state STARTED 2025-06-22 19:49:55.444634 | orchestrator | 2025-06-22 19:49:55 | INFO  | Task 6cf8e00e-92c6-4a54-a626-fb6881f79e1f is in state STARTED 2025-06-22 19:49:55.446088 | orchestrator | 2025-06-22 19:49:55 | INFO  | Task 397fd4b7-d778-4ed0-8c18-1fb1c5a7afee is in state STARTED 2025-06-22 19:49:55.447160 | orchestrator | 2025-06-22 19:49:55 | INFO  | Task 320391e4-19b9-4128-9901-587a2ad13d7c is in state STARTED 2025-06-22 19:49:55.452012 | orchestrator | 2025-06-22 19:49:55 | INFO  | Task 30617089-7b0e-443a-86d8-d6bf6be7a005 is in state STARTED 2025-06-22 19:49:55.453733 | orchestrator | 2025-06-22 19:49:55 | INFO  | Task 259e6f48-b1f2-4431-8a4b-eb1e83e84d86 is in state STARTED 2025-06-22 19:49:55.456583 | orchestrator | 2025-06-22 19:49:55 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:49:55.456615 | orchestrator | 2025-06-22 19:49:55 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:49:58.519400 | orchestrator | 2025-06-22 19:49:58 | INFO  | Task 
b34682d8-8ca0-491c-a666-853ab38d78df is in state STARTED 2025-06-22 19:49:58.519755 | orchestrator | 2025-06-22 19:49:58 | INFO  | Task 6cf8e00e-92c6-4a54-a626-fb6881f79e1f is in state STARTED 2025-06-22 19:49:58.520650 | orchestrator | 2025-06-22 19:49:58 | INFO  | Task 397fd4b7-d778-4ed0-8c18-1fb1c5a7afee is in state STARTED 2025-06-22 19:49:58.524545 | orchestrator | 2025-06-22 19:49:58 | INFO  | Task 320391e4-19b9-4128-9901-587a2ad13d7c is in state STARTED 2025-06-22 19:49:58.524621 | orchestrator | 2025-06-22 19:49:58 | INFO  | Task 30617089-7b0e-443a-86d8-d6bf6be7a005 is in state STARTED 2025-06-22 19:49:58.524634 | orchestrator | 2025-06-22 19:49:58 | INFO  | Task 259e6f48-b1f2-4431-8a4b-eb1e83e84d86 is in state STARTED 2025-06-22 19:49:58.524706 | orchestrator | 2025-06-22 19:49:58 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:49:58.524721 | orchestrator | 2025-06-22 19:49:58 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:50:01.597275 | orchestrator | 2025-06-22 19:50:01 | INFO  | Task b34682d8-8ca0-491c-a666-853ab38d78df is in state STARTED 2025-06-22 19:50:01.598088 | orchestrator | 2025-06-22 19:50:01 | INFO  | Task 6cf8e00e-92c6-4a54-a626-fb6881f79e1f is in state STARTED 2025-06-22 19:50:01.607543 | orchestrator | 2025-06-22 19:50:01 | INFO  | Task 397fd4b7-d778-4ed0-8c18-1fb1c5a7afee is in state STARTED 2025-06-22 19:50:01.613821 | orchestrator | 2025-06-22 19:50:01 | INFO  | Task 320391e4-19b9-4128-9901-587a2ad13d7c is in state STARTED 2025-06-22 19:50:01.618664 | orchestrator | 2025-06-22 19:50:01 | INFO  | Task 30617089-7b0e-443a-86d8-d6bf6be7a005 is in state STARTED 2025-06-22 19:50:01.618771 | orchestrator | 2025-06-22 19:50:01 | INFO  | Task 259e6f48-b1f2-4431-8a4b-eb1e83e84d86 is in state STARTED 2025-06-22 19:50:01.621693 | orchestrator | 2025-06-22 19:50:01 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:50:01.621731 | orchestrator | 2025-06-22 19:50:01 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:50:04.699585 | orchestrator | 2025-06-22 19:50:04 | INFO  | Task b34682d8-8ca0-491c-a666-853ab38d78df is in state STARTED 2025-06-22 19:50:04.700525 | orchestrator | 2025-06-22 19:50:04 | INFO  | Task 6cf8e00e-92c6-4a54-a626-fb6881f79e1f is in state STARTED 2025-06-22 19:50:04.707723 | orchestrator | 2025-06-22 19:50:04 | INFO  | Task 397fd4b7-d778-4ed0-8c18-1fb1c5a7afee is in state STARTED 2025-06-22 19:50:04.707821 | orchestrator | 2025-06-22 19:50:04 | INFO  | Task 320391e4-19b9-4128-9901-587a2ad13d7c is in state STARTED 2025-06-22 19:50:04.707895 | orchestrator | 2025-06-22 19:50:04 | INFO  | Task 30617089-7b0e-443a-86d8-d6bf6be7a005 is in state STARTED 2025-06-22 19:50:04.708835 | orchestrator | 2025-06-22 19:50:04 | INFO  | Task 259e6f48-b1f2-4431-8a4b-eb1e83e84d86 is in state STARTED 2025-06-22 19:50:04.711968 | orchestrator | 2025-06-22 19:50:04 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:50:04.712015 | orchestrator | 2025-06-22 19:50:04 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:50:07.754748 | orchestrator | 2025-06-22 19:50:07 | INFO  | Task b34682d8-8ca0-491c-a666-853ab38d78df is in state STARTED 2025-06-22 19:50:07.754969 | orchestrator | 2025-06-22 19:50:07 | INFO  | Task 6cf8e00e-92c6-4a54-a626-fb6881f79e1f is in state STARTED 2025-06-22 19:50:07.755841 | orchestrator | 2025-06-22 19:50:07 | INFO  | Task 397fd4b7-d778-4ed0-8c18-1fb1c5a7afee is in state STARTED 2025-06-22 19:50:07.756522 | 
orchestrator | 2025-06-22 19:50:07 | INFO  | Task 320391e4-19b9-4128-9901-587a2ad13d7c is in state STARTED 2025-06-22 19:50:07.758889 | orchestrator | 2025-06-22 19:50:07 | INFO  | Task 30617089-7b0e-443a-86d8-d6bf6be7a005 is in state STARTED 2025-06-22 19:50:07.760060 | orchestrator | 2025-06-22 19:50:07 | INFO  | Task 259e6f48-b1f2-4431-8a4b-eb1e83e84d86 is in state STARTED 2025-06-22 19:50:07.767430 | orchestrator | 2025-06-22 19:50:07 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:50:07.767460 | orchestrator | 2025-06-22 19:50:07 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:50:10.841855 | orchestrator | 2025-06-22 19:50:10 | INFO  | Task b34682d8-8ca0-491c-a666-853ab38d78df is in state STARTED 2025-06-22 19:50:10.841942 | orchestrator | 2025-06-22 19:50:10 | INFO  | Task 6cf8e00e-92c6-4a54-a626-fb6881f79e1f is in state STARTED 2025-06-22 19:50:10.842306 | orchestrator | 2025-06-22 19:50:10 | INFO  | Task 397fd4b7-d778-4ed0-8c18-1fb1c5a7afee is in state STARTED 2025-06-22 19:50:10.843306 | orchestrator | 2025-06-22 19:50:10 | INFO  | Task 320391e4-19b9-4128-9901-587a2ad13d7c is in state STARTED 2025-06-22 19:50:10.845146 | orchestrator | 2025-06-22 19:50:10 | INFO  | Task 30617089-7b0e-443a-86d8-d6bf6be7a005 is in state STARTED 2025-06-22 19:50:10.845972 | orchestrator | 2025-06-22 19:50:10 | INFO  | Task 259e6f48-b1f2-4431-8a4b-eb1e83e84d86 is in state STARTED 2025-06-22 19:50:10.847095 | orchestrator | 2025-06-22 19:50:10 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:50:10.847134 | orchestrator | 2025-06-22 19:50:10 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:50:13.947474 | orchestrator | 2025-06-22 19:50:13 | INFO  | Task b34682d8-8ca0-491c-a666-853ab38d78df is in state STARTED 2025-06-22 19:50:13.950266 | orchestrator | 2025-06-22 19:50:13 | INFO  | Task 6cf8e00e-92c6-4a54-a626-fb6881f79e1f is in state STARTED 2025-06-22 19:50:13.957244 | orchestrator | 2025-06-22 19:50:13 | INFO  | Task 397fd4b7-d778-4ed0-8c18-1fb1c5a7afee is in state STARTED 2025-06-22 19:50:13.958148 | orchestrator | 2025-06-22 19:50:13 | INFO  | Task 320391e4-19b9-4128-9901-587a2ad13d7c is in state STARTED 2025-06-22 19:50:13.958405 | orchestrator | 2025-06-22 19:50:13 | INFO  | Task 30617089-7b0e-443a-86d8-d6bf6be7a005 is in state STARTED 2025-06-22 19:50:13.959718 | orchestrator | 2025-06-22 19:50:13 | INFO  | Task 259e6f48-b1f2-4431-8a4b-eb1e83e84d86 is in state STARTED 2025-06-22 19:50:13.962860 | orchestrator | 2025-06-22 19:50:13 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:50:13.962911 | orchestrator | 2025-06-22 19:50:13 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:50:17.032315 | orchestrator | 2025-06-22 19:50:17 | INFO  | Task b34682d8-8ca0-491c-a666-853ab38d78df is in state STARTED 2025-06-22 19:50:17.036359 | orchestrator | 2025-06-22 19:50:17 | INFO  | Task 6cf8e00e-92c6-4a54-a626-fb6881f79e1f is in state STARTED 2025-06-22 19:50:17.041737 | orchestrator | 2025-06-22 19:50:17 | INFO  | Task 397fd4b7-d778-4ed0-8c18-1fb1c5a7afee is in state STARTED 2025-06-22 19:50:17.044349 | orchestrator | 2025-06-22 19:50:17 | INFO  | Task 320391e4-19b9-4128-9901-587a2ad13d7c is in state STARTED 2025-06-22 19:50:17.049579 | orchestrator | 2025-06-22 19:50:17 | INFO  | Task 30617089-7b0e-443a-86d8-d6bf6be7a005 is in state STARTED 2025-06-22 19:50:17.055713 | orchestrator | 2025-06-22 19:50:17 | INFO  | Task 259e6f48-b1f2-4431-8a4b-eb1e83e84d86 is 
in state STARTED 2025-06-22 19:50:17.055752 | orchestrator | 2025-06-22 19:50:17 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:50:17.055766 | orchestrator | 2025-06-22 19:50:17 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:50:20.104718 | orchestrator | 2025-06-22 19:50:20 | INFO  | Task b34682d8-8ca0-491c-a666-853ab38d78df is in state STARTED 2025-06-22 19:50:20.106079 | orchestrator | 2025-06-22 19:50:20 | INFO  | Task 6cf8e00e-92c6-4a54-a626-fb6881f79e1f is in state STARTED 2025-06-22 19:50:20.107752 | orchestrator | 2025-06-22 19:50:20.107821 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2025-06-22 19:50:20.107845 | orchestrator | 2025-06-22 19:50:20.107864 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] **** 2025-06-22 19:50:20.107911 | orchestrator | Sunday 22 June 2025 19:49:59 +0000 (0:00:00.859) 0:00:00.859 *********** 2025-06-22 19:50:20.107934 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:50:20.107950 | orchestrator | changed: [testbed-manager] 2025-06-22 19:50:20.107967 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:50:20.107987 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:50:20.108006 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:50:20.108019 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:50:20.108030 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:50:20.108041 | orchestrator | 2025-06-22 19:50:20.108052 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ******** 2025-06-22 19:50:20.108063 | orchestrator | Sunday 22 June 2025 19:50:05 +0000 (0:00:06.263) 0:00:07.123 *********** 2025-06-22 19:50:20.108074 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-06-22 19:50:20.108085 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-06-22 19:50:20.108096 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-06-22 19:50:20.108106 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-06-22 19:50:20.108117 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-06-22 19:50:20.108128 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-06-22 19:50:20.108139 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-06-22 19:50:20.108150 | orchestrator | 2025-06-22 19:50:20.108160 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] 
*** 2025-06-22 19:50:20.108171 | orchestrator | Sunday 22 June 2025 19:50:08 +0000 (0:00:02.457) 0:00:09.580 *********** 2025-06-22 19:50:20.108188 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-22 19:50:06.850453', 'end': '2025-06-22 19:50:06.855741', 'delta': '0:00:00.005288', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-06-22 19:50:20.108251 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-22 19:50:06.871284', 'end': '2025-06-22 19:50:06.880102', 'delta': '0:00:00.008818', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-06-22 19:50:20.108275 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-22 19:50:06.855699', 'end': '2025-06-22 19:50:06.861550', 'delta': '0:00:00.005851', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-06-22 19:50:20.108342 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-22 19:50:06.861112', 'end': '2025-06-22 19:50:06.870205', 'delta': '0:00:00.009093', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 
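The registered results shown above come from the geerlingguy.dotfiles role probing each host with ls -F ~/.tmux.conf: the rc=2 ("No such file or directory") result simply means there is no pre-existing file to remove before the symlink is created in the following task. As a rough illustration only (this is not the role's actual implementation, and the repository path is an assumption), the check-remove-link sequence amounts to something like the following Python sketch:

    from pathlib import Path

    def link_dotfile(name, repo_dir, home=None):
        """Replace ~/<name> with a symlink into the cloned dotfiles repository."""
        home = Path(home) if home else Path.home()
        target = home / name
        source = Path(repo_dir) / name
        # Mirrors the "ls -F" probe: only a pre-existing regular file needs removing.
        if target.exists() and not target.is_symlink():
            target.unlink()
        if not target.is_symlink():
            target.symlink_to(source)

    # Hypothetical usage; the repository location is assumed, not taken from the log:
    # link_dotfile(".tmux.conf", "/home/dragon/dotfiles")

On these hosts the probe found nothing, so the removal step stayed a no-op (ok) and only the later "Link dotfiles into home folder" task reports changed.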
2025-06-22 19:50:20.108356 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-22 19:50:06.865578', 'end': '2025-06-22 19:50:06.872401', 'delta': '0:00:00.006823', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-06-22 19:50:20.108369 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-22 19:50:07.398241', 'end': '2025-06-22 19:50:07.406886', 'delta': '0:00:00.008645', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-06-22 19:50:20.108394 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-06-22 19:50:07.650988', 'end': '2025-06-22 19:50:07.661115', 'delta': '0:00:00.010127', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-06-22 19:50:20.108418 | orchestrator | 2025-06-22 19:50:20.108431 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] 
**** 2025-06-22 19:50:20.108443 | orchestrator | Sunday 22 June 2025 19:50:11 +0000 (0:00:03.357) 0:00:12.942 *********** 2025-06-22 19:50:20.108455 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-06-22 19:50:20.108467 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-06-22 19:50:20.108479 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-06-22 19:50:20.108491 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-06-22 19:50:20.108520 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-06-22 19:50:20.108531 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-06-22 19:50:20.108542 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-06-22 19:50:20.108553 | orchestrator | 2025-06-22 19:50:20.108564 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ****************** 2025-06-22 19:50:20.108575 | orchestrator | Sunday 22 June 2025 19:50:14 +0000 (0:00:02.790) 0:00:15.733 *********** 2025-06-22 19:50:20.108586 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2025-06-22 19:50:20.108597 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2025-06-22 19:50:20.108608 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2025-06-22 19:50:20.108619 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2025-06-22 19:50:20.108630 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2025-06-22 19:50:20.108640 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2025-06-22 19:50:20.108651 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2025-06-22 19:50:20.108662 | orchestrator | 2025-06-22 19:50:20.108673 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:50:20.108691 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:50:20.108704 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:50:20.108715 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:50:20.108727 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:50:20.108749 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:50:20.109030 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:50:20.109043 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:50:20.109054 | orchestrator | 2025-06-22 19:50:20.109065 | orchestrator | 2025-06-22 19:50:20.109076 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:50:20.109088 | orchestrator | Sunday 22 June 2025 19:50:18 +0000 (0:00:04.642) 0:00:20.376 *********** 2025-06-22 19:50:20.109099 | orchestrator | =============================================================================== 2025-06-22 19:50:20.109110 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 6.26s 2025-06-22 19:50:20.109121 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder.
------------------ 4.64s 2025-06-22 19:50:20.109131 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 3.36s 2025-06-22 19:50:20.109142 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 2.79s 2025-06-22 19:50:20.109153 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 2.46s 2025-06-22 19:50:20.109198 | orchestrator | 2025-06-22 19:50:20 | INFO  | Task 397fd4b7-d778-4ed0-8c18-1fb1c5a7afee is in state STARTED 2025-06-22 19:50:20.109247 | orchestrator | 2025-06-22 19:50:20 | INFO  | Task 320391e4-19b9-4128-9901-587a2ad13d7c is in state SUCCESS 2025-06-22 19:50:20.109331 | orchestrator | 2025-06-22 19:50:20 | INFO  | Task 30617089-7b0e-443a-86d8-d6bf6be7a005 is in state STARTED 2025-06-22 19:50:20.110323 | orchestrator | 2025-06-22 19:50:20 | INFO  | Task 259e6f48-b1f2-4431-8a4b-eb1e83e84d86 is in state STARTED 2025-06-22 19:50:20.110360 | orchestrator | 2025-06-22 19:50:20 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:50:20.110372 | orchestrator | 2025-06-22 19:50:20 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:50:23.169048 | orchestrator | 2025-06-22 19:50:23 | INFO  | Task b34682d8-8ca0-491c-a666-853ab38d78df is in state STARTED 2025-06-22 19:50:23.169110 | orchestrator | 2025-06-22 19:50:23 | INFO  | Task 9026e2ed-4334-4392-b8b6-d8ee8826b021 is in state STARTED 2025-06-22 19:50:23.169118 | orchestrator | 2025-06-22 19:50:23 | INFO  | Task 6cf8e00e-92c6-4a54-a626-fb6881f79e1f is in state STARTED 2025-06-22 19:50:23.169125 | orchestrator | 2025-06-22 19:50:23 | INFO  | Task 397fd4b7-d778-4ed0-8c18-1fb1c5a7afee is in state STARTED 2025-06-22 19:50:23.173568 | orchestrator | 2025-06-22 19:50:23 | INFO  | Task 30617089-7b0e-443a-86d8-d6bf6be7a005 is in state STARTED 2025-06-22 19:50:23.178509 | orchestrator | 2025-06-22 19:50:23 | INFO  | Task 259e6f48-b1f2-4431-8a4b-eb1e83e84d86 is in state STARTED 2025-06-22 19:50:23.178530 | orchestrator | 2025-06-22 19:50:23 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:50:23.178538 | orchestrator | 2025-06-22 19:50:23 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:50:26.224847 | orchestrator | 2025-06-22 19:50:26 | INFO  | Task b34682d8-8ca0-491c-a666-853ab38d78df is in state STARTED 2025-06-22 19:50:26.227180 | orchestrator | 2025-06-22 19:50:26 | INFO  | Task 9026e2ed-4334-4392-b8b6-d8ee8826b021 is in state STARTED 2025-06-22 19:50:26.230298 | orchestrator | 2025-06-22 19:50:26 | INFO  | Task 6cf8e00e-92c6-4a54-a626-fb6881f79e1f is in state STARTED 2025-06-22 19:50:26.233543 | orchestrator | 2025-06-22 19:50:26 | INFO  | Task 397fd4b7-d778-4ed0-8c18-1fb1c5a7afee is in state STARTED 2025-06-22 19:50:26.235882 | orchestrator | 2025-06-22 19:50:26 | INFO  | Task 30617089-7b0e-443a-86d8-d6bf6be7a005 is in state STARTED 2025-06-22 19:50:26.238768 | orchestrator | 2025-06-22 19:50:26 | INFO  | Task 259e6f48-b1f2-4431-8a4b-eb1e83e84d86 is in state STARTED 2025-06-22 19:50:26.238800 | orchestrator | 2025-06-22 19:50:26 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:50:26.238813 | orchestrator | 2025-06-22 19:50:26 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:50:29.303836 | orchestrator | 2025-06-22 19:50:29 | INFO  | Task b34682d8-8ca0-491c-a666-853ab38d78df is in state STARTED 2025-06-22 19:50:29.305512 | orchestrator | 2025-06-22 19:50:29 | INFO  | Task 
9026e2ed-4334-4392-b8b6-d8ee8826b021 is in state STARTED 2025-06-22 19:50:29.312202 | orchestrator | 2025-06-22 19:50:29 | INFO  | Task 6cf8e00e-92c6-4a54-a626-fb6881f79e1f is in state STARTED 2025-06-22 19:50:29.312946 | orchestrator | 2025-06-22 19:50:29 | INFO  | Task 397fd4b7-d778-4ed0-8c18-1fb1c5a7afee is in state STARTED 2025-06-22 19:50:29.314408 | orchestrator | 2025-06-22 19:50:29 | INFO  | Task 30617089-7b0e-443a-86d8-d6bf6be7a005 is in state STARTED 2025-06-22 19:50:29.316894 | orchestrator | 2025-06-22 19:50:29 | INFO  | Task 259e6f48-b1f2-4431-8a4b-eb1e83e84d86 is in state STARTED 2025-06-22 19:50:29.320865 | orchestrator | 2025-06-22 19:50:29 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:50:29.320897 | orchestrator | 2025-06-22 19:50:29 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:50:32.379943 | orchestrator | 2025-06-22 19:50:32 | INFO  | Task b34682d8-8ca0-491c-a666-853ab38d78df is in state STARTED 2025-06-22 19:50:32.381593 | orchestrator | 2025-06-22 19:50:32 | INFO  | Task 9026e2ed-4334-4392-b8b6-d8ee8826b021 is in state STARTED 2025-06-22 19:50:32.384622 | orchestrator | 2025-06-22 19:50:32 | INFO  | Task 6cf8e00e-92c6-4a54-a626-fb6881f79e1f is in state STARTED 2025-06-22 19:50:32.386445 | orchestrator | 2025-06-22 19:50:32 | INFO  | Task 397fd4b7-d778-4ed0-8c18-1fb1c5a7afee is in state STARTED 2025-06-22 19:50:32.388300 | orchestrator | 2025-06-22 19:50:32 | INFO  | Task 30617089-7b0e-443a-86d8-d6bf6be7a005 is in state STARTED 2025-06-22 19:50:32.390112 | orchestrator | 2025-06-22 19:50:32 | INFO  | Task 259e6f48-b1f2-4431-8a4b-eb1e83e84d86 is in state STARTED 2025-06-22 19:50:32.391310 | orchestrator | 2025-06-22 19:50:32 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:50:32.392013 | orchestrator | 2025-06-22 19:50:32 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:50:35.441407 | orchestrator | 2025-06-22 19:50:35 | INFO  | Task b34682d8-8ca0-491c-a666-853ab38d78df is in state STARTED 2025-06-22 19:50:35.441498 | orchestrator | 2025-06-22 19:50:35 | INFO  | Task 9026e2ed-4334-4392-b8b6-d8ee8826b021 is in state STARTED 2025-06-22 19:50:35.442793 | orchestrator | 2025-06-22 19:50:35 | INFO  | Task 6cf8e00e-92c6-4a54-a626-fb6881f79e1f is in state STARTED 2025-06-22 19:50:35.443547 | orchestrator | 2025-06-22 19:50:35 | INFO  | Task 397fd4b7-d778-4ed0-8c18-1fb1c5a7afee is in state STARTED 2025-06-22 19:50:35.444375 | orchestrator | 2025-06-22 19:50:35 | INFO  | Task 30617089-7b0e-443a-86d8-d6bf6be7a005 is in state STARTED 2025-06-22 19:50:35.444964 | orchestrator | 2025-06-22 19:50:35 | INFO  | Task 259e6f48-b1f2-4431-8a4b-eb1e83e84d86 is in state STARTED 2025-06-22 19:50:35.446136 | orchestrator | 2025-06-22 19:50:35 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:50:35.446417 | orchestrator | 2025-06-22 19:50:35 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:50:38.484973 | orchestrator | 2025-06-22 19:50:38 | INFO  | Task b34682d8-8ca0-491c-a666-853ab38d78df is in state STARTED 2025-06-22 19:50:38.485190 | orchestrator | 2025-06-22 19:50:38 | INFO  | Task 9026e2ed-4334-4392-b8b6-d8ee8826b021 is in state STARTED 2025-06-22 19:50:38.489012 | orchestrator | 2025-06-22 19:50:38 | INFO  | Task 6cf8e00e-92c6-4a54-a626-fb6881f79e1f is in state STARTED 2025-06-22 19:50:38.489068 | orchestrator | 2025-06-22 19:50:38 | INFO  | Task 397fd4b7-d778-4ed0-8c18-1fb1c5a7afee is in state STARTED 2025-06-22 19:50:38.489961 | 
orchestrator | 2025-06-22 19:50:38 | INFO  | Task 30617089-7b0e-443a-86d8-d6bf6be7a005 is in state SUCCESS 2025-06-22 19:50:38.494953 | orchestrator | 2025-06-22 19:50:38 | INFO  | Task 259e6f48-b1f2-4431-8a4b-eb1e83e84d86 is in state STARTED 2025-06-22 19:50:38.498917 | orchestrator | 2025-06-22 19:50:38 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:50:38.498946 | orchestrator | 2025-06-22 19:50:38 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:50:41.530310 | orchestrator | 2025-06-22 19:50:41 | INFO  | Task b34682d8-8ca0-491c-a666-853ab38d78df is in state STARTED 2025-06-22 19:50:41.530680 | orchestrator | 2025-06-22 19:50:41 | INFO  | Task 9026e2ed-4334-4392-b8b6-d8ee8826b021 is in state STARTED 2025-06-22 19:50:41.532289 | orchestrator | 2025-06-22 19:50:41 | INFO  | Task 6cf8e00e-92c6-4a54-a626-fb6881f79e1f is in state STARTED 2025-06-22 19:50:41.532771 | orchestrator | 2025-06-22 19:50:41 | INFO  | Task 397fd4b7-d778-4ed0-8c18-1fb1c5a7afee is in state STARTED 2025-06-22 19:50:41.533696 | orchestrator | 2025-06-22 19:50:41 | INFO  | Task 259e6f48-b1f2-4431-8a4b-eb1e83e84d86 is in state STARTED 2025-06-22 19:50:41.534764 | orchestrator | 2025-06-22 19:50:41 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:50:41.534809 | orchestrator | 2025-06-22 19:50:41 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:50:44.579956 | orchestrator | 2025-06-22 19:50:44 | INFO  | Task b34682d8-8ca0-491c-a666-853ab38d78df is in state STARTED 2025-06-22 19:50:44.583125 | orchestrator | 2025-06-22 19:50:44 | INFO  | Task 9026e2ed-4334-4392-b8b6-d8ee8826b021 is in state STARTED 2025-06-22 19:50:44.587290 | orchestrator | 2025-06-22 19:50:44 | INFO  | Task 6cf8e00e-92c6-4a54-a626-fb6881f79e1f is in state STARTED 2025-06-22 19:50:44.590202 | orchestrator | 2025-06-22 19:50:44 | INFO  | Task 397fd4b7-d778-4ed0-8c18-1fb1c5a7afee is in state STARTED 2025-06-22 19:50:44.591080 | orchestrator | 2025-06-22 19:50:44 | INFO  | Task 259e6f48-b1f2-4431-8a4b-eb1e83e84d86 is in state STARTED 2025-06-22 19:50:44.592810 | orchestrator | 2025-06-22 19:50:44 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:50:44.592833 | orchestrator | 2025-06-22 19:50:44 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:50:47.639818 | orchestrator | 2025-06-22 19:50:47 | INFO  | Task b34682d8-8ca0-491c-a666-853ab38d78df is in state STARTED 2025-06-22 19:50:47.639905 | orchestrator | 2025-06-22 19:50:47 | INFO  | Task 9026e2ed-4334-4392-b8b6-d8ee8826b021 is in state STARTED 2025-06-22 19:50:47.641004 | orchestrator | 2025-06-22 19:50:47 | INFO  | Task 6cf8e00e-92c6-4a54-a626-fb6881f79e1f is in state STARTED 2025-06-22 19:50:47.642583 | orchestrator | 2025-06-22 19:50:47 | INFO  | Task 397fd4b7-d778-4ed0-8c18-1fb1c5a7afee is in state STARTED 2025-06-22 19:50:47.646369 | orchestrator | 2025-06-22 19:50:47 | INFO  | Task 259e6f48-b1f2-4431-8a4b-eb1e83e84d86 is in state STARTED 2025-06-22 19:50:47.647189 | orchestrator | 2025-06-22 19:50:47 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:50:47.647239 | orchestrator | 2025-06-22 19:50:47 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:50:50.699699 | orchestrator | 2025-06-22 19:50:50 | INFO  | Task b34682d8-8ca0-491c-a666-853ab38d78df is in state STARTED 2025-06-22 19:50:50.706124 | orchestrator | 2025-06-22 19:50:50 | INFO  | Task 9026e2ed-4334-4392-b8b6-d8ee8826b021 is in state STARTED 
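The interleaved INFO lines are the OSISM client waiting for its background Ansible runs: it repeatedly asks for the state of each task ID, drops the IDs that have finished, and sleeps before the next round. The real client is part of the osism tooling; the snippet below is only a minimal, hypothetical Python sketch of that wait pattern (get_task_state stands in for whatever API the client actually calls):

    import time

    def wait_for_tasks(task_ids, get_task_state, interval=1):
        """Poll every task until it has left the STARTED state."""
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = get_task_state(task_id)   # e.g. "STARTED" or "SUCCESS"
                print(f"Task {task_id} is in state {state}")
                if state not in ("PENDING", "STARTED"):
                    pending.discard(task_id)      # finished tasks are not polled again
            if pending:
                print(f"Wait {interval} second(s) until the next check")
                time.sleep(interval)

This also explains why task IDs disappear from later rounds: 30617089-7b0e-443a-86d8-d6bf6be7a005 reports SUCCESS just above and is not listed again afterwards.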
2025-06-22 19:50:50.706444 | orchestrator | 2025-06-22 19:50:50 | INFO  | Task 6cf8e00e-92c6-4a54-a626-fb6881f79e1f is in state STARTED 2025-06-22 19:50:50.707289 | orchestrator | 2025-06-22 19:50:50 | INFO  | Task 397fd4b7-d778-4ed0-8c18-1fb1c5a7afee is in state STARTED 2025-06-22 19:50:50.707903 | orchestrator | 2025-06-22 19:50:50 | INFO  | Task 259e6f48-b1f2-4431-8a4b-eb1e83e84d86 is in state STARTED 2025-06-22 19:50:50.708713 | orchestrator | 2025-06-22 19:50:50 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:50:50.711544 | orchestrator | 2025-06-22 19:50:50 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:50:53.791678 | orchestrator | 2025-06-22 19:50:53 | INFO  | Task b34682d8-8ca0-491c-a666-853ab38d78df is in state SUCCESS 2025-06-22 19:50:53.797978 | orchestrator | 2025-06-22 19:50:53 | INFO  | Task 9026e2ed-4334-4392-b8b6-d8ee8826b021 is in state STARTED 2025-06-22 19:50:53.798114 | orchestrator | 2025-06-22 19:50:53 | INFO  | Task 6cf8e00e-92c6-4a54-a626-fb6881f79e1f is in state STARTED 2025-06-22 19:50:53.799402 | orchestrator | 2025-06-22 19:50:53 | INFO  | Task 397fd4b7-d778-4ed0-8c18-1fb1c5a7afee is in state STARTED 2025-06-22 19:50:53.799582 | orchestrator | 2025-06-22 19:50:53 | INFO  | Task 259e6f48-b1f2-4431-8a4b-eb1e83e84d86 is in state STARTED 2025-06-22 19:50:53.801897 | orchestrator | 2025-06-22 19:50:53 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:50:53.801919 | orchestrator | 2025-06-22 19:50:53 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:50:56.851956 | orchestrator | 2025-06-22 19:50:56 | INFO  | Task 9026e2ed-4334-4392-b8b6-d8ee8826b021 is in state STARTED 2025-06-22 19:50:56.852057 | orchestrator | 2025-06-22 19:50:56 | INFO  | Task 6cf8e00e-92c6-4a54-a626-fb6881f79e1f is in state STARTED 2025-06-22 19:50:56.856145 | orchestrator | 2025-06-22 19:50:56 | INFO  | Task 397fd4b7-d778-4ed0-8c18-1fb1c5a7afee is in state STARTED 2025-06-22 19:50:56.856202 | orchestrator | 2025-06-22 19:50:56 | INFO  | Task 259e6f48-b1f2-4431-8a4b-eb1e83e84d86 is in state STARTED 2025-06-22 19:50:56.859040 | orchestrator | 2025-06-22 19:50:56 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:50:56.859098 | orchestrator | 2025-06-22 19:50:56 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:50:59.902287 | orchestrator | 2025-06-22 19:50:59 | INFO  | Task 9026e2ed-4334-4392-b8b6-d8ee8826b021 is in state STARTED 2025-06-22 19:50:59.902496 | orchestrator | 2025-06-22 19:50:59 | INFO  | Task 6cf8e00e-92c6-4a54-a626-fb6881f79e1f is in state STARTED 2025-06-22 19:50:59.905557 | orchestrator | 2025-06-22 19:50:59 | INFO  | Task 397fd4b7-d778-4ed0-8c18-1fb1c5a7afee is in state STARTED 2025-06-22 19:50:59.910749 | orchestrator | 2025-06-22 19:50:59 | INFO  | Task 259e6f48-b1f2-4431-8a4b-eb1e83e84d86 is in state STARTED 2025-06-22 19:50:59.911998 | orchestrator | 2025-06-22 19:50:59 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:50:59.912043 | orchestrator | 2025-06-22 19:50:59 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:51:02.963931 | orchestrator | 2025-06-22 19:51:02 | INFO  | Task 9026e2ed-4334-4392-b8b6-d8ee8826b021 is in state STARTED 2025-06-22 19:51:02.964013 | orchestrator | 2025-06-22 19:51:02 | INFO  | Task 6cf8e00e-92c6-4a54-a626-fb6881f79e1f is in state STARTED 2025-06-22 19:51:02.964027 | orchestrator | 2025-06-22 19:51:02 | INFO  | Task 
397fd4b7-d778-4ed0-8c18-1fb1c5a7afee is in state STARTED 2025-06-22 19:51:02.964038 | orchestrator | 2025-06-22 19:51:02 | INFO  | Task 259e6f48-b1f2-4431-8a4b-eb1e83e84d86 is in state STARTED 2025-06-22 19:51:02.964050 | orchestrator | 2025-06-22 19:51:02 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:51:02.964061 | orchestrator | 2025-06-22 19:51:02 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:51:06.030271 | orchestrator | 2025-06-22 19:51:06 | INFO  | Task 9026e2ed-4334-4392-b8b6-d8ee8826b021 is in state STARTED 2025-06-22 19:51:06.031314 | orchestrator | 2025-06-22 19:51:06 | INFO  | Task 6cf8e00e-92c6-4a54-a626-fb6881f79e1f is in state STARTED 2025-06-22 19:51:06.033332 | orchestrator | 2025-06-22 19:51:06 | INFO  | Task 397fd4b7-d778-4ed0-8c18-1fb1c5a7afee is in state STARTED 2025-06-22 19:51:06.034164 | orchestrator | 2025-06-22 19:51:06 | INFO  | Task 259e6f48-b1f2-4431-8a4b-eb1e83e84d86 is in state STARTED 2025-06-22 19:51:06.037280 | orchestrator | 2025-06-22 19:51:06 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:51:06.037306 | orchestrator | 2025-06-22 19:51:06 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:51:09.070853 | orchestrator | 2025-06-22 19:51:09 | INFO  | Task 9026e2ed-4334-4392-b8b6-d8ee8826b021 is in state STARTED 2025-06-22 19:51:09.071304 | orchestrator | 2025-06-22 19:51:09 | INFO  | Task 6cf8e00e-92c6-4a54-a626-fb6881f79e1f is in state STARTED 2025-06-22 19:51:09.072118 | orchestrator | 2025-06-22 19:51:09 | INFO  | Task 397fd4b7-d778-4ed0-8c18-1fb1c5a7afee is in state STARTED 2025-06-22 19:51:09.072746 | orchestrator | 2025-06-22 19:51:09 | INFO  | Task 259e6f48-b1f2-4431-8a4b-eb1e83e84d86 is in state STARTED 2025-06-22 19:51:09.074386 | orchestrator | 2025-06-22 19:51:09 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:51:09.074414 | orchestrator | 2025-06-22 19:51:09 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:51:12.105406 | orchestrator | 2025-06-22 19:51:12 | INFO  | Task 9026e2ed-4334-4392-b8b6-d8ee8826b021 is in state STARTED 2025-06-22 19:51:12.107049 | orchestrator | 2025-06-22 19:51:12 | INFO  | Task 6cf8e00e-92c6-4a54-a626-fb6881f79e1f is in state STARTED 2025-06-22 19:51:12.107090 | orchestrator | 2025-06-22 19:51:12 | INFO  | Task 397fd4b7-d778-4ed0-8c18-1fb1c5a7afee is in state STARTED 2025-06-22 19:51:12.108272 | orchestrator | 2025-06-22 19:51:12 | INFO  | Task 259e6f48-b1f2-4431-8a4b-eb1e83e84d86 is in state STARTED 2025-06-22 19:51:12.110998 | orchestrator | 2025-06-22 19:51:12 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:51:12.111044 | orchestrator | 2025-06-22 19:51:12 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:51:15.143777 | orchestrator | 2025-06-22 19:51:15 | INFO  | Task 9026e2ed-4334-4392-b8b6-d8ee8826b021 is in state STARTED 2025-06-22 19:51:15.146121 | orchestrator | 2025-06-22 19:51:15 | INFO  | Task 6cf8e00e-92c6-4a54-a626-fb6881f79e1f is in state STARTED 2025-06-22 19:51:15.147589 | orchestrator | 2025-06-22 19:51:15 | INFO  | Task 397fd4b7-d778-4ed0-8c18-1fb1c5a7afee is in state SUCCESS 2025-06-22 19:51:15.148787 | orchestrator | 2025-06-22 19:51:15.148820 | orchestrator | 2025-06-22 19:51:15.148830 | orchestrator | PLAY [Apply role homer] ******************************************************** 2025-06-22 19:51:15.148840 | orchestrator | 2025-06-22 19:51:15.148849 | orchestrator | TASK [osism.services.homer : 
Inform about new parameter homer_url_opensearch_dashboards] *** 2025-06-22 19:51:15.148858 | orchestrator | Sunday 22 June 2025 19:50:00 +0000 (0:00:01.033) 0:00:01.033 *********** 2025-06-22 19:51:15.148867 | orchestrator | ok: [testbed-manager] => { 2025-06-22 19:51:15.148877 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 2025-06-22 19:51:15.148887 | orchestrator | } 2025-06-22 19:51:15.148896 | orchestrator | 2025-06-22 19:51:15.148905 | orchestrator | TASK [osism.services.homer : Create traefik external network] ****************** 2025-06-22 19:51:15.148914 | orchestrator | Sunday 22 June 2025 19:50:00 +0000 (0:00:00.872) 0:00:01.905 *********** 2025-06-22 19:51:15.148923 | orchestrator | ok: [testbed-manager] 2025-06-22 19:51:15.148933 | orchestrator | 2025-06-22 19:51:15.148942 | orchestrator | TASK [osism.services.homer : Create required directories] ********************** 2025-06-22 19:51:15.148951 | orchestrator | Sunday 22 June 2025 19:50:03 +0000 (0:00:02.223) 0:00:04.129 *********** 2025-06-22 19:51:15.148959 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration) 2025-06-22 19:51:15.148968 | orchestrator | ok: [testbed-manager] => (item=/opt/homer) 2025-06-22 19:51:15.148977 | orchestrator | 2025-06-22 19:51:15.148986 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] *************** 2025-06-22 19:51:15.148995 | orchestrator | Sunday 22 June 2025 19:50:04 +0000 (0:00:01.388) 0:00:05.518 *********** 2025-06-22 19:51:15.149003 | orchestrator | changed: [testbed-manager] 2025-06-22 19:51:15.149012 | orchestrator | 2025-06-22 19:51:15.149021 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] ********************* 2025-06-22 19:51:15.149030 | orchestrator | Sunday 22 June 2025 19:50:08 +0000 (0:00:03.815) 0:00:09.333 *********** 2025-06-22 19:51:15.149039 | orchestrator | changed: [testbed-manager] 2025-06-22 19:51:15.149048 | orchestrator | 2025-06-22 19:51:15.149057 | orchestrator | TASK [osism.services.homer : Manage homer service] ***************************** 2025-06-22 19:51:15.149066 | orchestrator | Sunday 22 June 2025 19:50:10 +0000 (0:00:02.598) 0:00:11.932 *********** 2025-06-22 19:51:15.149075 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left). 
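The "FAILED - RETRYING ... (10 retries left)" line is Ansible's standard retries/until behaviour on the "Manage homer service" task: the module is re-run until its condition holds or the retry budget is exhausted, and a later attempt succeeds here, which is why the task still ends in ok rather than failed. A minimal Python sketch of the same wait pattern (the health check is a stand-in, not part of the role, and the retry accounting is simplified):

    import time

    def wait_until(check, retries=10, delay=5):
        """Re-run a check until it succeeds or the retry budget is used up."""
        for remaining in reversed(range(retries + 1)):
            if check():
                return True
            if remaining:
                print(f"FAILED - RETRYING ({remaining} retries left).")
                time.sleep(delay)
        return False

    # Hypothetical usage:
    # wait_until(lambda: container_is_healthy("homer"))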
2025-06-22 19:51:15.149099 | orchestrator | ok: [testbed-manager] 2025-06-22 19:51:15.149109 | orchestrator | 2025-06-22 19:51:15.149117 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] ***************** 2025-06-22 19:51:15.149126 | orchestrator | Sunday 22 June 2025 19:50:36 +0000 (0:00:25.547) 0:00:37.480 *********** 2025-06-22 19:51:15.149134 | orchestrator | changed: [testbed-manager] 2025-06-22 19:51:15.149143 | orchestrator | 2025-06-22 19:51:15.149152 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:51:15.149161 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:51:15.149170 | orchestrator | 2025-06-22 19:51:15.149179 | orchestrator | 2025-06-22 19:51:15.149187 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:51:15.149196 | orchestrator | Sunday 22 June 2025 19:50:38 +0000 (0:00:01.486) 0:00:38.966 *********** 2025-06-22 19:51:15.149205 | orchestrator | =============================================================================== 2025-06-22 19:51:15.149213 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 25.55s 2025-06-22 19:51:15.149222 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 3.82s 2025-06-22 19:51:15.149251 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 2.60s 2025-06-22 19:51:15.149267 | orchestrator | osism.services.homer : Create traefik external network ------------------ 2.22s 2025-06-22 19:51:15.149282 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 1.49s 2025-06-22 19:51:15.149298 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.39s 2025-06-22 19:51:15.149312 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.87s 2025-06-22 19:51:15.149324 | orchestrator | 2025-06-22 19:51:15.149333 | orchestrator | 2025-06-22 19:51:15.149342 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2025-06-22 19:51:15.149350 | orchestrator | 2025-06-22 19:51:15.149359 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2025-06-22 19:51:15.149368 | orchestrator | Sunday 22 June 2025 19:49:59 +0000 (0:00:01.184) 0:00:01.184 *********** 2025-06-22 19:51:15.149382 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2025-06-22 19:51:15.149392 | orchestrator | 2025-06-22 19:51:15.149401 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2025-06-22 19:51:15.149410 | orchestrator | Sunday 22 June 2025 19:50:00 +0000 (0:00:00.945) 0:00:02.130 *********** 2025-06-22 19:51:15.149419 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2025-06-22 19:51:15.149427 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2025-06-22 19:51:15.149436 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2025-06-22 19:51:15.149445 | orchestrator | 2025-06-22 19:51:15.149454 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2025-06-22 
19:51:15.149462 | orchestrator | Sunday 22 June 2025 19:50:03 +0000 (0:00:02.290) 0:00:04.420 *********** 2025-06-22 19:51:15.149471 | orchestrator | changed: [testbed-manager] 2025-06-22 19:51:15.149480 | orchestrator | 2025-06-22 19:51:15.149488 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2025-06-22 19:51:15.149497 | orchestrator | Sunday 22 June 2025 19:50:05 +0000 (0:00:02.369) 0:00:06.790 *********** 2025-06-22 19:51:15.149516 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2025-06-22 19:51:15.149525 | orchestrator | ok: [testbed-manager] 2025-06-22 19:51:15.149534 | orchestrator | 2025-06-22 19:51:15.149543 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2025-06-22 19:51:15.149552 | orchestrator | Sunday 22 June 2025 19:50:46 +0000 (0:00:40.898) 0:00:47.688 *********** 2025-06-22 19:51:15.149560 | orchestrator | changed: [testbed-manager] 2025-06-22 19:51:15.149575 | orchestrator | 2025-06-22 19:51:15.149584 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2025-06-22 19:51:15.149593 | orchestrator | Sunday 22 June 2025 19:50:47 +0000 (0:00:00.835) 0:00:48.523 *********** 2025-06-22 19:51:15.149602 | orchestrator | ok: [testbed-manager] 2025-06-22 19:51:15.149611 | orchestrator | 2025-06-22 19:51:15.149619 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2025-06-22 19:51:15.149628 | orchestrator | Sunday 22 June 2025 19:50:47 +0000 (0:00:00.677) 0:00:49.201 *********** 2025-06-22 19:51:15.149637 | orchestrator | changed: [testbed-manager] 2025-06-22 19:51:15.149646 | orchestrator | 2025-06-22 19:51:15.149654 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2025-06-22 19:51:15.149663 | orchestrator | Sunday 22 June 2025 19:50:49 +0000 (0:00:01.781) 0:00:50.983 *********** 2025-06-22 19:51:15.149672 | orchestrator | changed: [testbed-manager] 2025-06-22 19:51:15.149680 | orchestrator | 2025-06-22 19:51:15.149689 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2025-06-22 19:51:15.149698 | orchestrator | Sunday 22 June 2025 19:50:50 +0000 (0:00:01.047) 0:00:52.030 *********** 2025-06-22 19:51:15.149707 | orchestrator | changed: [testbed-manager] 2025-06-22 19:51:15.149715 | orchestrator | 2025-06-22 19:51:15.149724 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2025-06-22 19:51:15.149733 | orchestrator | Sunday 22 June 2025 19:50:51 +0000 (0:00:00.651) 0:00:52.682 *********** 2025-06-22 19:51:15.149741 | orchestrator | ok: [testbed-manager] 2025-06-22 19:51:15.149750 | orchestrator | 2025-06-22 19:51:15.149759 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:51:15.149768 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:51:15.149776 | orchestrator | 2025-06-22 19:51:15.149785 | orchestrator | 2025-06-22 19:51:15.149794 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:51:15.149803 | orchestrator | Sunday 22 June 2025 19:50:51 +0000 (0:00:00.408) 0:00:53.090 *********** 2025-06-22 19:51:15.149811 | orchestrator | 
=============================================================================== 2025-06-22 19:51:15.149820 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 40.90s 2025-06-22 19:51:15.149829 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.37s 2025-06-22 19:51:15.149837 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.29s 2025-06-22 19:51:15.149846 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 1.78s 2025-06-22 19:51:15.149855 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.05s 2025-06-22 19:51:15.149863 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.95s 2025-06-22 19:51:15.149872 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 0.84s 2025-06-22 19:51:15.149881 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.68s 2025-06-22 19:51:15.149889 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.65s 2025-06-22 19:51:15.149898 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.41s 2025-06-22 19:51:15.149907 | orchestrator | 2025-06-22 19:51:15.149915 | orchestrator | 2025-06-22 19:51:15.149924 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-22 19:51:15.149933 | orchestrator | 2025-06-22 19:51:15.149941 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-22 19:51:15.149950 | orchestrator | Sunday 22 June 2025 19:49:58 +0000 (0:00:00.494) 0:00:00.499 *********** 2025-06-22 19:51:15.149959 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2025-06-22 19:51:15.149967 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2025-06-22 19:51:15.149976 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2025-06-22 19:51:15.149989 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2025-06-22 19:51:15.149998 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2025-06-22 19:51:15.150007 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2025-06-22 19:51:15.150130 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2025-06-22 19:51:15.150145 | orchestrator | 2025-06-22 19:51:15.150154 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2025-06-22 19:51:15.150163 | orchestrator | 2025-06-22 19:51:15.150172 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2025-06-22 19:51:15.150207 | orchestrator | Sunday 22 June 2025 19:50:02 +0000 (0:00:03.817) 0:00:04.317 *********** 2025-06-22 19:51:15.150227 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 19:51:15.150259 | orchestrator | 2025-06-22 19:51:15.150268 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2025-06-22 19:51:15.150277 | orchestrator | Sunday 22 June 2025 19:50:06 +0000 (0:00:04.417) 0:00:08.734 *********** 2025-06-22 
19:51:15.150286 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:51:15.150295 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:51:15.150304 | orchestrator | ok: [testbed-manager] 2025-06-22 19:51:15.150312 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:51:15.150321 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:51:15.150337 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:51:15.150346 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:51:15.150355 | orchestrator | 2025-06-22 19:51:15.150364 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2025-06-22 19:51:15.150373 | orchestrator | Sunday 22 June 2025 19:50:09 +0000 (0:00:02.996) 0:00:11.731 *********** 2025-06-22 19:51:15.150382 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:51:15.150391 | orchestrator | ok: [testbed-manager] 2025-06-22 19:51:15.150399 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:51:15.150408 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:51:15.150417 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:51:15.150425 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:51:15.150434 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:51:15.150443 | orchestrator | 2025-06-22 19:51:15.150451 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2025-06-22 19:51:15.150460 | orchestrator | Sunday 22 June 2025 19:50:14 +0000 (0:00:05.229) 0:00:16.961 *********** 2025-06-22 19:51:15.150469 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:51:15.150477 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:51:15.150486 | orchestrator | changed: [testbed-manager] 2025-06-22 19:51:15.150495 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:51:15.150504 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:51:15.150512 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:51:15.150521 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:51:15.150529 | orchestrator | 2025-06-22 19:51:15.150538 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2025-06-22 19:51:15.150547 | orchestrator | Sunday 22 June 2025 19:50:19 +0000 (0:00:04.321) 0:00:21.282 *********** 2025-06-22 19:51:15.150556 | orchestrator | changed: [testbed-manager] 2025-06-22 19:51:15.150564 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:51:15.150573 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:51:15.150581 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:51:15.150590 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:51:15.150599 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:51:15.150607 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:51:15.150616 | orchestrator | 2025-06-22 19:51:15.150625 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2025-06-22 19:51:15.150634 | orchestrator | Sunday 22 June 2025 19:50:29 +0000 (0:00:10.656) 0:00:31.939 *********** 2025-06-22 19:51:15.150653 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:51:15.150662 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:51:15.150671 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:51:15.150679 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:51:15.150688 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:51:15.150696 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:51:15.150705 | orchestrator | changed: [testbed-manager] 2025-06-22 19:51:15.150714 | 
orchestrator | 2025-06-22 19:51:15.150722 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2025-06-22 19:51:15.150731 | orchestrator | Sunday 22 June 2025 19:50:51 +0000 (0:00:21.433) 0:00:53.372 *********** 2025-06-22 19:51:15.150740 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 19:51:15.150750 | orchestrator | 2025-06-22 19:51:15.150759 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2025-06-22 19:51:15.150768 | orchestrator | Sunday 22 June 2025 19:50:53 +0000 (0:00:01.614) 0:00:54.987 *********** 2025-06-22 19:51:15.150776 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2025-06-22 19:51:15.150785 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2025-06-22 19:51:15.150794 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2025-06-22 19:51:15.150802 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2025-06-22 19:51:15.150811 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2025-06-22 19:51:15.150820 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2025-06-22 19:51:15.150828 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2025-06-22 19:51:15.150837 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2025-06-22 19:51:15.150846 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2025-06-22 19:51:15.150854 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2025-06-22 19:51:15.150863 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2025-06-22 19:51:15.150871 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2025-06-22 19:51:15.150880 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2025-06-22 19:51:15.150888 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2025-06-22 19:51:15.150897 | orchestrator | 2025-06-22 19:51:15.150910 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2025-06-22 19:51:15.150919 | orchestrator | Sunday 22 June 2025 19:50:59 +0000 (0:00:06.163) 0:01:01.151 *********** 2025-06-22 19:51:15.150928 | orchestrator | ok: [testbed-manager] 2025-06-22 19:51:15.150937 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:51:15.150946 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:51:15.150954 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:51:15.150963 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:51:15.150972 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:51:15.150980 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:51:15.150989 | orchestrator | 2025-06-22 19:51:15.150997 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2025-06-22 19:51:15.151006 | orchestrator | Sunday 22 June 2025 19:51:00 +0000 (0:00:01.329) 0:01:02.480 *********** 2025-06-22 19:51:15.151015 | orchestrator | changed: [testbed-manager] 2025-06-22 19:51:15.151023 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:51:15.151032 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:51:15.151041 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:51:15.151049 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:51:15.151058 | orchestrator | 
changed: [testbed-node-4] 2025-06-22 19:51:15.151067 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:51:15.151075 | orchestrator | 2025-06-22 19:51:15.151084 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] *************** 2025-06-22 19:51:15.151098 | orchestrator | Sunday 22 June 2025 19:51:02 +0000 (0:00:01.698) 0:01:04.179 *********** 2025-06-22 19:51:15.151112 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:51:15.151121 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:51:15.151130 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:51:15.151138 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:51:15.151147 | orchestrator | ok: [testbed-manager] 2025-06-22 19:51:15.151155 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:51:15.151164 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:51:15.151173 | orchestrator | 2025-06-22 19:51:15.151182 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2025-06-22 19:51:15.151190 | orchestrator | Sunday 22 June 2025 19:51:04 +0000 (0:00:01.835) 0:01:06.015 *********** 2025-06-22 19:51:15.151199 | orchestrator | ok: [testbed-manager] 2025-06-22 19:51:15.151208 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:51:15.151216 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:51:15.151225 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:51:15.151280 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:51:15.151290 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:51:15.151299 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:51:15.151307 | orchestrator | 2025-06-22 19:51:15.151316 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2025-06-22 19:51:15.151325 | orchestrator | Sunday 22 June 2025 19:51:06 +0000 (0:00:02.182) 0:01:08.197 *********** 2025-06-22 19:51:15.151334 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2025-06-22 19:51:15.151344 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 19:51:15.151353 | orchestrator | 2025-06-22 19:51:15.151362 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2025-06-22 19:51:15.151370 | orchestrator | Sunday 22 June 2025 19:51:07 +0000 (0:00:01.271) 0:01:09.469 *********** 2025-06-22 19:51:15.151379 | orchestrator | changed: [testbed-manager] 2025-06-22 19:51:15.151388 | orchestrator | 2025-06-22 19:51:15.151397 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2025-06-22 19:51:15.151405 | orchestrator | Sunday 22 June 2025 19:51:09 +0000 (0:00:01.672) 0:01:11.141 *********** 2025-06-22 19:51:15.151414 | orchestrator | changed: [testbed-manager] 2025-06-22 19:51:15.151423 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:51:15.151432 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:51:15.151440 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:51:15.151449 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:51:15.151457 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:51:15.151466 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:51:15.151474 | orchestrator | 2025-06-22 19:51:15.151483 | orchestrator | PLAY RECAP 
********************************************************************* 2025-06-22 19:51:15.151492 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:51:15.151501 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:51:15.151510 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:51:15.151519 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:51:15.151528 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:51:15.151537 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:51:15.151550 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:51:15.151559 | orchestrator | 2025-06-22 19:51:15.151568 | orchestrator | 2025-06-22 19:51:15.151577 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:51:15.151586 | orchestrator | Sunday 22 June 2025 19:51:12 +0000 (0:00:03.703) 0:01:14.845 *********** 2025-06-22 19:51:15.151598 | orchestrator | =============================================================================== 2025-06-22 19:51:15.151607 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 21.43s 2025-06-22 19:51:15.151616 | orchestrator | osism.services.netdata : Add repository -------------------------------- 10.66s 2025-06-22 19:51:15.151624 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 6.16s 2025-06-22 19:51:15.151633 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 5.23s 2025-06-22 19:51:15.151642 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 4.42s 2025-06-22 19:51:15.151651 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 4.32s 2025-06-22 19:51:15.151659 | orchestrator | Group hosts based on enabled services ----------------------------------- 3.81s 2025-06-22 19:51:15.151668 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.70s 2025-06-22 19:51:15.151676 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 3.00s 2025-06-22 19:51:15.151685 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.18s 2025-06-22 19:51:15.151694 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.84s 2025-06-22 19:51:15.151708 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.70s 2025-06-22 19:51:15.151718 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 1.67s 2025-06-22 19:51:15.151726 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.61s 2025-06-22 19:51:15.151735 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.33s 2025-06-22 19:51:15.151744 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.27s 2025-06-22 19:51:15.151753 | orchestrator | 2025-06-22 19:51:15 | INFO  | Task 
259e6f48-b1f2-4431-8a4b-eb1e83e84d86 is in state STARTED 2025-06-22 19:51:15.152058 | orchestrator | 2025-06-22 19:51:15 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:51:15.152112 | orchestrator | 2025-06-22 19:51:15 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:51:18.186210 | orchestrator | 2025-06-22 19:51:18 | INFO  | Task 9026e2ed-4334-4392-b8b6-d8ee8826b021 is in state STARTED 2025-06-22 19:51:18.188855 | orchestrator | 2025-06-22 19:51:18 | INFO  | Task 6cf8e00e-92c6-4a54-a626-fb6881f79e1f is in state STARTED 2025-06-22 19:51:18.190299 | orchestrator | 2025-06-22 19:51:18 | INFO  | Task 259e6f48-b1f2-4431-8a4b-eb1e83e84d86 is in state STARTED 2025-06-22 19:51:18.191531 | orchestrator | 2025-06-22 19:51:18 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:51:18.191555 | orchestrator | 2025-06-22 19:51:18 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:51:21.240548 | orchestrator | 2025-06-22 19:51:21 | INFO  | Task 9026e2ed-4334-4392-b8b6-d8ee8826b021 is in state STARTED 2025-06-22 19:51:21.243193 | orchestrator | 2025-06-22 19:51:21 | INFO  | Task 6cf8e00e-92c6-4a54-a626-fb6881f79e1f is in state STARTED 2025-06-22 19:51:21.243222 | orchestrator | 2025-06-22 19:51:21 | INFO  | Task 259e6f48-b1f2-4431-8a4b-eb1e83e84d86 is in state STARTED 2025-06-22 19:51:21.245545 | orchestrator | 2025-06-22 19:51:21 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:51:21.245570 | orchestrator | 2025-06-22 19:51:21 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:51:24.281058 | orchestrator | 2025-06-22 19:51:24 | INFO  | Task 9026e2ed-4334-4392-b8b6-d8ee8826b021 is in state STARTED 2025-06-22 19:51:24.282980 | orchestrator | 2025-06-22 19:51:24 | INFO  | Task 6cf8e00e-92c6-4a54-a626-fb6881f79e1f is in state STARTED 2025-06-22 19:51:24.283218 | orchestrator | 2025-06-22 19:51:24 | INFO  | Task 259e6f48-b1f2-4431-8a4b-eb1e83e84d86 is in state STARTED 2025-06-22 19:51:24.285455 | orchestrator | 2025-06-22 19:51:24 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:51:24.285474 | orchestrator | 2025-06-22 19:51:24 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:51:27.318224 | orchestrator | 2025-06-22 19:51:27 | INFO  | Task 9026e2ed-4334-4392-b8b6-d8ee8826b021 is in state STARTED 2025-06-22 19:51:27.319952 | orchestrator | 2025-06-22 19:51:27 | INFO  | Task 6cf8e00e-92c6-4a54-a626-fb6881f79e1f is in state STARTED 2025-06-22 19:51:27.323157 | orchestrator | 2025-06-22 19:51:27 | INFO  | Task 259e6f48-b1f2-4431-8a4b-eb1e83e84d86 is in state STARTED 2025-06-22 19:51:27.325285 | orchestrator | 2025-06-22 19:51:27 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:51:27.325651 | orchestrator | 2025-06-22 19:51:27 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:51:30.377577 | orchestrator | 2025-06-22 19:51:30 | INFO  | Task 9026e2ed-4334-4392-b8b6-d8ee8826b021 is in state STARTED 2025-06-22 19:51:30.379178 | orchestrator | 2025-06-22 19:51:30 | INFO  | Task 6cf8e00e-92c6-4a54-a626-fb6881f79e1f is in state STARTED 2025-06-22 19:51:30.379475 | orchestrator | 2025-06-22 19:51:30 | INFO  | Task 259e6f48-b1f2-4431-8a4b-eb1e83e84d86 is in state STARTED 2025-06-22 19:51:30.381077 | orchestrator | 2025-06-22 19:51:30 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:51:30.381103 | orchestrator | 2025-06-22 19:51:30 | INFO  | Wait 1 
second(s) until the next check 2025-06-22 19:51:33.423759 | orchestrator | 2025-06-22 19:51:33 | INFO  | Task 9026e2ed-4334-4392-b8b6-d8ee8826b021 is in state STARTED 2025-06-22 19:51:33.425445 | orchestrator | 2025-06-22 19:51:33 | INFO  | Task 6cf8e00e-92c6-4a54-a626-fb6881f79e1f is in state STARTED 2025-06-22 19:51:33.426983 | orchestrator | 2025-06-22 19:51:33 | INFO  | Task 259e6f48-b1f2-4431-8a4b-eb1e83e84d86 is in state STARTED 2025-06-22 19:51:33.428327 | orchestrator | 2025-06-22 19:51:33 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:51:33.428456 | orchestrator | 2025-06-22 19:51:33 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:51:36.472965 | orchestrator | 2025-06-22 19:51:36 | INFO  | Task 9026e2ed-4334-4392-b8b6-d8ee8826b021 is in state STARTED 2025-06-22 19:51:36.473937 | orchestrator | 2025-06-22 19:51:36 | INFO  | Task 6cf8e00e-92c6-4a54-a626-fb6881f79e1f is in state STARTED 2025-06-22 19:51:36.476340 | orchestrator | 2025-06-22 19:51:36 | INFO  | Task 259e6f48-b1f2-4431-8a4b-eb1e83e84d86 is in state STARTED 2025-06-22 19:51:36.476757 | orchestrator | 2025-06-22 19:51:36 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:51:36.478335 | orchestrator | 2025-06-22 19:51:36 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:51:39.530693 | orchestrator | 2025-06-22 19:51:39 | INFO  | Task 9026e2ed-4334-4392-b8b6-d8ee8826b021 is in state STARTED 2025-06-22 19:51:39.530974 | orchestrator | 2025-06-22 19:51:39 | INFO  | Task 6cf8e00e-92c6-4a54-a626-fb6881f79e1f is in state STARTED 2025-06-22 19:51:39.532312 | orchestrator | 2025-06-22 19:51:39 | INFO  | Task 259e6f48-b1f2-4431-8a4b-eb1e83e84d86 is in state STARTED 2025-06-22 19:51:39.532847 | orchestrator | 2025-06-22 19:51:39 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:51:39.532871 | orchestrator | 2025-06-22 19:51:39 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:51:42.581141 | orchestrator | 2025-06-22 19:51:42 | INFO  | Task 9026e2ed-4334-4392-b8b6-d8ee8826b021 is in state STARTED 2025-06-22 19:51:42.582403 | orchestrator | 2025-06-22 19:51:42 | INFO  | Task 6cf8e00e-92c6-4a54-a626-fb6881f79e1f is in state STARTED 2025-06-22 19:51:42.583997 | orchestrator | 2025-06-22 19:51:42 | INFO  | Task 259e6f48-b1f2-4431-8a4b-eb1e83e84d86 is in state STARTED 2025-06-22 19:51:42.585230 | orchestrator | 2025-06-22 19:51:42 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:51:42.585396 | orchestrator | 2025-06-22 19:51:42 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:51:45.626924 | orchestrator | 2025-06-22 19:51:45 | INFO  | Task 9026e2ed-4334-4392-b8b6-d8ee8826b021 is in state SUCCESS 2025-06-22 19:51:45.629353 | orchestrator | 2025-06-22 19:51:45 | INFO  | Task 6cf8e00e-92c6-4a54-a626-fb6881f79e1f is in state STARTED 2025-06-22 19:51:45.632277 | orchestrator | 2025-06-22 19:51:45 | INFO  | Task 259e6f48-b1f2-4431-8a4b-eb1e83e84d86 is in state STARTED 2025-06-22 19:51:45.634176 | orchestrator | 2025-06-22 19:51:45 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:51:45.634201 | orchestrator | 2025-06-22 19:51:45 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:51:48.673897 | orchestrator | 2025-06-22 19:51:48 | INFO  | Task 6cf8e00e-92c6-4a54-a626-fb6881f79e1f is in state STARTED 2025-06-22 19:51:48.673967 | orchestrator | 2025-06-22 19:51:48 | INFO  | Task 
259e6f48-b1f2-4431-8a4b-eb1e83e84d86 is in state STARTED 2025-06-22 19:51:48.675325 | orchestrator | 2025-06-22 19:51:48 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:51:48.675356 | orchestrator | 2025-06-22 19:51:48 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:51:51.719328 | orchestrator | 2025-06-22 19:51:51 | INFO  | Task 6cf8e00e-92c6-4a54-a626-fb6881f79e1f is in state STARTED 2025-06-22 19:51:51.719515 | orchestrator | 2025-06-22 19:51:51 | INFO  | Task 259e6f48-b1f2-4431-8a4b-eb1e83e84d86 is in state STARTED 2025-06-22 19:51:51.721337 | orchestrator | 2025-06-22 19:51:51 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:51:51.721405 | orchestrator | 2025-06-22 19:51:51 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:51:54.767155 | orchestrator | 2025-06-22 19:51:54 | INFO  | Task 6cf8e00e-92c6-4a54-a626-fb6881f79e1f is in state STARTED 2025-06-22 19:51:54.769704 | orchestrator | 2025-06-22 19:51:54 | INFO  | Task 259e6f48-b1f2-4431-8a4b-eb1e83e84d86 is in state STARTED 2025-06-22 19:51:54.772085 | orchestrator | 2025-06-22 19:51:54 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:51:54.772357 | orchestrator | 2025-06-22 19:51:54 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:51:57.804841 | orchestrator | 2025-06-22 19:51:57 | INFO  | Task 6cf8e00e-92c6-4a54-a626-fb6881f79e1f is in state STARTED 2025-06-22 19:51:57.807085 | orchestrator | 2025-06-22 19:51:57 | INFO  | Task 259e6f48-b1f2-4431-8a4b-eb1e83e84d86 is in state STARTED 2025-06-22 19:51:57.807714 | orchestrator | 2025-06-22 19:51:57 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:51:57.807881 | orchestrator | 2025-06-22 19:51:57 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:52:00.854341 | orchestrator | 2025-06-22 19:52:00 | INFO  | Task 6cf8e00e-92c6-4a54-a626-fb6881f79e1f is in state STARTED 2025-06-22 19:52:00.855855 | orchestrator | 2025-06-22 19:52:00 | INFO  | Task 259e6f48-b1f2-4431-8a4b-eb1e83e84d86 is in state STARTED 2025-06-22 19:52:00.858365 | orchestrator | 2025-06-22 19:52:00 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:52:00.858398 | orchestrator | 2025-06-22 19:52:00 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:52:03.894653 | orchestrator | 2025-06-22 19:52:03 | INFO  | Task 6cf8e00e-92c6-4a54-a626-fb6881f79e1f is in state STARTED 2025-06-22 19:52:03.896729 | orchestrator | 2025-06-22 19:52:03 | INFO  | Task 259e6f48-b1f2-4431-8a4b-eb1e83e84d86 is in state STARTED 2025-06-22 19:52:03.898217 | orchestrator | 2025-06-22 19:52:03 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:52:03.898245 | orchestrator | 2025-06-22 19:52:03 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:52:06.933914 | orchestrator | 2025-06-22 19:52:06 | INFO  | Task 6cf8e00e-92c6-4a54-a626-fb6881f79e1f is in state STARTED 2025-06-22 19:52:06.935546 | orchestrator | 2025-06-22 19:52:06 | INFO  | Task 259e6f48-b1f2-4431-8a4b-eb1e83e84d86 is in state STARTED 2025-06-22 19:52:06.935910 | orchestrator | 2025-06-22 19:52:06 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:52:06.936003 | orchestrator | 2025-06-22 19:52:06 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:52:09.982202 | orchestrator | 2025-06-22 19:52:09 | INFO  | Task 6cf8e00e-92c6-4a54-a626-fb6881f79e1f is in state 
STARTED 2025-06-22 19:52:09.982897 | orchestrator | 2025-06-22 19:52:09 | INFO  | Task 259e6f48-b1f2-4431-8a4b-eb1e83e84d86 is in state STARTED 2025-06-22 19:52:09.985998 | orchestrator | 2025-06-22 19:52:09 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:52:09.986084 | orchestrator | 2025-06-22 19:52:09 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:52:13.029404 | orchestrator | 2025-06-22 19:52:13 | INFO  | Task 6cf8e00e-92c6-4a54-a626-fb6881f79e1f is in state STARTED 2025-06-22 19:52:13.031462 | orchestrator | 2025-06-22 19:52:13 | INFO  | Task 259e6f48-b1f2-4431-8a4b-eb1e83e84d86 is in state STARTED 2025-06-22 19:52:13.033599 | orchestrator | 2025-06-22 19:52:13 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:52:13.033847 | orchestrator | 2025-06-22 19:52:13 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:52:16.069883 | orchestrator | 2025-06-22 19:52:16 | INFO  | Task 6cf8e00e-92c6-4a54-a626-fb6881f79e1f is in state STARTED 2025-06-22 19:52:16.071331 | orchestrator | 2025-06-22 19:52:16 | INFO  | Task 259e6f48-b1f2-4431-8a4b-eb1e83e84d86 is in state STARTED 2025-06-22 19:52:16.072978 | orchestrator | 2025-06-22 19:52:16 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:52:16.073255 | orchestrator | 2025-06-22 19:52:16 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:52:19.202787 | orchestrator | 2025-06-22 19:52:19 | INFO  | Task 6cf8e00e-92c6-4a54-a626-fb6881f79e1f is in state STARTED 2025-06-22 19:52:19.204168 | orchestrator | 2025-06-22 19:52:19 | INFO  | Task 259e6f48-b1f2-4431-8a4b-eb1e83e84d86 is in state STARTED 2025-06-22 19:52:19.207421 | orchestrator | 2025-06-22 19:52:19 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:52:19.207470 | orchestrator | 2025-06-22 19:52:19 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:52:22.254603 | orchestrator | 2025-06-22 19:52:22 | INFO  | Task 6cf8e00e-92c6-4a54-a626-fb6881f79e1f is in state STARTED 2025-06-22 19:52:22.257902 | orchestrator | 2025-06-22 19:52:22 | INFO  | Task 259e6f48-b1f2-4431-8a4b-eb1e83e84d86 is in state STARTED 2025-06-22 19:52:22.259261 | orchestrator | 2025-06-22 19:52:22 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:52:22.259313 | orchestrator | 2025-06-22 19:52:22 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:52:25.295317 | orchestrator | 2025-06-22 19:52:25 | INFO  | Task 6cf8e00e-92c6-4a54-a626-fb6881f79e1f is in state STARTED 2025-06-22 19:52:25.297303 | orchestrator | 2025-06-22 19:52:25 | INFO  | Task 259e6f48-b1f2-4431-8a4b-eb1e83e84d86 is in state STARTED 2025-06-22 19:52:25.297339 | orchestrator | 2025-06-22 19:52:25 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:52:25.297345 | orchestrator | 2025-06-22 19:52:25 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:52:28.357997 | orchestrator | 2025-06-22 19:52:28 | INFO  | Task 6cf8e00e-92c6-4a54-a626-fb6881f79e1f is in state STARTED 2025-06-22 19:52:28.360836 | orchestrator | 2025-06-22 19:52:28 | INFO  | Task 259e6f48-b1f2-4431-8a4b-eb1e83e84d86 is in state STARTED 2025-06-22 19:52:28.362123 | orchestrator | 2025-06-22 19:52:28 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:52:28.362169 | orchestrator | 2025-06-22 19:52:28 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:52:31.421617 | orchestrator 
| 2025-06-22 19:52:31 | INFO  | Task 6cf8e00e-92c6-4a54-a626-fb6881f79e1f is in state STARTED 2025-06-22 19:52:31.423423 | orchestrator | 2025-06-22 19:52:31 | INFO  | Task 259e6f48-b1f2-4431-8a4b-eb1e83e84d86 is in state STARTED 2025-06-22 19:52:31.424726 | orchestrator | 2025-06-22 19:52:31 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:52:31.425484 | orchestrator | 2025-06-22 19:52:31 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:52:34.472879 | orchestrator | 2025-06-22 19:52:34 | INFO  | Task 6cf8e00e-92c6-4a54-a626-fb6881f79e1f is in state STARTED 2025-06-22 19:52:34.474662 | orchestrator | 2025-06-22 19:52:34 | INFO  | Task 259e6f48-b1f2-4431-8a4b-eb1e83e84d86 is in state STARTED 2025-06-22 19:52:34.476447 | orchestrator | 2025-06-22 19:52:34 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:52:34.476912 | orchestrator | 2025-06-22 19:52:34 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:52:37.526737 | orchestrator | 2025-06-22 19:52:37 | INFO  | Task 6cf8e00e-92c6-4a54-a626-fb6881f79e1f is in state STARTED 2025-06-22 19:52:37.536845 | orchestrator | 2025-06-22 19:52:37.536927 | orchestrator | 2025-06-22 19:52:37.536950 | orchestrator | PLAY [Apply role phpmyadmin] *************************************************** 2025-06-22 19:52:37.536973 | orchestrator | 2025-06-22 19:52:37.536995 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] ************* 2025-06-22 19:52:37.537015 | orchestrator | Sunday 22 June 2025 19:50:26 +0000 (0:00:00.286) 0:00:00.286 *********** 2025-06-22 19:52:37.537035 | orchestrator | ok: [testbed-manager] 2025-06-22 19:52:37.537052 | orchestrator | 2025-06-22 19:52:37.537063 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] ***************** 2025-06-22 19:52:37.537074 | orchestrator | Sunday 22 June 2025 19:50:27 +0000 (0:00:01.071) 0:00:01.357 *********** 2025-06-22 19:52:37.537086 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin) 2025-06-22 19:52:37.537097 | orchestrator | 2025-06-22 19:52:37.537108 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] **************** 2025-06-22 19:52:37.537119 | orchestrator | Sunday 22 June 2025 19:50:28 +0000 (0:00:00.712) 0:00:02.069 *********** 2025-06-22 19:52:37.537134 | orchestrator | changed: [testbed-manager] 2025-06-22 19:52:37.537152 | orchestrator | 2025-06-22 19:52:37.537171 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] ******************* 2025-06-22 19:52:37.537219 | orchestrator | Sunday 22 June 2025 19:50:29 +0000 (0:00:01.640) 0:00:03.710 *********** 2025-06-22 19:52:37.537231 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left). 
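
The phpmyadmin play is compact: it ensures the external traefik network exists, creates /opt/phpmyadmin, copies a docker-compose.yml into it and then brings the service up; the single "FAILED - RETRYING ... (10 retries left)" above is the role waiting for the freshly pulled container to come up before the task finally reports ok. The compose file itself is not printed in the log, so the sketch below only illustrates the general shape of such a deployment; image tag, router rule and hostname are placeholders, not values taken from the role.

```yaml
# Hypothetical /opt/phpmyadmin/docker-compose.yml - the real file is templated
# by osism.services.phpmyadmin and does not appear in this log.
services:
  phpmyadmin:
    image: phpmyadmin:latest                     # placeholder tag
    restart: unless-stopped
    environment:
      PMA_ARBITRARY: "1"                         # allow connecting to an arbitrary database host
    networks:
      - traefik
    labels:
      traefik.enable: "true"
      traefik.http.routers.phpmyadmin.rule: "Host(`phpmyadmin.example.test`)"   # placeholder hostname

networks:
  traefik:
    external: true                               # pre-created by "Create traefik external network"
```

Declaring the network as external is why the role creates it in a separate task first: docker compose then only attaches the container to it instead of owning its lifecycle. The 69.85s recorded for "Manage phpmyadmin service" in the recap below is plausibly dominated by the image pull and the retry interval rather than by an actual failure.
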
2025-06-22 19:52:37.537243 | orchestrator | ok: [testbed-manager] 2025-06-22 19:52:37.537254 | orchestrator | 2025-06-22 19:52:37.537264 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] ******* 2025-06-22 19:52:37.537275 | orchestrator | Sunday 22 June 2025 19:51:39 +0000 (0:01:09.853) 0:01:13.564 *********** 2025-06-22 19:52:37.537319 | orchestrator | changed: [testbed-manager] 2025-06-22 19:52:37.537331 | orchestrator | 2025-06-22 19:52:37.537341 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:52:37.537352 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:52:37.537366 | orchestrator | 2025-06-22 19:52:37.537377 | orchestrator | 2025-06-22 19:52:37.537388 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:52:37.537399 | orchestrator | Sunday 22 June 2025 19:51:43 +0000 (0:00:03.407) 0:01:16.971 *********** 2025-06-22 19:52:37.537411 | orchestrator | =============================================================================== 2025-06-22 19:52:37.537423 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 69.85s 2025-06-22 19:52:37.537436 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 3.41s 2025-06-22 19:52:37.537448 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.64s 2025-06-22 19:52:37.537460 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 1.07s 2025-06-22 19:52:37.537472 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.71s 2025-06-22 19:52:37.537485 | orchestrator | 2025-06-22 19:52:37.537496 | orchestrator | 2025-06-22 19:52:37.537508 | orchestrator | PLAY [Apply role common] ******************************************************* 2025-06-22 19:52:37.537520 | orchestrator | 2025-06-22 19:52:37.537532 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-06-22 19:52:37.537544 | orchestrator | Sunday 22 June 2025 19:49:49 +0000 (0:00:00.308) 0:00:00.308 *********** 2025-06-22 19:52:37.537556 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 19:52:37.537569 | orchestrator | 2025-06-22 19:52:37.537581 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2025-06-22 19:52:37.537594 | orchestrator | Sunday 22 June 2025 19:49:51 +0000 (0:00:01.584) 0:00:01.893 *********** 2025-06-22 19:52:37.537606 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2025-06-22 19:52:37.537618 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2025-06-22 19:52:37.537630 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2025-06-22 19:52:37.537641 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2025-06-22 19:52:37.537653 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-06-22 19:52:37.537666 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-06-22 19:52:37.537677 | orchestrator | changed: 
[testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2025-06-22 19:52:37.537758 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-06-22 19:52:37.537773 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-06-22 19:52:37.537786 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-06-22 19:52:37.537809 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2025-06-22 19:52:37.537821 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-06-22 19:52:37.537841 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-06-22 19:52:37.537852 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-06-22 19:52:37.537863 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-06-22 19:52:37.537874 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2025-06-22 19:52:37.537903 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-06-22 19:52:37.537914 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-06-22 19:52:37.537925 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-06-22 19:52:37.537936 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-06-22 19:52:37.537947 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-06-22 19:52:37.537958 | orchestrator | 2025-06-22 19:52:37.537969 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-06-22 19:52:37.537980 | orchestrator | Sunday 22 June 2025 19:49:57 +0000 (0:00:06.553) 0:00:08.446 *********** 2025-06-22 19:52:37.537991 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 19:52:37.538004 | orchestrator | 2025-06-22 19:52:37.538087 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2025-06-22 19:52:37.538101 | orchestrator | Sunday 22 June 2025 19:49:59 +0000 (0:00:01.771) 0:00:10.218 *********** 2025-06-22 19:52:37.538117 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-22 19:52:37.538138 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-22 19:52:37.538150 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-22 19:52:37.538162 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-22 19:52:37.538182 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:52:37.538211 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-22 19:52:37.538223 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:52:37.538235 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:52:37.538250 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:52:37.538262 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-22 19:52:37.538273 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-22 19:52:37.538347 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:52:37.538379 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:52:37.538391 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:52:37.538403 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:52:37.538414 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:52:37.538431 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:52:37.538444 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:52:37.538462 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:52:37.538473 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:52:37.538485 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:52:37.538496 | orchestrator | 2025-06-22 19:52:37.538508 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2025-06-22 19:52:37.538526 | orchestrator | Sunday 22 June 2025 19:50:05 +0000 (0:00:06.514) 0:00:16.732 *********** 2025-06-22 19:52:37.538537 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-22 19:52:37.538549 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:52:37.538566 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:52:37.538578 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:52:37.538590 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-22 19:52:37.538602 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:52:37.538619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:52:37.538631 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:52:37.538642 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-22 19:52:37.538670 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:52:37.538682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:52:37.538693 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:52:37.538705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-22 19:52:37.538720 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:52:37.538732 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:52:37.538750 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:52:37.538761 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-22 19:52:37.538773 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:52:37.538790 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:52:37.538801 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:52:37.538813 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-22 19:52:37.538824 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:52:37.538840 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:52:37.538851 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:52:37.538863 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-22 19:52:37.538881 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:52:37.538893 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:52:37.538905 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:52:37.538916 | orchestrator | 2025-06-22 19:52:37.538927 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2025-06-22 19:52:37.538938 | orchestrator | Sunday 22 June 2025 19:50:07 +0000 (0:00:01.632) 0:00:18.365 *********** 2025-06-22 19:52:37.538949 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-22 19:52:37.538976 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:52:37.538997 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:52:37.539018 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:52:37.539036 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-22 19:52:37.539056 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:52:37.539068 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:52:37.539080 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:52:37.539091 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-22 19:52:37.539102 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:52:37.539120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:52:37.539132 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:52:37.539149 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-22 19:52:37.539161 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:52:37.539183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:52:37.539195 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:52:37.539206 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-22 19:52:37.539217 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:52:37.539229 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:52:37.539240 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:52:37.539251 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-22 19:52:37.539271 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:52:37.539344 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:52:37.539369 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:52:37.539386 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-06-22 19:52:37.539398 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': 
'/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:52:37.539410 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:52:37.539421 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:52:37.539432 | orchestrator | 2025-06-22 19:52:37.539444 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2025-06-22 19:52:37.539455 | orchestrator | Sunday 22 June 2025 19:50:11 +0000 (0:00:03.598) 0:00:21.963 *********** 2025-06-22 19:52:37.539466 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:52:37.539477 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:52:37.539488 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:52:37.539499 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:52:37.539510 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:52:37.539520 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:52:37.539531 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:52:37.539542 | orchestrator | 2025-06-22 19:52:37.539553 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2025-06-22 19:52:37.539564 | orchestrator | Sunday 22 June 2025 19:50:12 +0000 (0:00:01.295) 0:00:23.259 *********** 2025-06-22 19:52:37.539575 | orchestrator | skipping: [testbed-manager] 2025-06-22 19:52:37.539586 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:52:37.539596 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:52:37.539607 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:52:37.539618 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:52:37.539629 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:52:37.539640 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:52:37.539650 | orchestrator | 2025-06-22 19:52:37.539661 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2025-06-22 19:52:37.539672 | orchestrator | Sunday 22 June 2025 19:50:13 +0000 (0:00:01.555) 0:00:24.814 *********** 2025-06-22 19:52:37.539744 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-22 19:52:37.539771 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-22 19:52:37.539783 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-22 19:52:37.539800 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-22 19:52:37.539812 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-22 19:52:37.539824 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:52:37.539835 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:52:37.539854 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:52:37.539870 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-22 19:52:37.539880 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:52:37.539895 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:52:37.539905 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:52:37.539916 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-22 19:52:37.539926 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:52:37.539936 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:52:37.539960 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:52:37.539976 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:52:37.539986 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:52:37.540001 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:52:37.540012 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:52:37.540022 | orchestrator | changed: 
[testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:52:37.540032 | orchestrator | 2025-06-22 19:52:37.540042 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2025-06-22 19:52:37.540052 | orchestrator | Sunday 22 June 2025 19:50:20 +0000 (0:00:06.735) 0:00:31.550 *********** 2025-06-22 19:52:37.540062 | orchestrator | [WARNING]: Skipped 2025-06-22 19:52:37.540072 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2025-06-22 19:52:37.540082 | orchestrator | to this access issue: 2025-06-22 19:52:37.540092 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2025-06-22 19:52:37.540102 | orchestrator | directory 2025-06-22 19:52:37.540111 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-22 19:52:37.540121 | orchestrator | 2025-06-22 19:52:37.540131 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2025-06-22 19:52:37.540140 | orchestrator | Sunday 22 June 2025 19:50:22 +0000 (0:00:01.738) 0:00:33.289 *********** 2025-06-22 19:52:37.540156 | orchestrator | [WARNING]: Skipped 2025-06-22 19:52:37.540166 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2025-06-22 19:52:37.540176 | orchestrator | to this access issue: 2025-06-22 19:52:37.540186 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2025-06-22 19:52:37.540196 | orchestrator | directory 2025-06-22 19:52:37.540205 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-22 19:52:37.540215 | orchestrator | 2025-06-22 19:52:37.540225 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2025-06-22 19:52:37.540235 | orchestrator | Sunday 22 June 2025 19:50:23 +0000 (0:00:01.185) 0:00:34.474 *********** 2025-06-22 19:52:37.540244 | orchestrator | [WARNING]: Skipped 2025-06-22 19:52:37.540254 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2025-06-22 19:52:37.540264 | orchestrator | to this access issue: 2025-06-22 19:52:37.540274 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2025-06-22 19:52:37.540311 | orchestrator | directory 2025-06-22 19:52:37.540322 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-22 19:52:37.540332 | orchestrator | 2025-06-22 19:52:37.540348 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2025-06-22 19:52:37.540359 | orchestrator | Sunday 22 June 2025 19:50:25 +0000 (0:00:01.498) 0:00:35.972 *********** 2025-06-22 19:52:37.540368 | orchestrator | [WARNING]: Skipped 2025-06-22 19:52:37.540378 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2025-06-22 19:52:37.540388 | orchestrator | to this access issue: 2025-06-22 19:52:37.540398 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2025-06-22 19:52:37.540408 | orchestrator | directory 2025-06-22 19:52:37.540418 | orchestrator | 
ok: [testbed-manager -> localhost] 2025-06-22 19:52:37.540428 | orchestrator | 2025-06-22 19:52:37.540438 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2025-06-22 19:52:37.540448 | orchestrator | Sunday 22 June 2025 19:50:26 +0000 (0:00:01.231) 0:00:37.204 *********** 2025-06-22 19:52:37.540457 | orchestrator | changed: [testbed-manager] 2025-06-22 19:52:37.540467 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:52:37.540477 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:52:37.540486 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:52:37.540496 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:52:37.540505 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:52:37.540515 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:52:37.540525 | orchestrator | 2025-06-22 19:52:37.540535 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2025-06-22 19:52:37.540544 | orchestrator | Sunday 22 June 2025 19:50:32 +0000 (0:00:05.831) 0:00:43.035 *********** 2025-06-22 19:52:37.540554 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-06-22 19:52:37.540564 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-06-22 19:52:37.540574 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-06-22 19:52:37.540584 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-06-22 19:52:37.540594 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-06-22 19:52:37.540609 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-06-22 19:52:37.540619 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-06-22 19:52:37.540628 | orchestrator | 2025-06-22 19:52:37.540638 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2025-06-22 19:52:37.540648 | orchestrator | Sunday 22 June 2025 19:50:34 +0000 (0:00:02.745) 0:00:45.780 *********** 2025-06-22 19:52:37.540664 | orchestrator | changed: [testbed-manager] 2025-06-22 19:52:37.540674 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:52:37.540684 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:52:37.540693 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:52:37.540703 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:52:37.540713 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:52:37.540722 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:52:37.540732 | orchestrator | 2025-06-22 19:52:37.540742 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2025-06-22 19:52:37.540752 | orchestrator | Sunday 22 June 2025 19:50:37 +0000 (0:00:02.342) 0:00:48.123 *********** 2025-06-22 19:52:37.540763 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-22 19:52:37.540773 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:52:37.540785 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:52:37.540813 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-22 19:52:37.540824 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:52:37.540834 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-22 19:52:37.540854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 
'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:52:37.540865 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:52:37.540875 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-22 19:52:37.540886 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:52:37.540901 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:52:37.540911 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-22 19:52:37.540922 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:52:37.540937 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:52:37.540948 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-22 19:52:37.540958 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:52:37.540969 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-22 19:52:37.540979 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:52:37.540999 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:52:37.541009 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': 
{'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:52:37.541020 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:52:37.541035 | orchestrator | 2025-06-22 19:52:37.541050 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2025-06-22 19:52:37.541060 | orchestrator | Sunday 22 June 2025 19:50:39 +0000 (0:00:02.322) 0:00:50.445 *********** 2025-06-22 19:52:37.541070 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-06-22 19:52:37.541080 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-06-22 19:52:37.541094 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-06-22 19:52:37.541104 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-06-22 19:52:37.541114 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-06-22 19:52:37.541123 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-06-22 19:52:37.541133 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-06-22 19:52:37.541143 | orchestrator | 2025-06-22 19:52:37.541152 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2025-06-22 19:52:37.541162 | orchestrator | Sunday 22 June 2025 19:50:41 +0000 (0:00:01.981) 0:00:52.427 *********** 2025-06-22 19:52:37.541172 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-06-22 19:52:37.541182 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-06-22 19:52:37.541192 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-06-22 19:52:37.541201 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-06-22 19:52:37.541211 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-06-22 19:52:37.541221 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-06-22 19:52:37.541231 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-06-22 19:52:37.541240 | orchestrator | 2025-06-22 19:52:37.541250 | orchestrator | TASK [common : Check common containers] **************************************** 2025-06-22 19:52:37.541260 | orchestrator | Sunday 22 June 2025 19:50:43 +0000 (0:00:02.223) 0:00:54.650 *********** 2025-06-22 19:52:37.541270 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-22 19:52:37.541300 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-22 19:52:37.541320 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-22 19:52:37.541340 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-22 19:52:37.541355 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-22 19:52:37.541366 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-22 19:52:37.541377 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:52:37.541387 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:52:37.541397 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-06-22 19:52:37.541414 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:52:37.541431 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:52:37.541446 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:52:37.541457 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:52:37.541467 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:52:37.541479 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:52:37.541489 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:52:37.541500 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:52:37.541524 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:52:37.541534 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 
'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:52:37.541545 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:52:37.541559 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:52:37.541569 | orchestrator | 2025-06-22 19:52:37.541579 | orchestrator | TASK [common : Creating log volume] ******************************************** 2025-06-22 19:52:37.541589 | orchestrator | Sunday 22 June 2025 19:50:46 +0000 (0:00:03.110) 0:00:57.761 *********** 2025-06-22 19:52:37.541599 | orchestrator | changed: [testbed-manager] 2025-06-22 19:52:37.541609 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:52:37.541619 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:52:37.541629 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:52:37.541639 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:52:37.541649 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:52:37.541659 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:52:37.541668 | orchestrator | 2025-06-22 19:52:37.541678 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2025-06-22 19:52:37.541688 | orchestrator | Sunday 22 June 2025 19:50:48 +0000 (0:00:01.589) 0:00:59.351 *********** 2025-06-22 19:52:37.541698 | orchestrator | changed: [testbed-manager] 2025-06-22 19:52:37.541708 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:52:37.541717 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:52:37.541727 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:52:37.541736 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:52:37.541746 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:52:37.541755 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:52:37.541765 | orchestrator | 2025-06-22 19:52:37.541775 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-22 19:52:37.541784 | orchestrator | Sunday 22 June 2025 19:50:49 +0000 (0:00:01.480) 0:01:00.832 *********** 2025-06-22 19:52:37.541794 | orchestrator | 2025-06-22 19:52:37.541804 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-22 19:52:37.541814 | orchestrator | Sunday 22 June 2025 19:50:50 +0000 (0:00:00.252) 0:01:01.085 *********** 2025-06-22 19:52:37.541829 | orchestrator | 2025-06-22 19:52:37.541839 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-22 19:52:37.541849 | orchestrator | Sunday 22 June 2025 19:50:50 +0000 (0:00:00.096) 
0:01:01.181 *********** 2025-06-22 19:52:37.541859 | orchestrator | 2025-06-22 19:52:37.541868 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-22 19:52:37.541878 | orchestrator | Sunday 22 June 2025 19:50:50 +0000 (0:00:00.125) 0:01:01.307 *********** 2025-06-22 19:52:37.541888 | orchestrator | 2025-06-22 19:52:37.541897 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-22 19:52:37.541907 | orchestrator | Sunday 22 June 2025 19:50:50 +0000 (0:00:00.078) 0:01:01.385 *********** 2025-06-22 19:52:37.541917 | orchestrator | 2025-06-22 19:52:37.541927 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-22 19:52:37.541937 | orchestrator | Sunday 22 June 2025 19:50:50 +0000 (0:00:00.072) 0:01:01.458 *********** 2025-06-22 19:52:37.541947 | orchestrator | 2025-06-22 19:52:37.541956 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-06-22 19:52:37.541966 | orchestrator | Sunday 22 June 2025 19:50:50 +0000 (0:00:00.070) 0:01:01.528 *********** 2025-06-22 19:52:37.541976 | orchestrator | 2025-06-22 19:52:37.541985 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2025-06-22 19:52:37.541995 | orchestrator | Sunday 22 June 2025 19:50:50 +0000 (0:00:00.084) 0:01:01.613 *********** 2025-06-22 19:52:37.542011 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:52:37.542056 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:52:37.542067 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:52:37.542076 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:52:37.542086 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:52:37.542096 | orchestrator | changed: [testbed-manager] 2025-06-22 19:52:37.542105 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:52:37.542115 | orchestrator | 2025-06-22 19:52:37.542125 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2025-06-22 19:52:37.542135 | orchestrator | Sunday 22 June 2025 19:51:35 +0000 (0:00:44.910) 0:01:46.524 *********** 2025-06-22 19:52:37.542144 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:52:37.542154 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:52:37.542163 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:52:37.542173 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:52:37.542182 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:52:37.542192 | orchestrator | changed: [testbed-manager] 2025-06-22 19:52:37.542202 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:52:37.542211 | orchestrator | 2025-06-22 19:52:37.542221 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2025-06-22 19:52:37.542231 | orchestrator | Sunday 22 June 2025 19:52:24 +0000 (0:00:48.401) 0:02:34.925 *********** 2025-06-22 19:52:37.542240 | orchestrator | ok: [testbed-manager] 2025-06-22 19:52:37.542250 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:52:37.542260 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:52:37.542269 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:52:37.542298 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:52:37.542316 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:52:37.542334 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:52:37.542349 | orchestrator | 2025-06-22 19:52:37.542363 | orchestrator | 
RUNNING HANDLER [common : Restart cron container] ******************************
2025-06-22 19:52:37.542373 | orchestrator | Sunday 22 June 2025 19:52:26 +0000 (0:00:02.397) 0:02:37.322 ***********
2025-06-22 19:52:37.542383 | orchestrator | changed: [testbed-node-3]
2025-06-22 19:52:37.542392 | orchestrator | changed: [testbed-node-4]
2025-06-22 19:52:37.542402 | orchestrator | changed: [testbed-node-2]
2025-06-22 19:52:37.542412 | orchestrator | changed: [testbed-node-5]
2025-06-22 19:52:37.542421 | orchestrator | changed: [testbed-node-0]
2025-06-22 19:52:37.542431 | orchestrator | changed: [testbed-node-1]
2025-06-22 19:52:37.542440 | orchestrator | changed: [testbed-manager]
2025-06-22 19:52:37.542457 | orchestrator |
2025-06-22 19:52:37.542467 | orchestrator | PLAY RECAP *********************************************************************
2025-06-22 19:52:37.542483 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-06-22 19:52:37.542494 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-06-22 19:52:37.542504 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-06-22 19:52:37.542514 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-06-22 19:52:37.542524 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-06-22 19:52:37.542534 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-06-22 19:52:37.542543 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-06-22 19:52:37.542553 | orchestrator |
2025-06-22 19:52:37.542563 | orchestrator |
2025-06-22 19:52:37.542573 | orchestrator | TASKS RECAP ********************************************************************
2025-06-22 19:52:37.542583 | orchestrator | Sunday 22 June 2025 19:52:36 +0000 (0:00:10.176) 0:02:47.499 ***********
2025-06-22 19:52:37.542593 | orchestrator | ===============================================================================
2025-06-22 19:52:37.542602 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 48.40s
2025-06-22 19:52:37.542612 | orchestrator | common : Restart fluentd container ------------------------------------- 44.91s
2025-06-22 19:52:37.542622 | orchestrator | common : Restart cron container ---------------------------------------- 10.18s
2025-06-22 19:52:37.542631 | orchestrator | common : Copying over config.json files for services -------------------- 6.74s
2025-06-22 19:52:37.542641 | orchestrator | common : Ensuring config directories exist ------------------------------ 6.55s
2025-06-22 19:52:37.542651 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 6.51s
2025-06-22 19:52:37.542660 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 5.83s
2025-06-22 19:52:37.542670 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.60s
2025-06-22 19:52:37.542679 | orchestrator | common : Check common containers ---------------------------------------- 3.11s
2025-06-22 19:52:37.542689 | orchestrator | common : Copying over cron logrotate config file ------------------------ 2.75s
2025-06-22 19:52:37.542699 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.40s
2025-06-22 19:52:37.542708 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.34s
2025-06-22 19:52:37.542718 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.32s
2025-06-22 19:52:37.542727 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.22s
2025-06-22 19:52:37.542745 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 1.98s
2025-06-22 19:52:37.542755 | orchestrator | common : include_tasks -------------------------------------------------- 1.77s
2025-06-22 19:52:37.542765 | orchestrator | common : Find custom fluentd input config files ------------------------- 1.74s
2025-06-22 19:52:37.542775 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.63s
2025-06-22 19:52:37.542784 | orchestrator | common : Creating log volume -------------------------------------------- 1.59s
2025-06-22 19:52:37.542794 | orchestrator | common : include_tasks -------------------------------------------------- 1.58s
2025-06-22 19:52:37.542810 | orchestrator | 2025-06-22 19:52:37 | INFO  | Task 259e6f48-b1f2-4431-8a4b-eb1e83e84d86 is in state SUCCESS
2025-06-22 19:52:37.544844 | orchestrator | 2025-06-22 19:52:37 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED
2025-06-22 19:52:37.544948 | orchestrator | 2025-06-22 19:52:37 | INFO  | Wait 1 second(s) until the next check
2025-06-22 19:52:40.595712 | orchestrator | 2025-06-22 19:52:40 | INFO  | Task 9eb894a1-c29d-4188-926d-efda297da643 is in state STARTED
2025-06-22 19:52:40.596169 | orchestrator | 2025-06-22 19:52:40 | INFO  | Task 953ed4af-13cd-4a7b-9ec4-5d052016a24a is in state STARTED
2025-06-22 19:52:40.597330 | orchestrator | 2025-06-22 19:52:40 | INFO  | Task 6cf8e00e-92c6-4a54-a626-fb6881f79e1f is in state STARTED
2025-06-22 19:52:40.598311 | orchestrator | 2025-06-22 19:52:40 | INFO  | Task 3adbb905-e6c0-42ff-b3cc-589abba5077f is in state STARTED
2025-06-22 19:52:40.599387 | orchestrator | 2025-06-22 19:52:40 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED
2025-06-22 19:52:40.601050 | orchestrator | 2025-06-22 19:52:40 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED
2025-06-22 19:52:40.602627 | orchestrator | 2025-06-22 19:52:40 | INFO  | Wait 1 second(s) until the next check
2025-06-22 19:52:43.639963 | orchestrator | 2025-06-22 19:52:43 | INFO  | Task 9eb894a1-c29d-4188-926d-efda297da643 is in state STARTED
2025-06-22 19:52:43.640431 | orchestrator | 2025-06-22 19:52:43 | INFO  | Task 953ed4af-13cd-4a7b-9ec4-5d052016a24a is in state STARTED
2025-06-22 19:52:43.643686 | orchestrator | 2025-06-22 19:52:43 | INFO  | Task 6cf8e00e-92c6-4a54-a626-fb6881f79e1f is in state STARTED
2025-06-22 19:52:43.647144 | orchestrator | 2025-06-22 19:52:43 | INFO  | Task 3adbb905-e6c0-42ff-b3cc-589abba5077f is in state STARTED
2025-06-22 19:52:43.647200 | orchestrator | 2025-06-22 19:52:43 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED
2025-06-22 19:52:43.651321 | orchestrator | 2025-06-22 19:52:43 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED
2025-06-22 19:52:43.651383 | orchestrator | 2025-06-22 19:52:43 | INFO  | Wait 1 second(s) until the next check
2025-06-22 19:52:46.706573 | orchestrator | 2025-06-22 19:52:46 | INFO  | Task 9eb894a1-c29d-4188-926d-efda297da643 is in
state STARTED 2025-06-22 19:52:46.708991 | orchestrator | 2025-06-22 19:52:46 | INFO  | Task 953ed4af-13cd-4a7b-9ec4-5d052016a24a is in state STARTED 2025-06-22 19:52:46.709694 | orchestrator | 2025-06-22 19:52:46 | INFO  | Task 6cf8e00e-92c6-4a54-a626-fb6881f79e1f is in state STARTED 2025-06-22 19:52:46.712604 | orchestrator | 2025-06-22 19:52:46 | INFO  | Task 3adbb905-e6c0-42ff-b3cc-589abba5077f is in state STARTED 2025-06-22 19:52:46.713875 | orchestrator | 2025-06-22 19:52:46 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:52:46.714869 | orchestrator | 2025-06-22 19:52:46 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:52:46.715343 | orchestrator | 2025-06-22 19:52:46 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:52:49.771262 | orchestrator | 2025-06-22 19:52:49 | INFO  | Task 9eb894a1-c29d-4188-926d-efda297da643 is in state STARTED 2025-06-22 19:52:49.771454 | orchestrator | 2025-06-22 19:52:49 | INFO  | Task 953ed4af-13cd-4a7b-9ec4-5d052016a24a is in state STARTED 2025-06-22 19:52:49.771473 | orchestrator | 2025-06-22 19:52:49 | INFO  | Task 6cf8e00e-92c6-4a54-a626-fb6881f79e1f is in state STARTED 2025-06-22 19:52:49.771586 | orchestrator | 2025-06-22 19:52:49 | INFO  | Task 3adbb905-e6c0-42ff-b3cc-589abba5077f is in state STARTED 2025-06-22 19:52:49.773587 | orchestrator | 2025-06-22 19:52:49 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:52:49.779863 | orchestrator | 2025-06-22 19:52:49 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:52:49.779934 | orchestrator | 2025-06-22 19:52:49 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:52:52.820834 | orchestrator | 2025-06-22 19:52:52 | INFO  | Task 9eb894a1-c29d-4188-926d-efda297da643 is in state STARTED 2025-06-22 19:52:52.820941 | orchestrator | 2025-06-22 19:52:52 | INFO  | Task 953ed4af-13cd-4a7b-9ec4-5d052016a24a is in state STARTED 2025-06-22 19:52:52.821391 | orchestrator | 2025-06-22 19:52:52 | INFO  | Task 6cf8e00e-92c6-4a54-a626-fb6881f79e1f is in state STARTED 2025-06-22 19:52:52.824283 | orchestrator | 2025-06-22 19:52:52 | INFO  | Task 3adbb905-e6c0-42ff-b3cc-589abba5077f is in state STARTED 2025-06-22 19:52:52.825057 | orchestrator | 2025-06-22 19:52:52 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:52:52.829144 | orchestrator | 2025-06-22 19:52:52 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:52:52.829200 | orchestrator | 2025-06-22 19:52:52 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:52:55.894800 | orchestrator | 2025-06-22 19:52:55 | INFO  | Task 9eb894a1-c29d-4188-926d-efda297da643 is in state STARTED 2025-06-22 19:52:55.898735 | orchestrator | 2025-06-22 19:52:55 | INFO  | Task 953ed4af-13cd-4a7b-9ec4-5d052016a24a is in state STARTED 2025-06-22 19:52:55.900124 | orchestrator | 2025-06-22 19:52:55 | INFO  | Task 6cf8e00e-92c6-4a54-a626-fb6881f79e1f is in state STARTED 2025-06-22 19:52:55.900446 | orchestrator | 2025-06-22 19:52:55 | INFO  | Task 3adbb905-e6c0-42ff-b3cc-589abba5077f is in state STARTED 2025-06-22 19:52:55.901521 | orchestrator | 2025-06-22 19:52:55 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:52:55.902349 | orchestrator | 2025-06-22 19:52:55 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:52:55.902400 | orchestrator | 2025-06-22 19:52:55 | INFO  | 
Wait 1 second(s) until the next check 2025-06-22 19:52:58.969365 | orchestrator | 2025-06-22 19:52:58 | INFO  | Task 9eb894a1-c29d-4188-926d-efda297da643 is in state STARTED 2025-06-22 19:52:58.970340 | orchestrator | 2025-06-22 19:52:58 | INFO  | Task 953ed4af-13cd-4a7b-9ec4-5d052016a24a is in state STARTED 2025-06-22 19:52:58.971793 | orchestrator | 2025-06-22 19:52:58 | INFO  | Task 6cf8e00e-92c6-4a54-a626-fb6881f79e1f is in state STARTED 2025-06-22 19:52:58.972843 | orchestrator | 2025-06-22 19:52:58 | INFO  | Task 3adbb905-e6c0-42ff-b3cc-589abba5077f is in state STARTED 2025-06-22 19:52:58.975199 | orchestrator | 2025-06-22 19:52:58 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:52:58.977327 | orchestrator | 2025-06-22 19:52:58 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:52:58.977566 | orchestrator | 2025-06-22 19:52:58 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:53:02.022141 | orchestrator | 2025-06-22 19:53:02 | INFO  | Task 9eb894a1-c29d-4188-926d-efda297da643 is in state STARTED 2025-06-22 19:53:02.022243 | orchestrator | 2025-06-22 19:53:02 | INFO  | Task 953ed4af-13cd-4a7b-9ec4-5d052016a24a is in state STARTED 2025-06-22 19:53:02.023086 | orchestrator | 2025-06-22 19:53:02 | INFO  | Task 6cf8e00e-92c6-4a54-a626-fb6881f79e1f is in state STARTED 2025-06-22 19:53:02.023704 | orchestrator | 2025-06-22 19:53:02 | INFO  | Task 3adbb905-e6c0-42ff-b3cc-589abba5077f is in state STARTED 2025-06-22 19:53:02.024359 | orchestrator | 2025-06-22 19:53:02 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:53:02.025371 | orchestrator | 2025-06-22 19:53:02 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:53:02.025395 | orchestrator | 2025-06-22 19:53:02 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:53:05.057997 | orchestrator | 2025-06-22 19:53:05 | INFO  | Task 9eb894a1-c29d-4188-926d-efda297da643 is in state SUCCESS 2025-06-22 19:53:05.058166 | orchestrator | 2025-06-22 19:53:05 | INFO  | Task 953ed4af-13cd-4a7b-9ec4-5d052016a24a is in state STARTED 2025-06-22 19:53:05.058493 | orchestrator | 2025-06-22 19:53:05 | INFO  | Task 6cf8e00e-92c6-4a54-a626-fb6881f79e1f is in state STARTED 2025-06-22 19:53:05.059469 | orchestrator | 2025-06-22 19:53:05 | INFO  | Task 3adbb905-e6c0-42ff-b3cc-589abba5077f is in state STARTED 2025-06-22 19:53:05.060254 | orchestrator | 2025-06-22 19:53:05 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:53:05.060981 | orchestrator | 2025-06-22 19:53:05 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:53:05.061033 | orchestrator | 2025-06-22 19:53:05 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:53:08.095512 | orchestrator | 2025-06-22 19:53:08 | INFO  | Task 953ed4af-13cd-4a7b-9ec4-5d052016a24a is in state STARTED 2025-06-22 19:53:08.097400 | orchestrator | 2025-06-22 19:53:08 | INFO  | Task 6f35f6e4-f8ef-4d0f-a955-ab2373ef9655 is in state STARTED 2025-06-22 19:53:08.099982 | orchestrator | 2025-06-22 19:53:08 | INFO  | Task 6cf8e00e-92c6-4a54-a626-fb6881f79e1f is in state STARTED 2025-06-22 19:53:08.100727 | orchestrator | 2025-06-22 19:53:08 | INFO  | Task 3adbb905-e6c0-42ff-b3cc-589abba5077f is in state STARTED 2025-06-22 19:53:08.101508 | orchestrator | 2025-06-22 19:53:08 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:53:08.104439 | orchestrator | 
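The memcached and redis plays that follow repeat the usual kolla-ansible deploy flow for a service: ensure /etc/kolla/<service>/ exists, render config.json (plus any service config), compare the running container against the desired definition, and restart it via a handler only when something changed. A rough sketch of that flow under the same assumptions as above (docker CLI calls stand in for the real container module; names are illustrative):

    - hosts: memcached
      tasks:
        - name: Ensuring config directories exist
          ansible.builtin.file:
            path: /etc/kolla/memcached
            state: directory
            mode: "0770"

        - name: Copying over config.json files for services
          ansible.builtin.template:
            src: memcached.json.j2
            dest: /etc/kolla/memcached/config.json
            mode: "0660"
          notify: Restart memcached container

        - name: Check memcached container
          # the real role compares the desired container definition with the
          # running container and notifies the restart handler when they differ;
          # this stub only marks where that comparison happens
          ansible.builtin.command: docker inspect memcached
          changed_when: false

      handlers:
        - name: Restart memcached container
          ansible.builtin.command: docker restart memcached   # stand-in for the real container module
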
2025-06-22 19:53:08 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:53:08.104484 | orchestrator | 2025-06-22 19:53:08 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:53:11.142897 | orchestrator | 2025-06-22 19:53:11 | INFO  | Task 953ed4af-13cd-4a7b-9ec4-5d052016a24a is in state STARTED 2025-06-22 19:53:11.144066 | orchestrator | 2025-06-22 19:53:11 | INFO  | Task 6f35f6e4-f8ef-4d0f-a955-ab2373ef9655 is in state STARTED 2025-06-22 19:53:11.146200 | orchestrator | 2025-06-22 19:53:11 | INFO  | Task 6cf8e00e-92c6-4a54-a626-fb6881f79e1f is in state STARTED 2025-06-22 19:53:11.149038 | orchestrator | 2025-06-22 19:53:11 | INFO  | Task 3adbb905-e6c0-42ff-b3cc-589abba5077f is in state STARTED 2025-06-22 19:53:11.151782 | orchestrator | 2025-06-22 19:53:11 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:53:11.153411 | orchestrator | 2025-06-22 19:53:11 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:53:11.153483 | orchestrator | 2025-06-22 19:53:11 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:53:14.189080 | orchestrator | 2025-06-22 19:53:14.189176 | orchestrator | 2025-06-22 19:53:14.189212 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-22 19:53:14.189226 | orchestrator | 2025-06-22 19:53:14.189239 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-22 19:53:14.189250 | orchestrator | Sunday 22 June 2025 19:52:45 +0000 (0:00:00.513) 0:00:00.513 *********** 2025-06-22 19:53:14.189261 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:53:14.189273 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:53:14.189285 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:53:14.189392 | orchestrator | 2025-06-22 19:53:14.189411 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-22 19:53:14.189450 | orchestrator | Sunday 22 June 2025 19:52:46 +0000 (0:00:00.933) 0:00:01.447 *********** 2025-06-22 19:53:14.189464 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2025-06-22 19:53:14.189476 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2025-06-22 19:53:14.189486 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2025-06-22 19:53:14.189498 | orchestrator | 2025-06-22 19:53:14.189509 | orchestrator | PLAY [Apply role memcached] **************************************************** 2025-06-22 19:53:14.189521 | orchestrator | 2025-06-22 19:53:14.189533 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2025-06-22 19:53:14.189545 | orchestrator | Sunday 22 June 2025 19:52:47 +0000 (0:00:01.206) 0:00:02.653 *********** 2025-06-22 19:53:14.189558 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:53:14.189571 | orchestrator | 2025-06-22 19:53:14.189584 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2025-06-22 19:53:14.189592 | orchestrator | Sunday 22 June 2025 19:52:48 +0000 (0:00:01.511) 0:00:04.165 *********** 2025-06-22 19:53:14.189600 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-06-22 19:53:14.189607 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-06-22 19:53:14.189614 | orchestrator | changed: [testbed-node-1] => 
(item=memcached) 2025-06-22 19:53:14.189622 | orchestrator | 2025-06-22 19:53:14.189630 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2025-06-22 19:53:14.189638 | orchestrator | Sunday 22 June 2025 19:52:50 +0000 (0:00:01.470) 0:00:05.636 *********** 2025-06-22 19:53:14.189646 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-06-22 19:53:14.189654 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-06-22 19:53:14.189663 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-06-22 19:53:14.189671 | orchestrator | 2025-06-22 19:53:14.189679 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2025-06-22 19:53:14.189687 | orchestrator | Sunday 22 June 2025 19:52:54 +0000 (0:00:03.719) 0:00:09.355 *********** 2025-06-22 19:53:14.189695 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:53:14.189703 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:53:14.189711 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:53:14.189719 | orchestrator | 2025-06-22 19:53:14.189728 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2025-06-22 19:53:14.189736 | orchestrator | Sunday 22 June 2025 19:52:57 +0000 (0:00:03.261) 0:00:12.617 *********** 2025-06-22 19:53:14.189744 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:53:14.189752 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:53:14.189760 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:53:14.189768 | orchestrator | 2025-06-22 19:53:14.189776 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:53:14.189785 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:53:14.189796 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:53:14.189804 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:53:14.189812 | orchestrator | 2025-06-22 19:53:14.189820 | orchestrator | 2025-06-22 19:53:14.189828 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:53:14.189836 | orchestrator | Sunday 22 June 2025 19:53:04 +0000 (0:00:06.855) 0:00:19.473 *********** 2025-06-22 19:53:14.189844 | orchestrator | =============================================================================== 2025-06-22 19:53:14.189852 | orchestrator | memcached : Restart memcached container --------------------------------- 6.86s 2025-06-22 19:53:14.189867 | orchestrator | memcached : Copying over config.json files for services ----------------- 3.72s 2025-06-22 19:53:14.189875 | orchestrator | memcached : Check memcached container ----------------------------------- 3.26s 2025-06-22 19:53:14.189883 | orchestrator | memcached : include_tasks ----------------------------------------------- 1.51s 2025-06-22 19:53:14.189891 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.47s 2025-06-22 19:53:14.189899 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.21s 2025-06-22 19:53:14.189907 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.93s 2025-06-22 19:53:14.189915 | orchestrator | 2025-06-22 19:53:14.189923 | orchestrator | 2025-06-22 
19:53:14.189930 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-22 19:53:14.189938 | orchestrator | 2025-06-22 19:53:14.189947 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-22 19:53:14.189955 | orchestrator | Sunday 22 June 2025 19:52:46 +0000 (0:00:00.816) 0:00:00.816 *********** 2025-06-22 19:53:14.189963 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:53:14.189971 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:53:14.189979 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:53:14.189987 | orchestrator | 2025-06-22 19:53:14.189995 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-22 19:53:14.190096 | orchestrator | Sunday 22 June 2025 19:52:46 +0000 (0:00:00.925) 0:00:01.742 *********** 2025-06-22 19:53:14.190117 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2025-06-22 19:53:14.190130 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2025-06-22 19:53:14.190143 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2025-06-22 19:53:14.190154 | orchestrator | 2025-06-22 19:53:14.190166 | orchestrator | PLAY [Apply role redis] ******************************************************** 2025-06-22 19:53:14.190173 | orchestrator | 2025-06-22 19:53:14.190181 | orchestrator | TASK [redis : include_tasks] *************************************************** 2025-06-22 19:53:14.190188 | orchestrator | Sunday 22 June 2025 19:52:48 +0000 (0:00:01.426) 0:00:03.169 *********** 2025-06-22 19:53:14.190195 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:53:14.190202 | orchestrator | 2025-06-22 19:53:14.190210 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2025-06-22 19:53:14.190217 | orchestrator | Sunday 22 June 2025 19:52:49 +0000 (0:00:01.555) 0:00:04.724 *********** 2025-06-22 19:53:14.190227 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-22 19:53:14.190239 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-22 19:53:14.190247 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-22 19:53:14.190264 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-22 19:53:14.190272 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-22 19:53:14.190321 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-22 19:53:14.190333 | orchestrator | 2025-06-22 19:53:14.190341 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2025-06-22 19:53:14.190349 | orchestrator | Sunday 22 June 2025 19:52:52 +0000 (0:00:02.096) 0:00:06.821 *********** 2025-06-22 19:53:14.190357 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-22 19:53:14.190365 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 
'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-22 19:53:14.190372 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-22 19:53:14.190387 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-22 19:53:14.190395 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-22 19:53:14.190414 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-22 19:53:14.190422 | orchestrator | 2025-06-22 19:53:14.190430 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2025-06-22 19:53:14.190437 | orchestrator | Sunday 22 June 2025 19:52:56 +0000 (0:00:04.042) 
0:00:10.863 *********** 2025-06-22 19:53:14.190459 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-22 19:53:14.190467 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-22 19:53:14.190475 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-22 19:53:14.190488 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-22 19:53:14.190496 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-22 19:53:14.190514 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': 
'/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-22 19:53:14.190521 | orchestrator | 2025-06-22 19:53:14.190529 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2025-06-22 19:53:14.190537 | orchestrator | Sunday 22 June 2025 19:52:59 +0000 (0:00:03.607) 0:00:14.471 *********** 2025-06-22 19:53:14.190544 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-22 19:53:14.190552 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-22 19:53:14.190577 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250530', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-06-22 19:53:14.190589 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-22 19:53:14.190601 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 
'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-22 19:53:14.190626 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-06-22 19:53:14.190639 | orchestrator | 2025-06-22 19:53:14.190650 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-06-22 19:53:14.190662 | orchestrator | Sunday 22 June 2025 19:53:01 +0000 (0:00:02.134) 0:00:16.605 *********** 2025-06-22 19:53:14.190674 | orchestrator | 2025-06-22 19:53:14.190687 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-06-22 19:53:14.190699 | orchestrator | Sunday 22 June 2025 19:53:01 +0000 (0:00:00.064) 0:00:16.670 *********** 2025-06-22 19:53:14.190713 | orchestrator | 2025-06-22 19:53:14.190721 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-06-22 19:53:14.190728 | orchestrator | Sunday 22 June 2025 19:53:01 +0000 (0:00:00.061) 0:00:16.731 *********** 2025-06-22 19:53:14.190735 | orchestrator | 2025-06-22 19:53:14.190742 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2025-06-22 19:53:14.190749 | orchestrator | Sunday 22 June 2025 19:53:02 +0000 (0:00:00.062) 0:00:16.794 *********** 2025-06-22 19:53:14.190757 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:53:14.190764 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:53:14.190771 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:53:14.190785 | orchestrator | 2025-06-22 19:53:14.190792 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2025-06-22 19:53:14.190799 | orchestrator | Sunday 22 June 2025 19:53:07 +0000 (0:00:05.963) 0:00:22.757 *********** 2025-06-22 19:53:14.190807 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:53:14.190814 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:53:14.190821 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:53:14.190828 | orchestrator | 2025-06-22 19:53:14.190835 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:53:14.190842 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:53:14.190850 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 
19:53:14.190857 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:53:14.190864 | orchestrator | 2025-06-22 19:53:14.190872 | orchestrator | 2025-06-22 19:53:14.190879 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:53:14.190886 | orchestrator | Sunday 22 June 2025 19:53:13 +0000 (0:00:05.166) 0:00:27.924 *********** 2025-06-22 19:53:14.190893 | orchestrator | =============================================================================== 2025-06-22 19:53:14.190900 | orchestrator | redis : Restart redis container ----------------------------------------- 5.96s 2025-06-22 19:53:14.190907 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 5.17s 2025-06-22 19:53:14.190915 | orchestrator | redis : Copying over default config.json files -------------------------- 4.04s 2025-06-22 19:53:14.190922 | orchestrator | redis : Copying over redis config files --------------------------------- 3.61s 2025-06-22 19:53:14.190929 | orchestrator | redis : Check redis containers ------------------------------------------ 2.13s 2025-06-22 19:53:14.190936 | orchestrator | redis : Ensuring config directories exist ------------------------------- 2.10s 2025-06-22 19:53:14.190943 | orchestrator | redis : include_tasks --------------------------------------------------- 1.56s 2025-06-22 19:53:14.190950 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.43s 2025-06-22 19:53:14.190957 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.93s 2025-06-22 19:53:14.190964 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.19s 2025-06-22 19:53:14.190971 | orchestrator | 2025-06-22 19:53:14 | INFO  | Task 953ed4af-13cd-4a7b-9ec4-5d052016a24a is in state SUCCESS 2025-06-22 19:53:14.190979 | orchestrator | 2025-06-22 19:53:14 | INFO  | Task 6f35f6e4-f8ef-4d0f-a955-ab2373ef9655 is in state STARTED 2025-06-22 19:53:14.190986 | orchestrator | 2025-06-22 19:53:14 | INFO  | Task 6cf8e00e-92c6-4a54-a626-fb6881f79e1f is in state STARTED 2025-06-22 19:53:14.191006 | orchestrator | 2025-06-22 19:53:14 | INFO  | Task 3adbb905-e6c0-42ff-b3cc-589abba5077f is in state STARTED 2025-06-22 19:53:14.191018 | orchestrator | 2025-06-22 19:53:14 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:53:14.192138 | orchestrator | 2025-06-22 19:53:14 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:53:14.192404 | orchestrator | 2025-06-22 19:53:14 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:53:17.245185 | orchestrator | 2025-06-22 19:53:17 | INFO  | Task 6f35f6e4-f8ef-4d0f-a955-ab2373ef9655 is in state STARTED 2025-06-22 19:53:17.249656 | orchestrator | 2025-06-22 19:53:17 | INFO  | Task 6cf8e00e-92c6-4a54-a626-fb6881f79e1f is in state STARTED 2025-06-22 19:53:17.250905 | orchestrator | 2025-06-22 19:53:17 | INFO  | Task 3adbb905-e6c0-42ff-b3cc-589abba5077f is in state STARTED 2025-06-22 19:53:17.253047 | orchestrator | 2025-06-22 19:53:17 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:53:17.254914 | orchestrator | 2025-06-22 19:53:17 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:53:17.254957 | orchestrator | 2025-06-22 19:53:17 | INFO  | Wait 1 second(s) until the next check 2025-06-22 
19:53:20.301794 | orchestrator | 2025-06-22 19:53:20 | INFO  | Task 6f35f6e4-f8ef-4d0f-a955-ab2373ef9655 is in state STARTED 2025-06-22 19:53:20.303966 | orchestrator | 2025-06-22 19:53:20 | INFO  | Task 6cf8e00e-92c6-4a54-a626-fb6881f79e1f is in state STARTED 2025-06-22 19:53:20.306900 | orchestrator | 2025-06-22 19:53:20 | INFO  | Task 3adbb905-e6c0-42ff-b3cc-589abba5077f is in state STARTED 2025-06-22 19:53:20.308507 | orchestrator | 2025-06-22 19:53:20 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:53:20.313001 | orchestrator | 2025-06-22 19:53:20 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:53:20.313056 | orchestrator | 2025-06-22 19:53:20 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:53:23.352785 | orchestrator | 2025-06-22 19:53:23 | INFO  | Task 6f35f6e4-f8ef-4d0f-a955-ab2373ef9655 is in state STARTED 2025-06-22 19:53:23.354823 | orchestrator | 2025-06-22 19:53:23 | INFO  | Task 6cf8e00e-92c6-4a54-a626-fb6881f79e1f is in state STARTED 2025-06-22 19:53:23.357291 | orchestrator | 2025-06-22 19:53:23 | INFO  | Task 3adbb905-e6c0-42ff-b3cc-589abba5077f is in state STARTED 2025-06-22 19:53:23.359635 | orchestrator | 2025-06-22 19:53:23 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:53:23.361487 | orchestrator | 2025-06-22 19:53:23 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:53:23.361543 | orchestrator | 2025-06-22 19:53:23 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:53:26.402911 | orchestrator | 2025-06-22 19:53:26 | INFO  | Task 6f35f6e4-f8ef-4d0f-a955-ab2373ef9655 is in state STARTED 2025-06-22 19:53:26.403128 | orchestrator | 2025-06-22 19:53:26 | INFO  | Task 6cf8e00e-92c6-4a54-a626-fb6881f79e1f is in state STARTED 2025-06-22 19:53:26.407758 | orchestrator | 2025-06-22 19:53:26 | INFO  | Task 3adbb905-e6c0-42ff-b3cc-589abba5077f is in state STARTED 2025-06-22 19:53:26.407788 | orchestrator | 2025-06-22 19:53:26 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:53:26.408447 | orchestrator | 2025-06-22 19:53:26 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:53:26.408472 | orchestrator | 2025-06-22 19:53:26 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:53:29.452084 | orchestrator | 2025-06-22 19:53:29 | INFO  | Task 6f35f6e4-f8ef-4d0f-a955-ab2373ef9655 is in state STARTED 2025-06-22 19:53:29.452395 | orchestrator | 2025-06-22 19:53:29 | INFO  | Task 6cf8e00e-92c6-4a54-a626-fb6881f79e1f is in state STARTED 2025-06-22 19:53:29.453026 | orchestrator | 2025-06-22 19:53:29 | INFO  | Task 3adbb905-e6c0-42ff-b3cc-589abba5077f is in state STARTED 2025-06-22 19:53:29.460350 | orchestrator | 2025-06-22 19:53:29 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:53:29.460932 | orchestrator | 2025-06-22 19:53:29 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:53:29.460964 | orchestrator | 2025-06-22 19:53:29 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:53:32.489624 | orchestrator | 2025-06-22 19:53:32 | INFO  | Task 6f35f6e4-f8ef-4d0f-a955-ab2373ef9655 is in state STARTED 2025-06-22 19:53:32.491519 | orchestrator | 2025-06-22 19:53:32 | INFO  | Task 6cf8e00e-92c6-4a54-a626-fb6881f79e1f is in state STARTED 2025-06-22 19:53:32.491584 | orchestrator | 2025-06-22 19:53:32 | INFO  | Task 3adbb905-e6c0-42ff-b3cc-589abba5077f is in 
state STARTED 2025-06-22 19:53:32.491597 | orchestrator | 2025-06-22 19:53:32 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:53:32.492096 | orchestrator | 2025-06-22 19:53:32 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:53:32.492119 | orchestrator | 2025-06-22 19:53:32 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:53:35.538924 | orchestrator | 2025-06-22 19:53:35 | INFO  | Task 6f35f6e4-f8ef-4d0f-a955-ab2373ef9655 is in state STARTED 2025-06-22 19:53:35.540467 | orchestrator | 2025-06-22 19:53:35 | INFO  | Task 6cf8e00e-92c6-4a54-a626-fb6881f79e1f is in state STARTED 2025-06-22 19:53:35.542809 | orchestrator | 2025-06-22 19:53:35 | INFO  | Task 3adbb905-e6c0-42ff-b3cc-589abba5077f is in state STARTED 2025-06-22 19:53:35.546635 | orchestrator | 2025-06-22 19:53:35 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:53:35.547647 | orchestrator | 2025-06-22 19:53:35 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:53:35.547927 | orchestrator | 2025-06-22 19:53:35 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:53:38.581371 | orchestrator | 2025-06-22 19:53:38 | INFO  | Task 6f35f6e4-f8ef-4d0f-a955-ab2373ef9655 is in state STARTED 2025-06-22 19:53:38.581463 | orchestrator | 2025-06-22 19:53:38 | INFO  | Task 6cf8e00e-92c6-4a54-a626-fb6881f79e1f is in state STARTED 2025-06-22 19:53:38.582524 | orchestrator | 2025-06-22 19:53:38 | INFO  | Task 3adbb905-e6c0-42ff-b3cc-589abba5077f is in state STARTED 2025-06-22 19:53:38.582552 | orchestrator | 2025-06-22 19:53:38 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:53:38.583380 | orchestrator | 2025-06-22 19:53:38 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:53:38.586578 | orchestrator | 2025-06-22 19:53:38 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:53:41.632822 | orchestrator | 2025-06-22 19:53:41 | INFO  | Task 6f35f6e4-f8ef-4d0f-a955-ab2373ef9655 is in state STARTED 2025-06-22 19:53:41.633633 | orchestrator | 2025-06-22 19:53:41 | INFO  | Task 6cf8e00e-92c6-4a54-a626-fb6881f79e1f is in state STARTED 2025-06-22 19:53:41.634971 | orchestrator | 2025-06-22 19:53:41 | INFO  | Task 3adbb905-e6c0-42ff-b3cc-589abba5077f is in state STARTED 2025-06-22 19:53:41.636225 | orchestrator | 2025-06-22 19:53:41 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:53:41.637660 | orchestrator | 2025-06-22 19:53:41 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:53:41.637987 | orchestrator | 2025-06-22 19:53:41 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:53:44.666688 | orchestrator | 2025-06-22 19:53:44 | INFO  | Task 6f35f6e4-f8ef-4d0f-a955-ab2373ef9655 is in state STARTED 2025-06-22 19:53:44.667069 | orchestrator | 2025-06-22 19:53:44 | INFO  | Task 6cf8e00e-92c6-4a54-a626-fb6881f79e1f is in state STARTED 2025-06-22 19:53:44.668404 | orchestrator | 2025-06-22 19:53:44 | INFO  | Task 3adbb905-e6c0-42ff-b3cc-589abba5077f is in state STARTED 2025-06-22 19:53:44.670269 | orchestrator | 2025-06-22 19:53:44 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:53:44.671531 | orchestrator | 2025-06-22 19:53:44 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:53:44.671555 | orchestrator | 2025-06-22 19:53:44 | INFO  | Wait 1 second(s) until the 
next check 2025-06-22 19:53:47.711223 | orchestrator | 2025-06-22 19:53:47 | INFO  | Task 6f35f6e4-f8ef-4d0f-a955-ab2373ef9655 is in state STARTED 2025-06-22 19:53:47.713923 | orchestrator | 2025-06-22 19:53:47 | INFO  | Task 6cf8e00e-92c6-4a54-a626-fb6881f79e1f is in state STARTED 2025-06-22 19:53:47.713971 | orchestrator | 2025-06-22 19:53:47 | INFO  | Task 3adbb905-e6c0-42ff-b3cc-589abba5077f is in state STARTED 2025-06-22 19:53:47.713984 | orchestrator | 2025-06-22 19:53:47 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:53:47.718460 | orchestrator | 2025-06-22 19:53:47 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:53:47.718498 | orchestrator | 2025-06-22 19:53:47 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:53:50.756737 | orchestrator | 2025-06-22 19:53:50 | INFO  | Task 6f35f6e4-f8ef-4d0f-a955-ab2373ef9655 is in state STARTED 2025-06-22 19:53:50.759529 | orchestrator | 2025-06-22 19:53:50 | INFO  | Task 6cf8e00e-92c6-4a54-a626-fb6881f79e1f is in state STARTED 2025-06-22 19:53:50.762246 | orchestrator | 2025-06-22 19:53:50 | INFO  | Task 3adbb905-e6c0-42ff-b3cc-589abba5077f is in state STARTED 2025-06-22 19:53:50.764514 | orchestrator | 2025-06-22 19:53:50 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:53:50.766642 | orchestrator | 2025-06-22 19:53:50 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:53:50.767021 | orchestrator | 2025-06-22 19:53:50 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:53:53.802088 | orchestrator | 2025-06-22 19:53:53 | INFO  | Task 6f35f6e4-f8ef-4d0f-a955-ab2373ef9655 is in state STARTED 2025-06-22 19:53:53.805550 | orchestrator | 2025-06-22 19:53:53 | INFO  | Task 6cf8e00e-92c6-4a54-a626-fb6881f79e1f is in state STARTED 2025-06-22 19:53:53.806225 | orchestrator | 2025-06-22 19:53:53 | INFO  | Task 3adbb905-e6c0-42ff-b3cc-589abba5077f is in state STARTED 2025-06-22 19:53:53.807001 | orchestrator | 2025-06-22 19:53:53 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:53:53.811201 | orchestrator | 2025-06-22 19:53:53 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:53:53.811226 | orchestrator | 2025-06-22 19:53:53 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:53:56.845148 | orchestrator | 2025-06-22 19:53:56 | INFO  | Task 6f35f6e4-f8ef-4d0f-a955-ab2373ef9655 is in state STARTED 2025-06-22 19:53:56.847160 | orchestrator | 2025-06-22 19:53:56 | INFO  | Task 6cf8e00e-92c6-4a54-a626-fb6881f79e1f is in state STARTED 2025-06-22 19:53:56.849178 | orchestrator | 2025-06-22 19:53:56 | INFO  | Task 3adbb905-e6c0-42ff-b3cc-589abba5077f is in state STARTED 2025-06-22 19:53:56.853370 | orchestrator | 2025-06-22 19:53:56 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:53:56.856482 | orchestrator | 2025-06-22 19:53:56 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:53:56.856547 | orchestrator | 2025-06-22 19:53:56 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:53:59.883293 | orchestrator | 2025-06-22 19:53:59 | INFO  | Task 6f35f6e4-f8ef-4d0f-a955-ab2373ef9655 is in state STARTED 2025-06-22 19:53:59.884684 | orchestrator | 2025-06-22 19:53:59 | INFO  | Task 6cf8e00e-92c6-4a54-a626-fb6881f79e1f is in state STARTED 2025-06-22 19:53:59.886845 | orchestrator | 2025-06-22 19:53:59 | INFO  | Task 
3adbb905-e6c0-42ff-b3cc-589abba5077f is in state STARTED 2025-06-22 19:53:59.888588 | orchestrator | 2025-06-22 19:53:59 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:53:59.891429 | orchestrator | 2025-06-22 19:53:59 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:53:59.891462 | orchestrator | 2025-06-22 19:53:59 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:54:02.918802 | orchestrator | 2025-06-22 19:54:02 | INFO  | Task 6f35f6e4-f8ef-4d0f-a955-ab2373ef9655 is in state STARTED 2025-06-22 19:54:02.919133 | orchestrator | 2025-06-22 19:54:02 | INFO  | Task 6cf8e00e-92c6-4a54-a626-fb6881f79e1f is in state STARTED 2025-06-22 19:54:02.919949 | orchestrator | 2025-06-22 19:54:02 | INFO  | Task 5d805b9a-719c-44ed-bde2-6ef304fdf04a is in state STARTED 2025-06-22 19:54:02.921048 | orchestrator | 2025-06-22 19:54:02 | INFO  | Task 3adbb905-e6c0-42ff-b3cc-589abba5077f is in state SUCCESS 2025-06-22 19:54:02.924367 | orchestrator | 2025-06-22 19:54:02.924410 | orchestrator | 2025-06-22 19:54:02.924423 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-22 19:54:02.924436 | orchestrator | 2025-06-22 19:54:02.924447 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-22 19:54:02.924459 | orchestrator | Sunday 22 June 2025 19:52:45 +0000 (0:00:00.506) 0:00:00.506 *********** 2025-06-22 19:54:02.924470 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:54:02.924482 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:54:02.924494 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:54:02.924505 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:54:02.924516 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:54:02.924527 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:54:02.924538 | orchestrator | 2025-06-22 19:54:02.924550 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-22 19:54:02.924561 | orchestrator | Sunday 22 June 2025 19:52:46 +0000 (0:00:01.560) 0:00:02.067 *********** 2025-06-22 19:54:02.924572 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-06-22 19:54:02.924584 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-06-22 19:54:02.924595 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-06-22 19:54:02.924607 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-06-22 19:54:02.924618 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-06-22 19:54:02.924629 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-06-22 19:54:02.924640 | orchestrator | 2025-06-22 19:54:02.924652 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2025-06-22 19:54:02.924663 | orchestrator | 2025-06-22 19:54:02.924674 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2025-06-22 19:54:02.924686 | orchestrator | Sunday 22 June 2025 19:52:48 +0000 (0:00:01.967) 0:00:04.035 *********** 2025-06-22 19:54:02.924698 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 
19:54:02.924711 | orchestrator | 2025-06-22 19:54:02.924722 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-06-22 19:54:02.924734 | orchestrator | Sunday 22 June 2025 19:52:51 +0000 (0:00:02.904) 0:00:06.939 *********** 2025-06-22 19:54:02.924745 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-06-22 19:54:02.924757 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-06-22 19:54:02.924769 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-06-22 19:54:02.924787 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-06-22 19:54:02.924799 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-06-22 19:54:02.924811 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-06-22 19:54:02.924823 | orchestrator | 2025-06-22 19:54:02.924834 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-06-22 19:54:02.924863 | orchestrator | Sunday 22 June 2025 19:52:54 +0000 (0:00:02.445) 0:00:09.385 *********** 2025-06-22 19:54:02.924875 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-06-22 19:54:02.924886 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-06-22 19:54:02.924897 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-06-22 19:54:02.924909 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-06-22 19:54:02.924920 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-06-22 19:54:02.924931 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-06-22 19:54:02.924942 | orchestrator | 2025-06-22 19:54:02.924954 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-06-22 19:54:02.924965 | orchestrator | Sunday 22 June 2025 19:52:57 +0000 (0:00:03.046) 0:00:12.431 *********** 2025-06-22 19:54:02.924976 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2025-06-22 19:54:02.924987 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:54:02.924999 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2025-06-22 19:54:02.925010 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:54:02.925021 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2025-06-22 19:54:02.925033 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:54:02.925044 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2025-06-22 19:54:02.925055 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:54:02.925066 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2025-06-22 19:54:02.925078 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:54:02.925089 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2025-06-22 19:54:02.925101 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:54:02.925112 | orchestrator | 2025-06-22 19:54:02.925123 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2025-06-22 19:54:02.925134 | orchestrator | Sunday 22 June 2025 19:52:59 +0000 (0:00:02.211) 0:00:14.642 *********** 2025-06-22 19:54:02.925145 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:54:02.925157 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:54:02.925168 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:54:02.925179 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:54:02.925190 | orchestrator | 
skipping: [testbed-node-4] 2025-06-22 19:54:02.925201 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:54:02.925212 | orchestrator | 2025-06-22 19:54:02.925223 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2025-06-22 19:54:02.925235 | orchestrator | Sunday 22 June 2025 19:53:00 +0000 (0:00:01.062) 0:00:15.705 *********** 2025-06-22 19:54:02.925264 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-22 19:54:02.925281 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-22 19:54:02.925305 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-22 19:54:02.925343 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-22 19:54:02.925355 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': 
{'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-22 19:54:02.925373 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-22 19:54:02.925385 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-22 19:54:02.925396 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-22 19:54:02.925420 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-22 19:54:02.925432 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-22 19:54:02.925444 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-22 19:54:02.925502 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-22 19:54:02.925515 | orchestrator | 2025-06-22 19:54:02.925527 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2025-06-22 19:54:02.925538 | orchestrator | Sunday 22 June 2025 19:53:02 +0000 (0:00:02.177) 0:00:17.883 *********** 2025-06-22 19:54:02.925550 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-22 19:54:02.925568 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-22 19:54:02.925580 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-22 19:54:02.925591 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-22 19:54:02.925609 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-22 19:54:02.925628 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-22 19:54:02.925646 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-22 19:54:02.925661 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-22 19:54:02.925673 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-22 19:54:02.925684 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-22 19:54:02.925701 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-22 19:54:02.925713 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-22 19:54:02.925731 | orchestrator | 2025-06-22 19:54:02.925742 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2025-06-22 19:54:02.925753 | orchestrator | Sunday 22 June 2025 19:53:06 +0000 (0:00:03.602) 0:00:21.485 *********** 2025-06-22 19:54:02.925764 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:54:02.925775 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:54:02.925786 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:54:02.925797 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:54:02.925807 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:54:02.925818 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:54:02.925829 | orchestrator | 2025-06-22 19:54:02.925839 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2025-06-22 19:54:02.925850 | orchestrator | Sunday 22 June 2025 19:53:07 +0000 (0:00:01.269) 0:00:22.755 *********** 2025-06-22 19:54:02.925866 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-22 19:54:02.925878 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-22 19:54:02.925890 | 
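The "Ensuring config directories exist" and "Copying over config.json files for services" tasks above follow the usual kolla-ansible pattern: one directory and one rendered config.json per service under /etc/kolla/<service>/, which the container then consumes through the /var/lib/kolla/config_files/ bind mount listed in each item's volumes. A minimal sketch of that pattern, with variable and template names assumed rather than taken from the role:

- name: Ensuring config directories exist (sketch)
  ansible.builtin.file:
    path: "/etc/kolla/{{ item.key }}"
    state: directory
    mode: "0770"
  with_dict: "{{ openvswitch_services }}"  # variable name assumed

- name: Copying over config.json files for services (sketch)
  ansible.builtin.template:
    src: "{{ item.key }}.json.j2"  # template name assumed
    dest: "/etc/kolla/{{ item.key }}/config.json"
    mode: "0660"
  with_dict: "{{ openvswitch_services }}"

The healthcheck dict carried in each item (for example 'ovsdb-client list-dbs' and 'ovs-appctl version') is what ends up as the container healthcheck when the containers are (re)created in the later tasks.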
orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-22 19:54:02.925907 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-22 19:54:02.925924 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-22 19:54:02.925936 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-22 19:54:02.925951 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-22 19:54:02.925963 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-22 19:54:02.925975 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-22 19:54:02.925993 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-06-22 19:54:02.926077 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-22 19:54:02.926094 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': 
['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-06-22 19:54:02.926106 | orchestrator | 2025-06-22 19:54:02.926117 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-22 19:54:02.926133 | orchestrator | Sunday 22 June 2025 19:53:11 +0000 (0:00:03.417) 0:00:26.172 *********** 2025-06-22 19:54:02.926144 | orchestrator | 2025-06-22 19:54:02.926156 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-22 19:54:02.926252 | orchestrator | Sunday 22 June 2025 19:53:11 +0000 (0:00:00.232) 0:00:26.405 *********** 2025-06-22 19:54:02.926278 | orchestrator | 2025-06-22 19:54:02.926300 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-22 19:54:02.926340 | orchestrator | Sunday 22 June 2025 19:53:11 +0000 (0:00:00.148) 0:00:26.553 *********** 2025-06-22 19:54:02.926360 | orchestrator | 2025-06-22 19:54:02.926378 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-22 19:54:02.926419 | orchestrator | Sunday 22 June 2025 19:53:11 +0000 (0:00:00.292) 0:00:26.845 *********** 2025-06-22 19:54:02.926438 | orchestrator | 2025-06-22 19:54:02.926458 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-22 19:54:02.926470 | orchestrator | Sunday 22 June 2025 19:53:12 +0000 (0:00:00.312) 0:00:27.158 *********** 2025-06-22 19:54:02.926481 | orchestrator | 2025-06-22 19:54:02.926492 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-06-22 19:54:02.926503 | orchestrator | Sunday 22 June 2025 19:53:12 +0000 (0:00:00.209) 0:00:27.367 *********** 2025-06-22 19:54:02.926514 | orchestrator | 2025-06-22 19:54:02.926525 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2025-06-22 19:54:02.926535 | orchestrator | Sunday 22 June 2025 19:53:12 +0000 (0:00:00.310) 0:00:27.678 *********** 2025-06-22 19:54:02.926546 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:54:02.926557 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:54:02.926568 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:54:02.926589 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:54:02.926604 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:54:02.926623 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:54:02.926640 | orchestrator | 2025-06-22 19:54:02.926651 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2025-06-22 19:54:02.926663 | orchestrator | Sunday 22 June 2025 19:53:24 +0000 (0:00:11.636) 0:00:39.314 *********** 2025-06-22 19:54:02.926679 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:54:02.926698 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:54:02.926712 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:54:02.926723 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:54:02.926739 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:54:02.926757 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:54:02.926775 | orchestrator | 2025-06-22 
19:54:02.926794 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-06-22 19:54:02.926813 | orchestrator | Sunday 22 June 2025 19:53:26 +0000 (0:00:02.794) 0:00:42.109 *********** 2025-06-22 19:54:02.926833 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:54:02.926852 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:54:02.926871 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:54:02.926891 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:54:02.926910 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:54:02.926921 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:54:02.926932 | orchestrator | 2025-06-22 19:54:02.926943 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2025-06-22 19:54:02.926954 | orchestrator | Sunday 22 June 2025 19:53:37 +0000 (0:00:10.752) 0:00:52.862 *********** 2025-06-22 19:54:02.926976 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2025-06-22 19:54:02.926988 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2025-06-22 19:54:02.926999 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2025-06-22 19:54:02.927010 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2025-06-22 19:54:02.927021 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2025-06-22 19:54:02.927032 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2025-06-22 19:54:02.927042 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2025-06-22 19:54:02.927053 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2025-06-22 19:54:02.927064 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2025-06-22 19:54:02.927075 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2025-06-22 19:54:02.927085 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2025-06-22 19:54:02.927096 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2025-06-22 19:54:02.927111 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-06-22 19:54:02.927130 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-06-22 19:54:02.927149 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-06-22 19:54:02.927169 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-06-22 19:54:02.927199 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-06-22 
19:54:02.927211 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-06-22 19:54:02.927221 | orchestrator | 2025-06-22 19:54:02.927232 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2025-06-22 19:54:02.927243 | orchestrator | Sunday 22 June 2025 19:53:45 +0000 (0:00:08.037) 0:01:00.899 *********** 2025-06-22 19:54:02.927254 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2025-06-22 19:54:02.927265 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:54:02.927276 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2025-06-22 19:54:02.927287 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:54:02.927298 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2025-06-22 19:54:02.927331 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:54:02.927348 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2025-06-22 19:54:02.927360 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2025-06-22 19:54:02.927380 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2025-06-22 19:54:02.927400 | orchestrator | 2025-06-22 19:54:02.927418 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2025-06-22 19:54:02.927434 | orchestrator | Sunday 22 June 2025 19:53:48 +0000 (0:00:02.851) 0:01:03.751 *********** 2025-06-22 19:54:02.927446 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2025-06-22 19:54:02.927456 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:54:02.927467 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2025-06-22 19:54:02.927478 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:54:02.927489 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2025-06-22 19:54:02.927500 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:54:02.927511 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2025-06-22 19:54:02.927522 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2025-06-22 19:54:02.927533 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2025-06-22 19:54:02.927544 | orchestrator | 2025-06-22 19:54:02.927555 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-06-22 19:54:02.927566 | orchestrator | Sunday 22 June 2025 19:53:52 +0000 (0:00:03.715) 0:01:07.467 *********** 2025-06-22 19:54:02.927577 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:54:02.927588 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:54:02.927599 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:54:02.927610 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:54:02.927621 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:54:02.927632 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:54:02.927642 | orchestrator | 2025-06-22 19:54:02.927654 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:54:02.927666 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-22 19:54:02.927685 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-22 19:54:02.927697 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 
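The last openvswitch tasks above set per-host external_ids (system-id, hostname) in the Open_vSwitch table, then create the br-ex bridge with a vxlan0 port on testbed-node-0..2, while testbed-node-3..5 skip the bridge work. Expressed with plain ovs-vsctl and the openvswitch.openvswitch collection, the same outcome would look roughly like the sketch below; whether the role actually uses these modules or an ovs-vsctl wrapper is an assumption.

- name: Set system-id and hostname in the Open_vSwitch table (sketch)
  ansible.builtin.command: >
    ovs-vsctl set Open_vSwitch . external_ids:{{ item.name }}={{ item.value }}
  loop:
    - { name: system-id, value: "{{ inventory_hostname }}" }
    - { name: hostname, value: "{{ inventory_hostname }}" }
  changed_when: true

- name: Ensuring OVS bridge is properly setup (sketch)
  openvswitch.openvswitch.openvswitch_bridge:
    bridge: br-ex
    state: present

- name: Ensuring OVS ports are properly setup (sketch)
  openvswitch.openvswitch.openvswitch_port:
    bridge: br-ex
    port: vxlan0
    state: present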
2025-06-22 19:54:02.927708 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-22 19:54:02.927719 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-22 19:54:02.927731 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-22 19:54:02.927750 | orchestrator | 2025-06-22 19:54:02.927761 | orchestrator | 2025-06-22 19:54:02.927772 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:54:02.927783 | orchestrator | Sunday 22 June 2025 19:54:00 +0000 (0:00:08.605) 0:01:16.072 *********** 2025-06-22 19:54:02.927794 | orchestrator | =============================================================================== 2025-06-22 19:54:02.927805 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 19.36s 2025-06-22 19:54:02.927816 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 11.64s 2025-06-22 19:54:02.927826 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 8.04s 2025-06-22 19:54:02.927837 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.72s 2025-06-22 19:54:02.927848 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.60s 2025-06-22 19:54:02.927859 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 3.42s 2025-06-22 19:54:02.927869 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 3.05s 2025-06-22 19:54:02.927880 | orchestrator | openvswitch : include_tasks --------------------------------------------- 2.90s 2025-06-22 19:54:02.927891 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.85s 2025-06-22 19:54:02.927901 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 2.79s 2025-06-22 19:54:02.927912 | orchestrator | module-load : Load modules ---------------------------------------------- 2.45s 2025-06-22 19:54:02.927927 | orchestrator | module-load : Drop module persistence ----------------------------------- 2.21s 2025-06-22 19:54:02.927939 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.18s 2025-06-22 19:54:02.927949 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.97s 2025-06-22 19:54:02.927960 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.56s 2025-06-22 19:54:02.927971 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.51s 2025-06-22 19:54:02.927982 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.27s 2025-06-22 19:54:02.927993 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 1.06s 2025-06-22 19:54:02.928004 | orchestrator | 2025-06-22 19:54:02 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:54:02.928015 | orchestrator | 2025-06-22 19:54:02 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:54:02.928026 | orchestrator | 2025-06-22 19:54:02 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:54:05.958596 | orchestrator | 2025-06-22 19:54:05 | INFO  | Task 
6f35f6e4-f8ef-4d0f-a955-ab2373ef9655 is in state STARTED 2025-06-22 19:54:05.959001 | orchestrator | 2025-06-22 19:54:05 | INFO  | Task 6cf8e00e-92c6-4a54-a626-fb6881f79e1f is in state STARTED 2025-06-22 19:54:05.961102 | orchestrator | 2025-06-22 19:54:05 | INFO  | Task 5d805b9a-719c-44ed-bde2-6ef304fdf04a is in state STARTED 2025-06-22 19:54:05.961885 | orchestrator | 2025-06-22 19:54:05 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:54:05.962684 | orchestrator | 2025-06-22 19:54:05 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:54:05.962787 | orchestrator | 2025-06-22 19:54:05 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:54:08.991961 | orchestrator | 2025-06-22 19:54:08 | INFO  | Task 6f35f6e4-f8ef-4d0f-a955-ab2373ef9655 is in state STARTED 2025-06-22 19:54:08.992611 | orchestrator | 2025-06-22 19:54:08 | INFO  | Task 6cf8e00e-92c6-4a54-a626-fb6881f79e1f is in state STARTED 2025-06-22 19:54:08.993468 | orchestrator | 2025-06-22 19:54:08 | INFO  | Task 5d805b9a-719c-44ed-bde2-6ef304fdf04a is in state STARTED 2025-06-22 19:54:08.994489 | orchestrator | 2025-06-22 19:54:08 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:54:08.995049 | orchestrator | 2025-06-22 19:54:08 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:54:08.995384 | orchestrator | 2025-06-22 19:54:08 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:54:12.036938 | orchestrator | 2025-06-22 19:54:12 | INFO  | Task 6f35f6e4-f8ef-4d0f-a955-ab2373ef9655 is in state STARTED 2025-06-22 19:54:12.037462 | orchestrator | 2025-06-22 19:54:12 | INFO  | Task 6cf8e00e-92c6-4a54-a626-fb6881f79e1f is in state STARTED 2025-06-22 19:54:12.038826 | orchestrator | 2025-06-22 19:54:12 | INFO  | Task 5d805b9a-719c-44ed-bde2-6ef304fdf04a is in state STARTED 2025-06-22 19:54:12.041226 | orchestrator | 2025-06-22 19:54:12 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:54:12.042853 | orchestrator | 2025-06-22 19:54:12 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:54:12.042920 | orchestrator | 2025-06-22 19:54:12 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:54:15.100084 | orchestrator | 2025-06-22 19:54:15 | INFO  | Task 6f35f6e4-f8ef-4d0f-a955-ab2373ef9655 is in state STARTED 2025-06-22 19:54:15.100683 | orchestrator | 2025-06-22 19:54:15 | INFO  | Task 6cf8e00e-92c6-4a54-a626-fb6881f79e1f is in state STARTED 2025-06-22 19:54:15.101901 | orchestrator | 2025-06-22 19:54:15 | INFO  | Task 5d805b9a-719c-44ed-bde2-6ef304fdf04a is in state STARTED 2025-06-22 19:54:15.102975 | orchestrator | 2025-06-22 19:54:15 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:54:15.105136 | orchestrator | 2025-06-22 19:54:15 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:54:15.105261 | orchestrator | 2025-06-22 19:54:15 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:54:18.130979 | orchestrator | 2025-06-22 19:54:18 | INFO  | Task 6f35f6e4-f8ef-4d0f-a955-ab2373ef9655 is in state STARTED 2025-06-22 19:54:18.132219 | orchestrator | 2025-06-22 19:54:18 | INFO  | Task 6cf8e00e-92c6-4a54-a626-fb6881f79e1f is in state SUCCESS 2025-06-22 19:54:18.134246 | orchestrator | 2025-06-22 19:54:18.134283 | orchestrator | 2025-06-22 19:54:18.134294 | orchestrator | PLAY [Prepare all k3s nodes] 
*************************************************** 2025-06-22 19:54:18.134305 | orchestrator | 2025-06-22 19:54:18.134367 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2025-06-22 19:54:18.134385 | orchestrator | Sunday 22 June 2025 19:49:49 +0000 (0:00:00.251) 0:00:00.251 *********** 2025-06-22 19:54:18.134453 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:54:18.134464 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:54:18.134474 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:54:18.134483 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:54:18.134493 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:54:18.134502 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:54:18.134512 | orchestrator | 2025-06-22 19:54:18.134521 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2025-06-22 19:54:18.134532 | orchestrator | Sunday 22 June 2025 19:49:51 +0000 (0:00:01.269) 0:00:01.520 *********** 2025-06-22 19:54:18.134541 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:54:18.134552 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:54:18.134561 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:54:18.134571 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:54:18.134580 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:54:18.134590 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:54:18.134599 | orchestrator | 2025-06-22 19:54:18.134609 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2025-06-22 19:54:18.134635 | orchestrator | Sunday 22 June 2025 19:49:51 +0000 (0:00:00.811) 0:00:02.332 *********** 2025-06-22 19:54:18.134645 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:54:18.134654 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:54:18.134664 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:54:18.134673 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:54:18.134683 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:54:18.134701 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:54:18.134711 | orchestrator | 2025-06-22 19:54:18.134720 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2025-06-22 19:54:18.134730 | orchestrator | Sunday 22 June 2025 19:49:52 +0000 (0:00:00.936) 0:00:03.268 *********** 2025-06-22 19:54:18.134739 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:54:18.134749 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:54:18.134758 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:54:18.134767 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:54:18.134777 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:54:18.134786 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:54:18.134796 | orchestrator | 2025-06-22 19:54:18.134805 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2025-06-22 19:54:18.134815 | orchestrator | Sunday 22 June 2025 19:49:55 +0000 (0:00:02.823) 0:00:06.092 *********** 2025-06-22 19:54:18.134824 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:54:18.134834 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:54:18.134844 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:54:18.134855 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:54:18.134865 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:54:18.134875 | orchestrator | changed: 
[testbed-node-2] 2025-06-22 19:54:18.134886 | orchestrator | 2025-06-22 19:54:18.134897 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2025-06-22 19:54:18.134908 | orchestrator | Sunday 22 June 2025 19:49:57 +0000 (0:00:01.886) 0:00:07.979 *********** 2025-06-22 19:54:18.134946 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:54:18.134957 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:54:18.134968 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:54:18.134979 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:54:18.134989 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:54:18.135000 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:54:18.135010 | orchestrator | 2025-06-22 19:54:18.135021 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2025-06-22 19:54:18.135032 | orchestrator | Sunday 22 June 2025 19:49:58 +0000 (0:00:01.303) 0:00:09.282 *********** 2025-06-22 19:54:18.135042 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:54:18.135053 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:54:18.135064 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:54:18.135075 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:54:18.135085 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:54:18.135096 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:54:18.135106 | orchestrator | 2025-06-22 19:54:18.135118 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2025-06-22 19:54:18.135129 | orchestrator | Sunday 22 June 2025 19:49:59 +0000 (0:00:00.908) 0:00:10.191 *********** 2025-06-22 19:54:18.135139 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:54:18.135150 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:54:18.135161 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:54:18.135171 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:54:18.135182 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:54:18.135193 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:54:18.135203 | orchestrator | 2025-06-22 19:54:18.135212 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2025-06-22 19:54:18.135222 | orchestrator | Sunday 22 June 2025 19:50:00 +0000 (0:00:00.880) 0:00:11.072 *********** 2025-06-22 19:54:18.135232 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-22 19:54:18.135248 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-22 19:54:18.135257 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:54:18.135267 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-22 19:54:18.135277 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-22 19:54:18.135286 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:54:18.135296 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-22 19:54:18.135305 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-22 19:54:18.135339 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:54:18.135350 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-22 19:54:18.135372 | orchestrator | skipping: [testbed-node-0] => 
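The k3s_prereq forwarding tasks above are plain sysctl toggles. Enabling IPv4/IPv6 forwarding and IPv6 router advertisements would typically be done with ansible.posix.sysctl, roughly as follows; the key names are the standard kernel ones, while the exact variables the role uses are not visible in the log.

- name: Enable IPv4 forwarding (sketch)
  ansible.posix.sysctl:
    name: net.ipv4.ip_forward
    value: "1"
    state: present
    reload: true

- name: Enable IPv6 forwarding (sketch)
  ansible.posix.sysctl:
    name: net.ipv6.conf.all.forwarding
    value: "1"
    state: present
    reload: true

- name: Enable IPv6 router advertisements (sketch)
  ansible.posix.sysctl:
    # accept_ra=2 keeps router advertisements accepted even with forwarding on.
    name: net.ipv6.conf.all.accept_ra
    value: "2"
    state: present
    reload: true

The br_netfilter and bridge-nf-call-iptables tasks that are skipped here would follow the same modprobe/sysctl pattern when they apply.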
(item=net.bridge.bridge-nf-call-ip6tables)  2025-06-22 19:54:18.135383 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:54:18.135393 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-22 19:54:18.135402 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-22 19:54:18.135416 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:54:18.135427 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-22 19:54:18.135436 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-22 19:54:18.135445 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:54:18.135455 | orchestrator | 2025-06-22 19:54:18.135498 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2025-06-22 19:54:18.135508 | orchestrator | Sunday 22 June 2025 19:50:02 +0000 (0:00:01.534) 0:00:12.606 *********** 2025-06-22 19:54:18.135518 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:54:18.135528 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:54:18.135537 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:54:18.135547 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:54:18.135556 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:54:18.135566 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:54:18.135575 | orchestrator | 2025-06-22 19:54:18.135585 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2025-06-22 19:54:18.135595 | orchestrator | Sunday 22 June 2025 19:50:04 +0000 (0:00:02.234) 0:00:14.841 *********** 2025-06-22 19:54:18.135605 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:54:18.135614 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:54:18.135624 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:54:18.135633 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:54:18.135643 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:54:18.135652 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:54:18.135662 | orchestrator | 2025-06-22 19:54:18.135671 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2025-06-22 19:54:18.135681 | orchestrator | Sunday 22 June 2025 19:50:05 +0000 (0:00:00.878) 0:00:15.719 *********** 2025-06-22 19:54:18.135690 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:54:18.135724 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:54:18.135735 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:54:18.135745 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:54:18.135754 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:54:18.135764 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:54:18.135773 | orchestrator | 2025-06-22 19:54:18.135783 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2025-06-22 19:54:18.135792 | orchestrator | Sunday 22 June 2025 19:50:12 +0000 (0:00:07.178) 0:00:22.898 *********** 2025-06-22 19:54:18.135802 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:54:18.135812 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:54:18.135821 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:54:18.135838 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:54:18.135848 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:54:18.135858 | orchestrator | skipping: 
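The "Download k3s binary x64" task above fetches the k3s binary onto each node. A minimal sketch of that step is shown below; the release URL pattern and the absence of checksum handling are assumptions based on the upstream k3s release layout, not details taken from the log.

- name: Download k3s binary x64 (sketch)
  ansible.builtin.get_url:
    # Release URL pattern assumed for illustration.
    url: "https://github.com/k3s-io/k3s/releases/download/{{ k3s_version }}/k3s"
    dest: /usr/local/bin/k3s
    mode: "0755"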
[testbed-node-2] 2025-06-22 19:54:18.135867 | orchestrator | 2025-06-22 19:54:18.135877 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2025-06-22 19:54:18.135886 | orchestrator | Sunday 22 June 2025 19:50:13 +0000 (0:00:01.165) 0:00:24.063 *********** 2025-06-22 19:54:18.135896 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:54:18.135905 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:54:18.135915 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:54:18.135925 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:54:18.135934 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:54:18.135943 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:54:18.135953 | orchestrator | 2025-06-22 19:54:18.135963 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2025-06-22 19:54:18.135973 | orchestrator | Sunday 22 June 2025 19:50:16 +0000 (0:00:02.620) 0:00:26.684 *********** 2025-06-22 19:54:18.135983 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:54:18.135992 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:54:18.136001 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:54:18.136011 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:54:18.136021 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:54:18.136030 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:54:18.136064 | orchestrator | 2025-06-22 19:54:18.136076 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2025-06-22 19:54:18.136086 | orchestrator | Sunday 22 June 2025 19:50:17 +0000 (0:00:01.448) 0:00:28.132 *********** 2025-06-22 19:54:18.136095 | orchestrator | skipping: [testbed-node-3] => (item=rancher)  2025-06-22 19:54:18.136105 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)  2025-06-22 19:54:18.136115 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:54:18.136124 | orchestrator | skipping: [testbed-node-4] => (item=rancher)  2025-06-22 19:54:18.136134 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)  2025-06-22 19:54:18.136143 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:54:18.136152 | orchestrator | skipping: [testbed-node-5] => (item=rancher)  2025-06-22 19:54:18.136162 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)  2025-06-22 19:54:18.136171 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:54:18.136181 | orchestrator | skipping: [testbed-node-0] => (item=rancher)  2025-06-22 19:54:18.136190 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)  2025-06-22 19:54:18.136200 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:54:18.136209 | orchestrator | skipping: [testbed-node-1] => (item=rancher)  2025-06-22 19:54:18.136225 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)  2025-06-22 19:54:18.136235 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:54:18.136244 | orchestrator | skipping: [testbed-node-2] => (item=rancher)  2025-06-22 19:54:18.136254 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)  2025-06-22 19:54:18.136264 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:54:18.136273 | orchestrator | 2025-06-22 19:54:18.136283 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2025-06-22 19:54:18.136299 | orchestrator | Sunday 22 June 2025 
19:50:18 +0000 (0:00:00.969) 0:00:29.102 *********** 2025-06-22 19:54:18.136342 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:54:18.136355 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:54:18.136364 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:54:18.136374 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:54:18.136384 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:54:18.136398 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:54:18.136408 | orchestrator | 2025-06-22 19:54:18.136417 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2025-06-22 19:54:18.136439 | orchestrator | 2025-06-22 19:54:18.136449 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2025-06-22 19:54:18.136458 | orchestrator | Sunday 22 June 2025 19:50:20 +0000 (0:00:01.393) 0:00:30.496 *********** 2025-06-22 19:54:18.136468 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:54:18.136478 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:54:18.136487 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:54:18.136497 | orchestrator | 2025-06-22 19:54:18.136506 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2025-06-22 19:54:18.136516 | orchestrator | Sunday 22 June 2025 19:50:21 +0000 (0:00:01.448) 0:00:31.944 *********** 2025-06-22 19:54:18.136525 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:54:18.136535 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:54:18.136544 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:54:18.136554 | orchestrator | 2025-06-22 19:54:18.136564 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2025-06-22 19:54:18.136573 | orchestrator | Sunday 22 June 2025 19:50:22 +0000 (0:00:01.432) 0:00:33.377 *********** 2025-06-22 19:54:18.136583 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:54:18.136592 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:54:18.136602 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:54:18.136611 | orchestrator | 2025-06-22 19:54:18.136621 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2025-06-22 19:54:18.136630 | orchestrator | Sunday 22 June 2025 19:50:24 +0000 (0:00:01.361) 0:00:34.740 *********** 2025-06-22 19:54:18.136640 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:54:18.136649 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:54:18.136659 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:54:18.136668 | orchestrator | 2025-06-22 19:54:18.136678 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2025-06-22 19:54:18.136687 | orchestrator | Sunday 22 June 2025 19:50:25 +0000 (0:00:01.247) 0:00:35.987 *********** 2025-06-22 19:54:18.136697 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:54:18.136730 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:54:18.136741 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:54:18.136751 | orchestrator | 2025-06-22 19:54:18.136761 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2025-06-22 19:54:18.136770 | orchestrator | Sunday 22 June 2025 19:50:25 +0000 (0:00:00.368) 0:00:36.356 *********** 2025-06-22 19:54:18.136780 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:54:18.136789 | orchestrator | 
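For reference, the k3s_download tasks above ("Download k3s binary x64" plus the skipped arm64/armhf variants) amount to an architecture-conditional fetch of the k3s release binary. A minimal Ansible sketch of such a task, assuming a k3s_version variable and the upstream GitHub release URL (neither taken from the actual role), could look like this:

    # Hypothetical sketch, not the role's real code: fetch the x86_64 k3s
    # binary and install it executable under /usr/local/bin.
    - name: Download k3s binary x64
      ansible.builtin.get_url:
        url: "https://github.com/k3s-io/k3s/releases/download/{{ k3s_version }}/k3s"  # URL pattern and variable name are assumptions
        dest: /usr/local/bin/k3s
        owner: root
        group: root
        mode: "0755"
      when: ansible_architecture == "x86_64"

The arm64 and armhf tasks seen in the log would be the same fetch gated on the corresponding ansible_architecture values, which is why they are skipped on these x86_64 nodes.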
2025-06-22 19:54:18.136799 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2025-06-22 19:54:18.136809 | orchestrator | Sunday 22 June 2025 19:50:26 +0000 (0:00:00.867) 0:00:37.223 *********** 2025-06-22 19:54:18.136818 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:54:18.136828 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:54:18.136837 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:54:18.136847 | orchestrator | 2025-06-22 19:54:18.136857 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2025-06-22 19:54:18.136866 | orchestrator | Sunday 22 June 2025 19:50:29 +0000 (0:00:02.649) 0:00:39.873 *********** 2025-06-22 19:54:18.136876 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:54:18.136886 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:54:18.136895 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:54:18.136905 | orchestrator | 2025-06-22 19:54:18.136914 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2025-06-22 19:54:18.136924 | orchestrator | Sunday 22 June 2025 19:50:30 +0000 (0:00:00.986) 0:00:40.859 *********** 2025-06-22 19:54:18.136933 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:54:18.136943 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:54:18.136952 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:54:18.136962 | orchestrator | 2025-06-22 19:54:18.136971 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2025-06-22 19:54:18.136981 | orchestrator | Sunday 22 June 2025 19:50:31 +0000 (0:00:01.058) 0:00:41.918 *********** 2025-06-22 19:54:18.136996 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:54:18.137005 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:54:18.137015 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:54:18.137024 | orchestrator | 2025-06-22 19:54:18.137034 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2025-06-22 19:54:18.137043 | orchestrator | Sunday 22 June 2025 19:50:33 +0000 (0:00:01.865) 0:00:43.783 *********** 2025-06-22 19:54:18.137053 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:54:18.137062 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:54:18.137072 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:54:18.137082 | orchestrator | 2025-06-22 19:54:18.137091 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2025-06-22 19:54:18.137101 | orchestrator | Sunday 22 June 2025 19:50:33 +0000 (0:00:00.382) 0:00:44.165 *********** 2025-06-22 19:54:18.137110 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:54:18.137120 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:54:18.137129 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:54:18.137138 | orchestrator | 2025-06-22 19:54:18.137148 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2025-06-22 19:54:18.137158 | orchestrator | Sunday 22 June 2025 19:50:34 +0000 (0:00:00.331) 0:00:44.496 *********** 2025-06-22 19:54:18.137167 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:54:18.137177 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:54:18.137187 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:54:18.137196 | orchestrator | 2025-06-22 19:54:18.137206 | orchestrator | TASK [k3s_server : Verify that all nodes 
actually joined (check k3s-init.service if this fails)] *** 2025-06-22 19:54:18.137216 | orchestrator | Sunday 22 June 2025 19:50:35 +0000 (0:00:01.282) 0:00:45.779 *********** 2025-06-22 19:54:18.137232 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-06-22 19:54:18.137242 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-06-22 19:54:18.137257 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-06-22 19:54:18.137267 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-06-22 19:54:18.137277 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-06-22 19:54:18.137287 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-06-22 19:54:18.137296 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-06-22 19:54:18.137306 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-06-22 19:54:18.137328 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-06-22 19:54:18.137338 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-06-22 19:54:18.137348 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-06-22 19:54:18.137357 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-06-22 19:54:18.137367 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 
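The retry storm above comes from a wait-style task: the play polls the cluster until every master appears as a node, marking each unsuccessful attempt "FAILED - RETRYING" (20 attempts are allowed in this run) until the join succeeds. A minimal sketch of such a check, with the inventory group name and retry delay as illustrative assumptions:

    # Hypothetical sketch: poll `k3s kubectl get nodes` until as many node
    # names are reported as there are hosts in the (assumed) master group.
    - name: Verify that all nodes actually joined
      ansible.builtin.command:
        cmd: k3s kubectl get nodes -o=jsonpath='{.items[*].metadata.name}'
      register: joined_nodes
      until: joined_nodes.rc == 0 and (joined_nodes.stdout.split() | length) == (groups['k3s_masters'] | length)  # group name assumed
      retries: 20
      delay: 10  # seconds between attempts; assumed
      changed_when: false

Each "FAILED - RETRYING" line above is one attempt of such an until loop whose condition was not yet satisfied; the final "ok" results mark the attempt on which all masters had joined.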
2025-06-22 19:54:18.137384 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:54:18.137394 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:54:18.137403 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:54:18.137413 | orchestrator | 2025-06-22 19:54:18.137422 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2025-06-22 19:54:18.137432 | orchestrator | Sunday 22 June 2025 19:51:31 +0000 (0:00:55.754) 0:01:41.533 *********** 2025-06-22 19:54:18.137442 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:54:18.137451 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:54:18.137460 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:54:18.137470 | orchestrator | 2025-06-22 19:54:18.137479 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2025-06-22 19:54:18.137489 | orchestrator | Sunday 22 June 2025 19:51:31 +0000 (0:00:00.297) 0:01:41.831 *********** 2025-06-22 19:54:18.137498 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:54:18.137508 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:54:18.137517 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:54:18.137527 | orchestrator | 2025-06-22 19:54:18.137536 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2025-06-22 19:54:18.137546 | orchestrator | Sunday 22 June 2025 19:51:32 +0000 (0:00:01.101) 0:01:42.933 *********** 2025-06-22 19:54:18.137555 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:54:18.137565 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:54:18.137574 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:54:18.137583 | orchestrator | 2025-06-22 19:54:18.137593 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2025-06-22 19:54:18.137602 | orchestrator | Sunday 22 June 2025 19:51:33 +0000 (0:00:01.165) 0:01:44.098 *********** 2025-06-22 19:54:18.137612 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:54:18.137621 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:54:18.137631 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:54:18.137640 | orchestrator | 2025-06-22 19:54:18.137650 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2025-06-22 19:54:18.137659 | orchestrator | Sunday 22 June 2025 19:51:49 +0000 (0:00:15.656) 0:01:59.754 *********** 2025-06-22 19:54:18.137669 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:54:18.137678 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:54:18.137688 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:54:18.137697 | orchestrator | 2025-06-22 19:54:18.137707 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2025-06-22 19:54:18.137717 | orchestrator | Sunday 22 June 2025 19:51:50 +0000 (0:00:00.761) 0:02:00.516 *********** 2025-06-22 19:54:18.137726 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:54:18.137735 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:54:18.137745 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:54:18.137754 | orchestrator | 2025-06-22 19:54:18.137764 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2025-06-22 19:54:18.137773 | orchestrator | Sunday 22 June 2025 19:51:50 +0000 (0:00:00.580) 0:02:01.097 *********** 2025-06-22 19:54:18.137783 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:54:18.137792 | orchestrator | changed: 
[testbed-node-1] 2025-06-22 19:54:18.137802 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:54:18.137811 | orchestrator | 2025-06-22 19:54:18.137821 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2025-06-22 19:54:18.137830 | orchestrator | Sunday 22 June 2025 19:51:51 +0000 (0:00:00.694) 0:02:01.791 *********** 2025-06-22 19:54:18.137840 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:54:18.137849 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:54:18.137859 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:54:18.137868 | orchestrator | 2025-06-22 19:54:18.137883 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2025-06-22 19:54:18.137893 | orchestrator | Sunday 22 June 2025 19:51:52 +0000 (0:00:00.902) 0:02:02.694 *********** 2025-06-22 19:54:18.137902 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:54:18.137912 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:54:18.137928 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:54:18.137938 | orchestrator | 2025-06-22 19:54:18.137952 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2025-06-22 19:54:18.137961 | orchestrator | Sunday 22 June 2025 19:51:52 +0000 (0:00:00.323) 0:02:03.018 *********** 2025-06-22 19:54:18.137971 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:54:18.137981 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:54:18.137991 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:54:18.138000 | orchestrator | 2025-06-22 19:54:18.138010 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2025-06-22 19:54:18.138062 | orchestrator | Sunday 22 June 2025 19:51:53 +0000 (0:00:00.657) 0:02:03.675 *********** 2025-06-22 19:54:18.138073 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:54:18.138083 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:54:18.138093 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:54:18.138102 | orchestrator | 2025-06-22 19:54:18.138112 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2025-06-22 19:54:18.138121 | orchestrator | Sunday 22 June 2025 19:51:53 +0000 (0:00:00.602) 0:02:04.277 *********** 2025-06-22 19:54:18.138131 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:54:18.138140 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:54:18.138150 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:54:18.138159 | orchestrator | 2025-06-22 19:54:18.138169 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2025-06-22 19:54:18.138179 | orchestrator | Sunday 22 June 2025 19:51:54 +0000 (0:00:01.018) 0:02:05.296 *********** 2025-06-22 19:54:18.138188 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:54:18.138198 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:54:18.138207 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:54:18.138217 | orchestrator | 2025-06-22 19:54:18.138226 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2025-06-22 19:54:18.138236 | orchestrator | Sunday 22 June 2025 19:51:55 +0000 (0:00:00.738) 0:02:06.035 *********** 2025-06-22 19:54:18.138246 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:54:18.138255 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:54:18.138265 | orchestrator | skipping: [testbed-node-2] 2025-06-22 
19:54:18.138275 | orchestrator | 2025-06-22 19:54:18.138284 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2025-06-22 19:54:18.138294 | orchestrator | Sunday 22 June 2025 19:51:55 +0000 (0:00:00.285) 0:02:06.320 *********** 2025-06-22 19:54:18.138303 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:54:18.138335 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:54:18.138345 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:54:18.138355 | orchestrator | 2025-06-22 19:54:18.138364 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2025-06-22 19:54:18.138374 | orchestrator | Sunday 22 June 2025 19:51:56 +0000 (0:00:00.302) 0:02:06.622 *********** 2025-06-22 19:54:18.138383 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:54:18.138400 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:54:18.138410 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:54:18.138420 | orchestrator | 2025-06-22 19:54:18.138429 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2025-06-22 19:54:18.138439 | orchestrator | Sunday 22 June 2025 19:51:57 +0000 (0:00:00.910) 0:02:07.532 *********** 2025-06-22 19:54:18.138449 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:54:18.138458 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:54:18.138468 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:54:18.138477 | orchestrator | 2025-06-22 19:54:18.138487 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2025-06-22 19:54:18.138497 | orchestrator | Sunday 22 June 2025 19:51:57 +0000 (0:00:00.757) 0:02:08.290 *********** 2025-06-22 19:54:18.138507 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-06-22 19:54:18.138523 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-06-22 19:54:18.138532 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2025-06-22 19:54:18.138542 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-06-22 19:54:18.138552 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-06-22 19:54:18.138561 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2025-06-22 19:54:18.138571 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-06-22 19:54:18.138580 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-06-22 19:54:18.138590 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2025-06-22 19:54:18.138600 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2025-06-22 19:54:18.138609 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-06-22 19:54:18.138619 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-06-22 19:54:18.138628 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2025-06-22 19:54:18.138638 | orchestrator 
| changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-06-22 19:54:18.138647 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-06-22 19:54:18.138668 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-06-22 19:54:18.138678 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-06-22 19:54:18.138688 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-06-22 19:54:18.138698 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-06-22 19:54:18.138707 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-06-22 19:54:18.138717 | orchestrator | 2025-06-22 19:54:18.138726 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2025-06-22 19:54:18.138736 | orchestrator | 2025-06-22 19:54:18.139263 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2025-06-22 19:54:18.139277 | orchestrator | Sunday 22 June 2025 19:52:00 +0000 (0:00:03.068) 0:02:11.358 *********** 2025-06-22 19:54:18.139287 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:54:18.139296 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:54:18.139306 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:54:18.139333 | orchestrator | 2025-06-22 19:54:18.139344 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2025-06-22 19:54:18.139353 | orchestrator | Sunday 22 June 2025 19:52:01 +0000 (0:00:00.453) 0:02:11.811 *********** 2025-06-22 19:54:18.139363 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:54:18.139372 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:54:18.139382 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:54:18.139392 | orchestrator | 2025-06-22 19:54:18.139401 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2025-06-22 19:54:18.139411 | orchestrator | Sunday 22 June 2025 19:52:01 +0000 (0:00:00.602) 0:02:12.414 *********** 2025-06-22 19:54:18.139420 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:54:18.139430 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:54:18.139440 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:54:18.139449 | orchestrator | 2025-06-22 19:54:18.139459 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2025-06-22 19:54:18.139468 | orchestrator | Sunday 22 June 2025 19:52:02 +0000 (0:00:00.322) 0:02:12.736 *********** 2025-06-22 19:54:18.139489 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 19:54:18.139498 | orchestrator | 2025-06-22 19:54:18.139508 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2025-06-22 19:54:18.139518 | orchestrator | Sunday 22 June 2025 19:52:02 +0000 (0:00:00.596) 0:02:13.333 *********** 2025-06-22 19:54:18.139527 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:54:18.139537 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:54:18.139547 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:54:18.139556 | orchestrator | 2025-06-22 19:54:18.139566 | orchestrator | TASK [k3s_agent : Copy K3s 
http_proxy conf file] ******************************* 2025-06-22 19:54:18.139575 | orchestrator | Sunday 22 June 2025 19:52:03 +0000 (0:00:00.309) 0:02:13.642 *********** 2025-06-22 19:54:18.139585 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:54:18.139595 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:54:18.139604 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:54:18.139614 | orchestrator | 2025-06-22 19:54:18.139623 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2025-06-22 19:54:18.139633 | orchestrator | Sunday 22 June 2025 19:52:03 +0000 (0:00:00.315) 0:02:13.957 *********** 2025-06-22 19:54:18.139642 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:54:18.139652 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:54:18.139661 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:54:18.139671 | orchestrator | 2025-06-22 19:54:18.139680 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2025-06-22 19:54:18.139690 | orchestrator | Sunday 22 June 2025 19:52:03 +0000 (0:00:00.271) 0:02:14.229 *********** 2025-06-22 19:54:18.139700 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:54:18.139709 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:54:18.139719 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:54:18.139728 | orchestrator | 2025-06-22 19:54:18.139738 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2025-06-22 19:54:18.139747 | orchestrator | Sunday 22 June 2025 19:52:05 +0000 (0:00:01.341) 0:02:15.571 *********** 2025-06-22 19:54:18.139757 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:54:18.139766 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:54:18.139776 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:54:18.139785 | orchestrator | 2025-06-22 19:54:18.139795 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-06-22 19:54:18.139805 | orchestrator | 2025-06-22 19:54:18.139814 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-06-22 19:54:18.139824 | orchestrator | Sunday 22 June 2025 19:52:13 +0000 (0:00:08.682) 0:02:24.253 *********** 2025-06-22 19:54:18.139834 | orchestrator | ok: [testbed-manager] 2025-06-22 19:54:18.139843 | orchestrator | 2025-06-22 19:54:18.139853 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-06-22 19:54:18.139863 | orchestrator | Sunday 22 June 2025 19:52:14 +0000 (0:00:00.738) 0:02:24.991 *********** 2025-06-22 19:54:18.139872 | orchestrator | changed: [testbed-manager] 2025-06-22 19:54:18.139882 | orchestrator | 2025-06-22 19:54:18.139891 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-06-22 19:54:18.139901 | orchestrator | Sunday 22 June 2025 19:52:14 +0000 (0:00:00.384) 0:02:25.375 *********** 2025-06-22 19:54:18.139910 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-06-22 19:54:18.139920 | orchestrator | 2025-06-22 19:54:18.139934 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-06-22 19:54:18.139944 | orchestrator | Sunday 22 June 2025 19:52:15 +0000 (0:00:00.786) 0:02:26.162 *********** 2025-06-22 19:54:18.139953 | orchestrator | changed: [testbed-manager] 2025-06-22 19:54:18.139963 | orchestrator | 2025-06-22 
19:54:18.139973 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-06-22 19:54:18.139983 | orchestrator | Sunday 22 June 2025 19:52:16 +0000 (0:00:00.762) 0:02:26.924 *********** 2025-06-22 19:54:18.140005 | orchestrator | changed: [testbed-manager] 2025-06-22 19:54:18.140015 | orchestrator | 2025-06-22 19:54:18.140025 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-06-22 19:54:18.140035 | orchestrator | Sunday 22 June 2025 19:52:17 +0000 (0:00:00.604) 0:02:27.528 *********** 2025-06-22 19:54:18.140044 | orchestrator | changed: [testbed-manager -> localhost] 2025-06-22 19:54:18.140054 | orchestrator | 2025-06-22 19:54:18.140064 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-06-22 19:54:18.140073 | orchestrator | Sunday 22 June 2025 19:52:18 +0000 (0:00:01.632) 0:02:29.161 *********** 2025-06-22 19:54:18.140083 | orchestrator | changed: [testbed-manager -> localhost] 2025-06-22 19:54:18.140092 | orchestrator | 2025-06-22 19:54:18.140102 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-06-22 19:54:18.140112 | orchestrator | Sunday 22 June 2025 19:52:19 +0000 (0:00:00.834) 0:02:29.995 *********** 2025-06-22 19:54:18.140121 | orchestrator | changed: [testbed-manager] 2025-06-22 19:54:18.140131 | orchestrator | 2025-06-22 19:54:18.140140 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-06-22 19:54:18.140150 | orchestrator | Sunday 22 June 2025 19:52:19 +0000 (0:00:00.438) 0:02:30.434 *********** 2025-06-22 19:54:18.140159 | orchestrator | changed: [testbed-manager] 2025-06-22 19:54:18.140169 | orchestrator | 2025-06-22 19:54:18.140179 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2025-06-22 19:54:18.140188 | orchestrator | 2025-06-22 19:54:18.140198 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2025-06-22 19:54:18.140207 | orchestrator | Sunday 22 June 2025 19:52:20 +0000 (0:00:00.442) 0:02:30.876 *********** 2025-06-22 19:54:18.140217 | orchestrator | ok: [testbed-manager] 2025-06-22 19:54:18.140226 | orchestrator | 2025-06-22 19:54:18.140236 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2025-06-22 19:54:18.140246 | orchestrator | Sunday 22 June 2025 19:52:20 +0000 (0:00:00.149) 0:02:31.025 *********** 2025-06-22 19:54:18.140255 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2025-06-22 19:54:18.140265 | orchestrator | 2025-06-22 19:54:18.140275 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ****************** 2025-06-22 19:54:18.140284 | orchestrator | Sunday 22 June 2025 19:52:21 +0000 (0:00:00.438) 0:02:31.464 *********** 2025-06-22 19:54:18.140294 | orchestrator | ok: [testbed-manager] 2025-06-22 19:54:18.140304 | orchestrator | 2025-06-22 19:54:18.140350 | orchestrator | TASK [kubectl : Install apt-transport-https package] *************************** 2025-06-22 19:54:18.140361 | orchestrator | Sunday 22 June 2025 19:52:21 +0000 (0:00:00.858) 0:02:32.323 *********** 2025-06-22 19:54:18.140371 | orchestrator | ok: [testbed-manager] 2025-06-22 19:54:18.140380 | orchestrator | 2025-06-22 19:54:18.140390 | orchestrator | TASK [kubectl : Add repository gpg key] 
**************************************** 2025-06-22 19:54:18.140400 | orchestrator | Sunday 22 June 2025 19:52:23 +0000 (0:00:01.849) 0:02:34.172 *********** 2025-06-22 19:54:18.140410 | orchestrator | changed: [testbed-manager] 2025-06-22 19:54:18.140419 | orchestrator | 2025-06-22 19:54:18.140429 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2025-06-22 19:54:18.140439 | orchestrator | Sunday 22 June 2025 19:52:24 +0000 (0:00:00.845) 0:02:35.018 *********** 2025-06-22 19:54:18.140448 | orchestrator | ok: [testbed-manager] 2025-06-22 19:54:18.140458 | orchestrator | 2025-06-22 19:54:18.140467 | orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 2025-06-22 19:54:18.140477 | orchestrator | Sunday 22 June 2025 19:52:25 +0000 (0:00:00.494) 0:02:35.513 *********** 2025-06-22 19:54:18.140487 | orchestrator | changed: [testbed-manager] 2025-06-22 19:54:18.140496 | orchestrator | 2025-06-22 19:54:18.140506 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2025-06-22 19:54:18.140516 | orchestrator | Sunday 22 June 2025 19:52:32 +0000 (0:00:07.815) 0:02:43.328 *********** 2025-06-22 19:54:18.140526 | orchestrator | changed: [testbed-manager] 2025-06-22 19:54:18.140542 | orchestrator | 2025-06-22 19:54:18.140552 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2025-06-22 19:54:18.140561 | orchestrator | Sunday 22 June 2025 19:52:46 +0000 (0:00:13.649) 0:02:56.978 *********** 2025-06-22 19:54:18.140571 | orchestrator | ok: [testbed-manager] 2025-06-22 19:54:18.140580 | orchestrator | 2025-06-22 19:54:18.140590 | orchestrator | PLAY [Run post actions on master nodes] **************************************** 2025-06-22 19:54:18.140600 | orchestrator | 2025-06-22 19:54:18.140610 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2025-06-22 19:54:18.140619 | orchestrator | Sunday 22 June 2025 19:52:47 +0000 (0:00:00.610) 0:02:57.588 *********** 2025-06-22 19:54:18.140629 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:54:18.140639 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:54:18.140648 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:54:18.140658 | orchestrator | 2025-06-22 19:54:18.140668 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2025-06-22 19:54:18.140678 | orchestrator | Sunday 22 June 2025 19:52:47 +0000 (0:00:00.656) 0:02:58.245 *********** 2025-06-22 19:54:18.140687 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:54:18.140697 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:54:18.140707 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:54:18.140716 | orchestrator | 2025-06-22 19:54:18.140726 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2025-06-22 19:54:18.140736 | orchestrator | Sunday 22 June 2025 19:52:48 +0000 (0:00:00.471) 0:02:58.717 *********** 2025-06-22 19:54:18.140745 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:54:18.140755 | orchestrator | 2025-06-22 19:54:18.140769 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2025-06-22 19:54:18.140779 | orchestrator | Sunday 22 June 2025 19:52:48 +0000 (0:00:00.566) 0:02:59.284 *********** 2025-06-22 
19:54:18.140788 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-06-22 19:54:18.140798 | orchestrator | 2025-06-22 19:54:18.140808 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2025-06-22 19:54:18.140817 | orchestrator | Sunday 22 June 2025 19:52:50 +0000 (0:00:01.750) 0:03:01.034 *********** 2025-06-22 19:54:18.140833 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-22 19:54:18.140843 | orchestrator | 2025-06-22 19:54:18.140853 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2025-06-22 19:54:18.140863 | orchestrator | Sunday 22 June 2025 19:52:52 +0000 (0:00:02.137) 0:03:03.171 *********** 2025-06-22 19:54:18.140873 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:54:18.140882 | orchestrator | 2025-06-22 19:54:18.140892 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2025-06-22 19:54:18.140902 | orchestrator | Sunday 22 June 2025 19:52:52 +0000 (0:00:00.232) 0:03:03.404 *********** 2025-06-22 19:54:18.140912 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-22 19:54:18.140921 | orchestrator | 2025-06-22 19:54:18.140931 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2025-06-22 19:54:18.140941 | orchestrator | Sunday 22 June 2025 19:52:54 +0000 (0:00:01.250) 0:03:04.655 *********** 2025-06-22 19:54:18.140951 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:54:18.140960 | orchestrator | 2025-06-22 19:54:18.140970 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2025-06-22 19:54:18.140980 | orchestrator | Sunday 22 June 2025 19:52:54 +0000 (0:00:00.244) 0:03:04.899 *********** 2025-06-22 19:54:18.140989 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:54:18.140999 | orchestrator | 2025-06-22 19:54:18.141009 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2025-06-22 19:54:18.141019 | orchestrator | Sunday 22 June 2025 19:52:54 +0000 (0:00:00.264) 0:03:05.163 *********** 2025-06-22 19:54:18.141028 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:54:18.141038 | orchestrator | 2025-06-22 19:54:18.141048 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2025-06-22 19:54:18.141063 | orchestrator | Sunday 22 June 2025 19:52:54 +0000 (0:00:00.231) 0:03:05.395 *********** 2025-06-22 19:54:18.141073 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:54:18.141083 | orchestrator | 2025-06-22 19:54:18.141092 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2025-06-22 19:54:18.141102 | orchestrator | Sunday 22 June 2025 19:52:55 +0000 (0:00:00.272) 0:03:05.668 *********** 2025-06-22 19:54:18.141112 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-06-22 19:54:18.141121 | orchestrator | 2025-06-22 19:54:18.141131 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2025-06-22 19:54:18.141141 | orchestrator | Sunday 22 June 2025 19:53:00 +0000 (0:00:05.571) 0:03:11.239 *********** 2025-06-22 19:54:18.141151 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2025-06-22 19:54:18.141160 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left). 
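The 47-second "Wait for Cilium resources" step above loops over the listed workloads and blocks until each has finished rolling out, retrying while a resource is not ready (hence the single "FAILED - RETRYING ... (30 retries left)" attempt). A minimal sketch, with the kubeconfig path, namespace, and timing as assumptions rather than the role's actual values:

    # Hypothetical sketch: wait for each Cilium workload to finish rolling out
    # from the machine holding the temporary kubeconfig.
    - name: Wait for Cilium resources
      ansible.builtin.command:
        cmd: kubectl -n kube-system rollout status {{ item }} --timeout=30s  # namespace and timeout assumed
      environment:
        KUBECONFIG: /tmp/k3s/kubeconfig  # assumed path on the delegate host
      loop:
        - deployment/cilium-operator
        - daemonset/cilium
        - deployment/hubble-relay
        - deployment/hubble-ui
      register: rollout
      until: rollout.rc == 0
      retries: 30
      delay: 10
      changed_when: false
      delegate_to: localhost

The loop items match the four resources reported in the log; the per-item "ok"/"FAILED - RETRYING" lines correspond to individual iterations of this loop.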
2025-06-22 19:54:18.141170 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2025-06-22 19:54:18.141180 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2025-06-22 19:54:18.141190 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2025-06-22 19:54:18.141199 | orchestrator | 2025-06-22 19:54:18.141209 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2025-06-22 19:54:18.141219 | orchestrator | Sunday 22 June 2025 19:53:47 +0000 (0:00:47.038) 0:03:58.278 *********** 2025-06-22 19:54:18.141229 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-22 19:54:18.141238 | orchestrator | 2025-06-22 19:54:18.141248 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2025-06-22 19:54:18.141258 | orchestrator | Sunday 22 June 2025 19:53:49 +0000 (0:00:01.225) 0:03:59.503 *********** 2025-06-22 19:54:18.141267 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-06-22 19:54:18.141277 | orchestrator | 2025-06-22 19:54:18.141286 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2025-06-22 19:54:18.141296 | orchestrator | Sunday 22 June 2025 19:53:50 +0000 (0:00:01.553) 0:04:01.056 *********** 2025-06-22 19:54:18.141306 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-06-22 19:54:18.141350 | orchestrator | 2025-06-22 19:54:18.141360 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2025-06-22 19:54:18.141370 | orchestrator | Sunday 22 June 2025 19:53:52 +0000 (0:00:01.419) 0:04:02.476 *********** 2025-06-22 19:54:18.141380 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:54:18.141389 | orchestrator | 2025-06-22 19:54:18.141399 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2025-06-22 19:54:18.141409 | orchestrator | Sunday 22 June 2025 19:53:52 +0000 (0:00:00.204) 0:04:02.681 *********** 2025-06-22 19:54:18.141419 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2025-06-22 19:54:18.141428 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2025-06-22 19:54:18.141438 | orchestrator | 2025-06-22 19:54:18.141448 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2025-06-22 19:54:18.141458 | orchestrator | Sunday 22 June 2025 19:53:54 +0000 (0:00:02.069) 0:04:04.750 *********** 2025-06-22 19:54:18.141468 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:54:18.141477 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:54:18.141487 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:54:18.141497 | orchestrator | 2025-06-22 19:54:18.141506 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2025-06-22 19:54:18.141516 | orchestrator | Sunday 22 June 2025 19:53:54 +0000 (0:00:00.369) 0:04:05.120 *********** 2025-06-22 19:54:18.141526 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:54:18.141535 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:54:18.141549 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:54:18.141559 | orchestrator | 2025-06-22 19:54:18.141569 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2025-06-22 19:54:18.141583 | orchestrator | 2025-06-22 
19:54:18.141592 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2025-06-22 19:54:18.141602 | orchestrator | Sunday 22 June 2025 19:53:55 +0000 (0:00:00.968) 0:04:06.089 *********** 2025-06-22 19:54:18.141612 | orchestrator | ok: [testbed-manager] 2025-06-22 19:54:18.141622 | orchestrator | 2025-06-22 19:54:18.141637 | orchestrator | TASK [k9s : Include distribution specific install tasks] *********************** 2025-06-22 19:54:18.141647 | orchestrator | Sunday 22 June 2025 19:53:55 +0000 (0:00:00.274) 0:04:06.364 *********** 2025-06-22 19:54:18.141657 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2025-06-22 19:54:18.141666 | orchestrator | 2025-06-22 19:54:18.141676 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2025-06-22 19:54:18.141686 | orchestrator | Sunday 22 June 2025 19:53:56 +0000 (0:00:00.221) 0:04:06.585 *********** 2025-06-22 19:54:18.141696 | orchestrator | changed: [testbed-manager] 2025-06-22 19:54:18.141705 | orchestrator | 2025-06-22 19:54:18.141715 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2025-06-22 19:54:18.141725 | orchestrator | 2025-06-22 19:54:18.141734 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2025-06-22 19:54:18.141744 | orchestrator | Sunday 22 June 2025 19:54:01 +0000 (0:00:05.215) 0:04:11.801 *********** 2025-06-22 19:54:18.141752 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:54:18.141760 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:54:18.141768 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:54:18.141776 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:54:18.141783 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:54:18.141791 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:54:18.141799 | orchestrator | 2025-06-22 19:54:18.141807 | orchestrator | TASK [Manage labels] *********************************************************** 2025-06-22 19:54:18.141815 | orchestrator | Sunday 22 June 2025 19:54:02 +0000 (0:00:00.890) 0:04:12.691 *********** 2025-06-22 19:54:18.141823 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-06-22 19:54:18.141831 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-06-22 19:54:18.141839 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-06-22 19:54:18.141847 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-06-22 19:54:18.141855 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-06-22 19:54:18.141863 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-06-22 19:54:18.141870 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-06-22 19:54:18.141878 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-06-22 19:54:18.141886 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-06-22 19:54:18.141894 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2025-06-22 19:54:18.141902 | orchestrator | ok: [testbed-node-0 -> localhost] => 
(item=openstack-control-plane=enabled) 2025-06-22 19:54:18.141910 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2025-06-22 19:54:18.141918 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-06-22 19:54:18.141925 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-06-22 19:54:18.141933 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-06-22 19:54:18.141941 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-06-22 19:54:18.141949 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-06-22 19:54:18.141961 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-06-22 19:54:18.141969 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-06-22 19:54:18.141977 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-06-22 19:54:18.141985 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-06-22 19:54:18.141993 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-06-22 19:54:18.142001 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-06-22 19:54:18.142009 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-06-22 19:54:18.142039 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-06-22 19:54:18.142048 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-06-22 19:54:18.142055 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-06-22 19:54:18.142063 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-06-22 19:54:18.142071 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-06-22 19:54:18.142079 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-06-22 19:54:18.142087 | orchestrator | 2025-06-22 19:54:18.142098 | orchestrator | TASK [Manage annotations] ****************************************************** 2025-06-22 19:54:18.142106 | orchestrator | Sunday 22 June 2025 19:54:16 +0000 (0:00:14.309) 0:04:27.000 *********** 2025-06-22 19:54:18.142114 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:54:18.142122 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:54:18.142130 | orchestrator | skipping: [testbed-node-5] 2025-06-22 19:54:18.142138 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:54:18.142151 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:54:18.142159 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:54:18.142167 | orchestrator | 2025-06-22 19:54:18.142175 | orchestrator | TASK [Manage taints] *********************************************************** 2025-06-22 19:54:18.142183 | orchestrator | Sunday 22 June 2025 19:54:17 +0000 (0:00:00.713) 0:04:27.714 *********** 2025-06-22 19:54:18.142191 | orchestrator | skipping: [testbed-node-3] 2025-06-22 19:54:18.142199 | orchestrator | skipping: [testbed-node-4] 2025-06-22 19:54:18.142206 
| orchestrator | skipping: [testbed-node-5] 2025-06-22 19:54:18.142214 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:54:18.142222 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:54:18.142230 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:54:18.142238 | orchestrator | 2025-06-22 19:54:18.142246 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:54:18.142254 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:54:18.142263 | orchestrator | testbed-node-0 : ok=46  changed=21  unreachable=0 failed=0 skipped=27  rescued=0 ignored=0 2025-06-22 19:54:18.142271 | orchestrator | testbed-node-1 : ok=34  changed=14  unreachable=0 failed=0 skipped=24  rescued=0 ignored=0 2025-06-22 19:54:18.142279 | orchestrator | testbed-node-2 : ok=34  changed=14  unreachable=0 failed=0 skipped=24  rescued=0 ignored=0 2025-06-22 19:54:18.142287 | orchestrator | testbed-node-3 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-06-22 19:54:18.142295 | orchestrator | testbed-node-4 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-06-22 19:54:18.142308 | orchestrator | testbed-node-5 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0 2025-06-22 19:54:18.142326 | orchestrator | 2025-06-22 19:54:18.142335 | orchestrator | 2025-06-22 19:54:18.142343 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:54:18.142350 | orchestrator | Sunday 22 June 2025 19:54:17 +0000 (0:00:00.492) 0:04:28.207 *********** 2025-06-22 19:54:18.142358 | orchestrator | =============================================================================== 2025-06-22 19:54:18.142366 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 55.75s 2025-06-22 19:54:18.142374 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 47.04s 2025-06-22 19:54:18.142382 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 15.66s 2025-06-22 19:54:18.142390 | orchestrator | Manage labels ---------------------------------------------------------- 14.31s 2025-06-22 19:54:18.142398 | orchestrator | kubectl : Install required packages ------------------------------------ 13.65s 2025-06-22 19:54:18.142406 | orchestrator | k3s_agent : Manage k3s service ------------------------------------------ 8.68s 2025-06-22 19:54:18.142414 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 7.82s 2025-06-22 19:54:18.142422 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 7.18s 2025-06-22 19:54:18.142430 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 5.57s 2025-06-22 19:54:18.142437 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.22s 2025-06-22 19:54:18.142446 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.07s 2025-06-22 19:54:18.142454 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.82s 2025-06-22 19:54:18.142461 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.65s 2025-06-22 19:54:18.142469 | orchestrator | 
k3s_download : Download k3s binary armhf -------------------------------- 2.62s 2025-06-22 19:54:18.142477 | orchestrator | k3s_prereq : Add /usr/local/bin to sudo secure_path --------------------- 2.23s 2025-06-22 19:54:18.142485 | orchestrator | k3s_server_post : Wait for connectivity to kube VIP --------------------- 2.14s 2025-06-22 19:54:18.142492 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 2.07s 2025-06-22 19:54:18.142500 | orchestrator | k3s_prereq : Enable IPv6 forwarding ------------------------------------- 1.89s 2025-06-22 19:54:18.142508 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 1.87s 2025-06-22 19:54:18.142516 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 1.85s 2025-06-22 19:54:18.142524 | orchestrator | 2025-06-22 19:54:18 | INFO  | Task 5d805b9a-719c-44ed-bde2-6ef304fdf04a is in state STARTED 2025-06-22 19:54:18.142535 | orchestrator | 2025-06-22 19:54:18 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:54:18.142543 | orchestrator | 2025-06-22 19:54:18 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:54:18.142552 | orchestrator | 2025-06-22 19:54:18 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:54:21.176993 | orchestrator | 2025-06-22 19:54:21 | INFO  | Task b90c0e86-0c9d-4e4d-9f81-a9dd17fe71d6 is in state STARTED 2025-06-22 19:54:21.177200 | orchestrator | 2025-06-22 19:54:21 | INFO  | Task a1ef561e-e92a-4b01-9994-669ad5bf3cd6 is in state STARTED 2025-06-22 19:54:21.177975 | orchestrator | 2025-06-22 19:54:21 | INFO  | Task 6f35f6e4-f8ef-4d0f-a955-ab2373ef9655 is in state STARTED 2025-06-22 19:54:21.178779 | orchestrator | 2025-06-22 19:54:21 | INFO  | Task 5d805b9a-719c-44ed-bde2-6ef304fdf04a is in state STARTED 2025-06-22 19:54:21.181255 | orchestrator | 2025-06-22 19:54:21 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:54:21.182009 | orchestrator | 2025-06-22 19:54:21 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:54:21.182067 | orchestrator | 2025-06-22 19:54:21 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:54:24.218396 | orchestrator | 2025-06-22 19:54:24 | INFO  | Task b90c0e86-0c9d-4e4d-9f81-a9dd17fe71d6 is in state STARTED 2025-06-22 19:54:24.218490 | orchestrator | 2025-06-22 19:54:24 | INFO  | Task a1ef561e-e92a-4b01-9994-669ad5bf3cd6 is in state STARTED 2025-06-22 19:54:24.219361 | orchestrator | 2025-06-22 19:54:24 | INFO  | Task 6f35f6e4-f8ef-4d0f-a955-ab2373ef9655 is in state STARTED 2025-06-22 19:54:24.220752 | orchestrator | 2025-06-22 19:54:24 | INFO  | Task 5d805b9a-719c-44ed-bde2-6ef304fdf04a is in state STARTED 2025-06-22 19:54:24.223555 | orchestrator | 2025-06-22 19:54:24 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:54:24.225100 | orchestrator | 2025-06-22 19:54:24 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:54:24.225157 | orchestrator | 2025-06-22 19:54:24 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:54:27.271234 | orchestrator | 2025-06-22 19:54:27 | INFO  | Task b90c0e86-0c9d-4e4d-9f81-a9dd17fe71d6 is in state STARTED 2025-06-22 19:54:27.271345 | orchestrator | 2025-06-22 19:54:27 | INFO  | Task a1ef561e-e92a-4b01-9994-669ad5bf3cd6 is in state SUCCESS 2025-06-22 19:54:27.275224 | orchestrator | 2025-06-22 19:54:27 | INFO  | Task 
6f35f6e4-f8ef-4d0f-a955-ab2373ef9655 is in state STARTED 2025-06-22 19:54:27.276966 | orchestrator | 2025-06-22 19:54:27 | INFO  | Task 5d805b9a-719c-44ed-bde2-6ef304fdf04a is in state STARTED 2025-06-22 19:54:27.283019 | orchestrator | 2025-06-22 19:54:27 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:54:27.284775 | orchestrator | 2025-06-22 19:54:27 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:54:27.285351 | orchestrator | 2025-06-22 19:54:27 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:54:30.319057 | orchestrator | 2025-06-22 19:54:30 | INFO  | Task b90c0e86-0c9d-4e4d-9f81-a9dd17fe71d6 is in state SUCCESS 2025-06-22 19:54:30.322216 | orchestrator | 2025-06-22 19:54:30 | INFO  | Task 6f35f6e4-f8ef-4d0f-a955-ab2373ef9655 is in state STARTED 2025-06-22 19:54:30.324654 | orchestrator | 2025-06-22 19:54:30 | INFO  | Task 5d805b9a-719c-44ed-bde2-6ef304fdf04a is in state STARTED 2025-06-22 19:54:30.327264 | orchestrator | 2025-06-22 19:54:30 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:54:30.329137 | orchestrator | 2025-06-22 19:54:30 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:54:30.329436 | orchestrator | 2025-06-22 19:54:30 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:54:33.366716 | orchestrator | 2025-06-22 19:54:33 | INFO  | Task 6f35f6e4-f8ef-4d0f-a955-ab2373ef9655 is in state STARTED 2025-06-22 19:54:33.367048 | orchestrator | 2025-06-22 19:54:33 | INFO  | Task 5d805b9a-719c-44ed-bde2-6ef304fdf04a is in state STARTED 2025-06-22 19:54:33.367981 | orchestrator | 2025-06-22 19:54:33 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:54:33.368638 | orchestrator | 2025-06-22 19:54:33 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:54:33.368655 | orchestrator | 2025-06-22 19:54:33 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:54:36.420890 | orchestrator | 2025-06-22 19:54:36 | INFO  | Task 6f35f6e4-f8ef-4d0f-a955-ab2373ef9655 is in state STARTED 2025-06-22 19:54:36.425587 | orchestrator | 2025-06-22 19:54:36 | INFO  | Task 5d805b9a-719c-44ed-bde2-6ef304fdf04a is in state STARTED 2025-06-22 19:54:36.426335 | orchestrator | 2025-06-22 19:54:36 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:54:36.428782 | orchestrator | 2025-06-22 19:54:36 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:54:36.429106 | orchestrator | 2025-06-22 19:54:36 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:54:39.471294 | orchestrator | 2025-06-22 19:54:39 | INFO  | Task 6f35f6e4-f8ef-4d0f-a955-ab2373ef9655 is in state STARTED 2025-06-22 19:54:39.472344 | orchestrator | 2025-06-22 19:54:39 | INFO  | Task 5d805b9a-719c-44ed-bde2-6ef304fdf04a is in state STARTED 2025-06-22 19:54:39.476211 | orchestrator | 2025-06-22 19:54:39 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:54:39.479854 | orchestrator | 2025-06-22 19:54:39 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:54:39.479895 | orchestrator | 2025-06-22 19:54:39 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:54:42.530244 | orchestrator | 2025-06-22 19:54:42 | INFO  | Task 6f35f6e4-f8ef-4d0f-a955-ab2373ef9655 is in state STARTED 2025-06-22 19:54:42.532084 | orchestrator | 2025-06-22 19:54:42 | INFO  | Task 
5d805b9a-719c-44ed-bde2-6ef304fdf04a is in state STARTED 2025-06-22 19:54:42.534004 | orchestrator | 2025-06-22 19:54:42 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:54:42.535544 | orchestrator | 2025-06-22 19:54:42 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:54:42.535568 | orchestrator | 2025-06-22 19:54:42 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:54:45.578261 | orchestrator | 2025-06-22 19:54:45 | INFO  | Task 6f35f6e4-f8ef-4d0f-a955-ab2373ef9655 is in state STARTED 2025-06-22 19:54:45.578399 | orchestrator | 2025-06-22 19:54:45 | INFO  | Task 5d805b9a-719c-44ed-bde2-6ef304fdf04a is in state STARTED 2025-06-22 19:54:45.580035 | orchestrator | 2025-06-22 19:54:45 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:54:45.581420 | orchestrator | 2025-06-22 19:54:45 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:54:45.581689 | orchestrator | 2025-06-22 19:54:45 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:54:48.623896 | orchestrator | 2025-06-22 19:54:48 | INFO  | Task 6f35f6e4-f8ef-4d0f-a955-ab2373ef9655 is in state STARTED 2025-06-22 19:54:48.626391 | orchestrator | 2025-06-22 19:54:48 | INFO  | Task 5d805b9a-719c-44ed-bde2-6ef304fdf04a is in state STARTED 2025-06-22 19:54:48.628763 | orchestrator | 2025-06-22 19:54:48 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:54:48.629299 | orchestrator | 2025-06-22 19:54:48 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:54:48.629354 | orchestrator | 2025-06-22 19:54:48 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:54:51.683629 | orchestrator | 2025-06-22 19:54:51 | INFO  | Task 6f35f6e4-f8ef-4d0f-a955-ab2373ef9655 is in state STARTED 2025-06-22 19:54:51.685617 | orchestrator | 2025-06-22 19:54:51 | INFO  | Task 5d805b9a-719c-44ed-bde2-6ef304fdf04a is in state STARTED 2025-06-22 19:54:51.689465 | orchestrator | 2025-06-22 19:54:51 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:54:51.691861 | orchestrator | 2025-06-22 19:54:51 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:54:51.692207 | orchestrator | 2025-06-22 19:54:51 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:54:54.725429 | orchestrator | 2025-06-22 19:54:54 | INFO  | Task 6f35f6e4-f8ef-4d0f-a955-ab2373ef9655 is in state STARTED 2025-06-22 19:54:54.726700 | orchestrator | 2025-06-22 19:54:54 | INFO  | Task 5d805b9a-719c-44ed-bde2-6ef304fdf04a is in state STARTED 2025-06-22 19:54:54.729090 | orchestrator | 2025-06-22 19:54:54 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:54:54.730628 | orchestrator | 2025-06-22 19:54:54 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:54:54.730865 | orchestrator | 2025-06-22 19:54:54 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:54:57.793111 | orchestrator | 2025-06-22 19:54:57 | INFO  | Task 6f35f6e4-f8ef-4d0f-a955-ab2373ef9655 is in state STARTED 2025-06-22 19:54:57.793382 | orchestrator | 2025-06-22 19:54:57 | INFO  | Task 5d805b9a-719c-44ed-bde2-6ef304fdf04a is in state STARTED 2025-06-22 19:54:57.794101 | orchestrator | 2025-06-22 19:54:57 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:54:57.794945 | orchestrator | 2025-06-22 19:54:57 | INFO  | Task 
1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:54:57.794972 | orchestrator | 2025-06-22 19:54:57 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:55:00.835696 | orchestrator | 2025-06-22 19:55:00 | INFO  | Task 6f35f6e4-f8ef-4d0f-a955-ab2373ef9655 is in state STARTED 2025-06-22 19:55:00.837895 | orchestrator | 2025-06-22 19:55:00 | INFO  | Task 5d805b9a-719c-44ed-bde2-6ef304fdf04a is in state STARTED 2025-06-22 19:55:00.840381 | orchestrator | 2025-06-22 19:55:00 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:55:00.842819 | orchestrator | 2025-06-22 19:55:00 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:55:00.842965 | orchestrator | 2025-06-22 19:55:00 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:55:03.893777 | orchestrator | 2025-06-22 19:55:03 | INFO  | Task 6f35f6e4-f8ef-4d0f-a955-ab2373ef9655 is in state STARTED 2025-06-22 19:55:03.895211 | orchestrator | 2025-06-22 19:55:03 | INFO  | Task 5d805b9a-719c-44ed-bde2-6ef304fdf04a is in state STARTED 2025-06-22 19:55:03.895846 | orchestrator | 2025-06-22 19:55:03 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:55:03.897688 | orchestrator | 2025-06-22 19:55:03 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:55:03.897709 | orchestrator | 2025-06-22 19:55:03 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:55:06.939826 | orchestrator | 2025-06-22 19:55:06 | INFO  | Task 6f35f6e4-f8ef-4d0f-a955-ab2373ef9655 is in state STARTED 2025-06-22 19:55:06.942618 | orchestrator | 2025-06-22 19:55:06 | INFO  | Task 5d805b9a-719c-44ed-bde2-6ef304fdf04a is in state STARTED 2025-06-22 19:55:06.943240 | orchestrator | 2025-06-22 19:55:06 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:55:06.944449 | orchestrator | 2025-06-22 19:55:06 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:55:06.944479 | orchestrator | 2025-06-22 19:55:06 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:55:09.992382 | orchestrator | 2025-06-22 19:55:09 | INFO  | Task 6f35f6e4-f8ef-4d0f-a955-ab2373ef9655 is in state STARTED 2025-06-22 19:55:09.993766 | orchestrator | 2025-06-22 19:55:09 | INFO  | Task 5d805b9a-719c-44ed-bde2-6ef304fdf04a is in state STARTED 2025-06-22 19:55:09.996264 | orchestrator | 2025-06-22 19:55:09 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:55:09.997885 | orchestrator | 2025-06-22 19:55:09 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:55:09.997914 | orchestrator | 2025-06-22 19:55:09 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:55:13.040294 | orchestrator | 2025-06-22 19:55:13 | INFO  | Task 6f35f6e4-f8ef-4d0f-a955-ab2373ef9655 is in state STARTED 2025-06-22 19:55:13.042204 | orchestrator | 2025-06-22 19:55:13 | INFO  | Task 5d805b9a-719c-44ed-bde2-6ef304fdf04a is in state STARTED 2025-06-22 19:55:13.043454 | orchestrator | 2025-06-22 19:55:13 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:55:13.045622 | orchestrator | 2025-06-22 19:55:13 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:55:13.045690 | orchestrator | 2025-06-22 19:55:13 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:55:16.091737 | orchestrator | 2025-06-22 19:55:16 | INFO  | Task 
6f35f6e4-f8ef-4d0f-a955-ab2373ef9655 is in state STARTED 2025-06-22 19:55:16.094153 | orchestrator | 2025-06-22 19:55:16 | INFO  | Task 5d805b9a-719c-44ed-bde2-6ef304fdf04a is in state STARTED 2025-06-22 19:55:16.095612 | orchestrator | 2025-06-22 19:55:16 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:55:16.097299 | orchestrator | 2025-06-22 19:55:16 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:55:16.097926 | orchestrator | 2025-06-22 19:55:16 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:55:19.126602 | orchestrator | 2025-06-22 19:55:19 | INFO  | Task 6f35f6e4-f8ef-4d0f-a955-ab2373ef9655 is in state STARTED 2025-06-22 19:55:19.126802 | orchestrator | 2025-06-22 19:55:19 | INFO  | Task 5d805b9a-719c-44ed-bde2-6ef304fdf04a is in state STARTED 2025-06-22 19:55:19.127452 | orchestrator | 2025-06-22 19:55:19 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:55:19.128086 | orchestrator | 2025-06-22 19:55:19 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:55:19.128109 | orchestrator | 2025-06-22 19:55:19 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:55:22.167695 | orchestrator | 2025-06-22 19:55:22 | INFO  | Task 6f35f6e4-f8ef-4d0f-a955-ab2373ef9655 is in state STARTED 2025-06-22 19:55:22.168503 | orchestrator | 2025-06-22 19:55:22 | INFO  | Task 5d805b9a-719c-44ed-bde2-6ef304fdf04a is in state STARTED 2025-06-22 19:55:22.168989 | orchestrator | 2025-06-22 19:55:22 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:55:22.170174 | orchestrator | 2025-06-22 19:55:22 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:55:22.170202 | orchestrator | 2025-06-22 19:55:22 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:55:25.223114 | orchestrator | 2025-06-22 19:55:25 | INFO  | Task 6f35f6e4-f8ef-4d0f-a955-ab2373ef9655 is in state STARTED 2025-06-22 19:55:25.223446 | orchestrator | 2025-06-22 19:55:25 | INFO  | Task 5d805b9a-719c-44ed-bde2-6ef304fdf04a is in state STARTED 2025-06-22 19:55:25.226535 | orchestrator | 2025-06-22 19:55:25 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:55:25.228589 | orchestrator | 2025-06-22 19:55:25 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:55:25.228619 | orchestrator | 2025-06-22 19:55:25 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:55:28.279231 | orchestrator | 2025-06-22 19:55:28 | INFO  | Task 6f35f6e4-f8ef-4d0f-a955-ab2373ef9655 is in state STARTED 2025-06-22 19:55:28.279412 | orchestrator | 2025-06-22 19:55:28 | INFO  | Task 5d805b9a-719c-44ed-bde2-6ef304fdf04a is in state STARTED 2025-06-22 19:55:28.280820 | orchestrator | 2025-06-22 19:55:28 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:55:28.282742 | orchestrator | 2025-06-22 19:55:28 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:55:28.282787 | orchestrator | 2025-06-22 19:55:28 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:55:31.333132 | orchestrator | 2025-06-22 19:55:31 | INFO  | Task 6f35f6e4-f8ef-4d0f-a955-ab2373ef9655 is in state STARTED 2025-06-22 19:55:31.334467 | orchestrator | 2025-06-22 19:55:31 | INFO  | Task 5d805b9a-719c-44ed-bde2-6ef304fdf04a is in state STARTED 2025-06-22 19:55:31.337861 | orchestrator | 2025-06-22 19:55:31 | INFO  | Task 
25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:55:31.338564 | orchestrator | 2025-06-22 19:55:31 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:55:31.338991 | orchestrator | 2025-06-22 19:55:31 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:55:34.376378 | orchestrator | 2025-06-22 19:55:34 | INFO  | Task 6f35f6e4-f8ef-4d0f-a955-ab2373ef9655 is in state STARTED 2025-06-22 19:55:34.376880 | orchestrator | 2025-06-22 19:55:34 | INFO  | Task 5d805b9a-719c-44ed-bde2-6ef304fdf04a is in state STARTED 2025-06-22 19:55:34.377734 | orchestrator | 2025-06-22 19:55:34 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:55:34.379176 | orchestrator | 2025-06-22 19:55:34 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:55:34.379207 | orchestrator | 2025-06-22 19:55:34 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:55:37.411954 | orchestrator | 2025-06-22 19:55:37 | INFO  | Task 6f35f6e4-f8ef-4d0f-a955-ab2373ef9655 is in state SUCCESS 2025-06-22 19:55:37.413236 | orchestrator | 2025-06-22 19:55:37.413278 | orchestrator | 2025-06-22 19:55:37.413291 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2025-06-22 19:55:37.413302 | orchestrator | 2025-06-22 19:55:37.413347 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-06-22 19:55:37.413359 | orchestrator | Sunday 22 June 2025 19:54:21 +0000 (0:00:00.179) 0:00:00.179 *********** 2025-06-22 19:55:37.413371 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-06-22 19:55:37.413382 | orchestrator | 2025-06-22 19:55:37.413392 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-06-22 19:55:37.413403 | orchestrator | Sunday 22 June 2025 19:54:22 +0000 (0:00:00.813) 0:00:00.993 *********** 2025-06-22 19:55:37.413414 | orchestrator | changed: [testbed-manager] 2025-06-22 19:55:37.413426 | orchestrator | 2025-06-22 19:55:37.413437 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2025-06-22 19:55:37.413447 | orchestrator | Sunday 22 June 2025 19:54:23 +0000 (0:00:01.129) 0:00:02.122 *********** 2025-06-22 19:55:37.413458 | orchestrator | changed: [testbed-manager] 2025-06-22 19:55:37.413469 | orchestrator | 2025-06-22 19:55:37.413487 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:55:37.413499 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:55:37.413511 | orchestrator | 2025-06-22 19:55:37.413522 | orchestrator | 2025-06-22 19:55:37.413533 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:55:37.413544 | orchestrator | Sunday 22 June 2025 19:54:24 +0000 (0:00:00.475) 0:00:02.598 *********** 2025-06-22 19:55:37.413555 | orchestrator | =============================================================================== 2025-06-22 19:55:37.413583 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.13s 2025-06-22 19:55:37.413595 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.81s 2025-06-22 19:55:37.413605 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.48s 2025-06-22 
19:55:37.413616 | orchestrator | 2025-06-22 19:55:37.413627 | orchestrator | 2025-06-22 19:55:37.413638 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-06-22 19:55:37.413649 | orchestrator | 2025-06-22 19:55:37.413660 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-06-22 19:55:37.413670 | orchestrator | Sunday 22 June 2025 19:54:22 +0000 (0:00:00.226) 0:00:00.226 *********** 2025-06-22 19:55:37.413681 | orchestrator | ok: [testbed-manager] 2025-06-22 19:55:37.413692 | orchestrator | 2025-06-22 19:55:37.413703 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-06-22 19:55:37.413714 | orchestrator | Sunday 22 June 2025 19:54:22 +0000 (0:00:00.666) 0:00:00.893 *********** 2025-06-22 19:55:37.413725 | orchestrator | ok: [testbed-manager] 2025-06-22 19:55:37.413736 | orchestrator | 2025-06-22 19:55:37.413747 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-06-22 19:55:37.413758 | orchestrator | Sunday 22 June 2025 19:54:23 +0000 (0:00:00.516) 0:00:01.409 *********** 2025-06-22 19:55:37.413769 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-06-22 19:55:37.413779 | orchestrator | 2025-06-22 19:55:37.413791 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-06-22 19:55:37.413801 | orchestrator | Sunday 22 June 2025 19:54:23 +0000 (0:00:00.686) 0:00:02.096 *********** 2025-06-22 19:55:37.413812 | orchestrator | changed: [testbed-manager] 2025-06-22 19:55:37.413823 | orchestrator | 2025-06-22 19:55:37.413836 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-06-22 19:55:37.413848 | orchestrator | Sunday 22 June 2025 19:54:25 +0000 (0:00:01.189) 0:00:03.286 *********** 2025-06-22 19:55:37.413860 | orchestrator | changed: [testbed-manager] 2025-06-22 19:55:37.413872 | orchestrator | 2025-06-22 19:55:37.413884 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-06-22 19:55:37.413897 | orchestrator | Sunday 22 June 2025 19:54:25 +0000 (0:00:00.793) 0:00:04.079 *********** 2025-06-22 19:55:37.413909 | orchestrator | changed: [testbed-manager -> localhost] 2025-06-22 19:55:37.413921 | orchestrator | 2025-06-22 19:55:37.413933 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-06-22 19:55:37.413946 | orchestrator | Sunday 22 June 2025 19:54:27 +0000 (0:00:01.515) 0:00:05.594 *********** 2025-06-22 19:55:37.413958 | orchestrator | changed: [testbed-manager -> localhost] 2025-06-22 19:55:37.413971 | orchestrator | 2025-06-22 19:55:37.413983 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-06-22 19:55:37.413995 | orchestrator | Sunday 22 June 2025 19:54:28 +0000 (0:00:00.881) 0:00:06.476 *********** 2025-06-22 19:55:37.414007 | orchestrator | ok: [testbed-manager] 2025-06-22 19:55:37.414090 | orchestrator | 2025-06-22 19:55:37.414107 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-06-22 19:55:37.414120 | orchestrator | Sunday 22 June 2025 19:54:28 +0000 (0:00:00.463) 0:00:06.939 *********** 2025-06-22 19:55:37.414132 | orchestrator | ok: [testbed-manager] 2025-06-22 19:55:37.414144 | orchestrator | 2025-06-22 19:55:37.414157 | orchestrator | PLAY RECAP 
********************************************************************* 2025-06-22 19:55:37.414169 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:55:37.414181 | orchestrator | 2025-06-22 19:55:37.414193 | orchestrator | 2025-06-22 19:55:37.414204 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:55:37.414215 | orchestrator | Sunday 22 June 2025 19:54:29 +0000 (0:00:00.310) 0:00:07.250 *********** 2025-06-22 19:55:37.414226 | orchestrator | =============================================================================== 2025-06-22 19:55:37.414244 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.52s 2025-06-22 19:55:37.414255 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.19s 2025-06-22 19:55:37.414266 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.88s 2025-06-22 19:55:37.414299 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.79s 2025-06-22 19:55:37.414338 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.69s 2025-06-22 19:55:37.414357 | orchestrator | Get home directory of operator user ------------------------------------- 0.67s 2025-06-22 19:55:37.414376 | orchestrator | Create .kube directory -------------------------------------------------- 0.52s 2025-06-22 19:55:37.414393 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.46s 2025-06-22 19:55:37.414404 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.31s 2025-06-22 19:55:37.414415 | orchestrator | 2025-06-22 19:55:37.414425 | orchestrator | 2025-06-22 19:55:37.414436 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2025-06-22 19:55:37.414447 | orchestrator | 2025-06-22 19:55:37.414458 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-06-22 19:55:37.414469 | orchestrator | Sunday 22 June 2025 19:53:10 +0000 (0:00:00.106) 0:00:00.106 *********** 2025-06-22 19:55:37.414479 | orchestrator | ok: [localhost] => { 2025-06-22 19:55:37.414498 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 2025-06-22 19:55:37.414509 | orchestrator | } 2025-06-22 19:55:37.414521 | orchestrator | 2025-06-22 19:55:37.414584 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2025-06-22 19:55:37.414597 | orchestrator | Sunday 22 June 2025 19:53:10 +0000 (0:00:00.064) 0:00:00.171 *********** 2025-06-22 19:55:37.414610 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2025-06-22 19:55:37.414622 | orchestrator | ...ignoring 2025-06-22 19:55:37.414633 | orchestrator | 2025-06-22 19:55:37.414644 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2025-06-22 19:55:37.414655 | orchestrator | Sunday 22 June 2025 19:53:14 +0000 (0:00:03.300) 0:00:03.472 *********** 2025-06-22 19:55:37.414666 | orchestrator | skipping: [localhost] 2025-06-22 19:55:37.414676 | orchestrator | 2025-06-22 19:55:37.414688 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2025-06-22 19:55:37.414701 | orchestrator | Sunday 22 June 2025 19:53:14 +0000 (0:00:00.069) 0:00:03.541 *********** 2025-06-22 19:55:37.414721 | orchestrator | ok: [localhost] 2025-06-22 19:55:37.414740 | orchestrator | 2025-06-22 19:55:37.414759 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-22 19:55:37.414778 | orchestrator | 2025-06-22 19:55:37.414797 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-22 19:55:37.414808 | orchestrator | Sunday 22 June 2025 19:53:14 +0000 (0:00:00.241) 0:00:03.783 *********** 2025-06-22 19:55:37.414819 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:55:37.414829 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:55:37.414840 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:55:37.414851 | orchestrator | 2025-06-22 19:55:37.414862 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-22 19:55:37.414873 | orchestrator | Sunday 22 June 2025 19:53:14 +0000 (0:00:00.460) 0:00:04.244 *********** 2025-06-22 19:55:37.414883 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2025-06-22 19:55:37.414895 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2025-06-22 19:55:37.414905 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2025-06-22 19:55:37.414917 | orchestrator | 2025-06-22 19:55:37.414927 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2025-06-22 19:55:37.414938 | orchestrator | 2025-06-22 19:55:37.414949 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-06-22 19:55:37.414969 | orchestrator | Sunday 22 June 2025 19:53:15 +0000 (0:00:00.931) 0:00:05.175 *********** 2025-06-22 19:55:37.414980 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:55:37.414991 | orchestrator | 2025-06-22 19:55:37.415002 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-06-22 19:55:37.415013 | orchestrator | Sunday 22 June 2025 19:53:16 +0000 (0:00:00.866) 0:00:06.041 *********** 2025-06-22 19:55:37.415023 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:55:37.415034 | orchestrator | 2025-06-22 19:55:37.415045 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2025-06-22 19:55:37.415056 | orchestrator | Sunday 22 June 2025 19:53:17 +0000 (0:00:01.028) 0:00:07.070 *********** 2025-06-22 19:55:37.415067 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:55:37.415078 | orchestrator | 2025-06-22 19:55:37.415088 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] 
************************************* 2025-06-22 19:55:37.415099 | orchestrator | Sunday 22 June 2025 19:53:18 +0000 (0:00:00.381) 0:00:07.451 *********** 2025-06-22 19:55:37.415110 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:55:37.415121 | orchestrator | 2025-06-22 19:55:37.415132 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2025-06-22 19:55:37.415143 | orchestrator | Sunday 22 June 2025 19:53:18 +0000 (0:00:00.402) 0:00:07.854 *********** 2025-06-22 19:55:37.415153 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:55:37.415164 | orchestrator | 2025-06-22 19:55:37.415175 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2025-06-22 19:55:37.415186 | orchestrator | Sunday 22 June 2025 19:53:18 +0000 (0:00:00.442) 0:00:08.296 *********** 2025-06-22 19:55:37.415197 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:55:37.415208 | orchestrator | 2025-06-22 19:55:37.415219 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-06-22 19:55:37.415230 | orchestrator | Sunday 22 June 2025 19:53:19 +0000 (0:00:00.683) 0:00:08.980 *********** 2025-06-22 19:55:37.415240 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:55:37.415251 | orchestrator | 2025-06-22 19:55:37.415262 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-06-22 19:55:37.415283 | orchestrator | Sunday 22 June 2025 19:53:20 +0000 (0:00:00.995) 0:00:09.975 *********** 2025-06-22 19:55:37.415294 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:55:37.415304 | orchestrator | 2025-06-22 19:55:37.415333 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2025-06-22 19:55:37.415344 | orchestrator | Sunday 22 June 2025 19:53:21 +0000 (0:00:00.751) 0:00:10.728 *********** 2025-06-22 19:55:37.415355 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:55:37.415366 | orchestrator | 2025-06-22 19:55:37.415377 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2025-06-22 19:55:37.415388 | orchestrator | Sunday 22 June 2025 19:53:21 +0000 (0:00:00.460) 0:00:11.188 *********** 2025-06-22 19:55:37.415399 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:55:37.415410 | orchestrator | 2025-06-22 19:55:37.415421 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2025-06-22 19:55:37.415432 | orchestrator | Sunday 22 June 2025 19:53:22 +0000 (0:00:00.492) 0:00:11.680 *********** 2025-06-22 19:55:37.415454 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-22 19:55:37.415478 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-22 19:55:37.415491 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-22 19:55:37.415503 | orchestrator | 2025-06-22 19:55:37.415515 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2025-06-22 19:55:37.415526 | orchestrator | Sunday 22 June 2025 19:53:23 +0000 (0:00:01.085) 0:00:12.765 *********** 2025-06-22 19:55:37.415551 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-22 19:55:37.415565 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-22 19:55:37.415584 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-22 19:55:37.415596 | orchestrator | 2025-06-22 19:55:37.415608 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2025-06-22 19:55:37.415620 | orchestrator | Sunday 22 June 2025 19:53:26 +0000 (0:00:02.884) 0:00:15.650 *********** 2025-06-22 19:55:37.415640 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-06-22 19:55:37.415659 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-06-22 19:55:37.415678 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-06-22 19:55:37.415698 | orchestrator | 2025-06-22 19:55:37.415717 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2025-06-22 19:55:37.415734 | orchestrator | Sunday 22 June 2025 19:53:29 +0000 (0:00:03.077) 0:00:18.727 *********** 
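The rabbitmq role stages its configuration by rendering a Jinja2 template per host for each file (config.json, rabbitmq-env.conf, rabbitmq.conf, erl_inetrc, advanced.config, definitions.json, enabled_plugins) before any container is started. The minimal Python sketch below only illustrates that render-and-write step; kolla-ansible actually does this with Ansible template tasks, and the template path and variable names used here are hypothetical.

from pathlib import Path
from jinja2 import Template

HOST_VARS = {
    # Hypothetical per-host values; in the real deployment these come from
    # the Ansible inventory and kolla-ansible group/host variables.
    "rabbitmq_cluster_cookie": "example-cookie",
    "rabbitmq_log_dir": "/var/log/kolla/rabbitmq",
    "api_interface_address": "192.168.16.10",
}

def render_config(template_path: str, out_path: str, host_vars: dict) -> None:
    # Render a Jinja2 template with host-specific variables and write the result.
    template = Template(Path(template_path).read_text())
    Path(out_path).write_text(template.render(**host_vars) + "\n")

if __name__ == "__main__":
    render_config("rabbitmq-env.conf.j2", "rabbitmq-env.conf", HOST_VARS)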
2025-06-22 19:55:37.415745 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-06-22 19:55:37.415756 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-06-22 19:55:37.415767 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-06-22 19:55:37.415814 | orchestrator | 2025-06-22 19:55:37.415833 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2025-06-22 19:55:37.415844 | orchestrator | Sunday 22 June 2025 19:53:31 +0000 (0:00:02.408) 0:00:21.136 *********** 2025-06-22 19:55:37.415855 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-06-22 19:55:37.415866 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-06-22 19:55:37.415877 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-06-22 19:55:37.415888 | orchestrator | 2025-06-22 19:55:37.415911 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2025-06-22 19:55:37.415931 | orchestrator | Sunday 22 June 2025 19:53:33 +0000 (0:00:01.241) 0:00:22.377 *********** 2025-06-22 19:55:37.415947 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-06-22 19:55:37.415959 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-06-22 19:55:37.415970 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-06-22 19:55:37.415981 | orchestrator | 2025-06-22 19:55:37.415992 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2025-06-22 19:55:37.416002 | orchestrator | Sunday 22 June 2025 19:53:34 +0000 (0:00:01.757) 0:00:24.135 *********** 2025-06-22 19:55:37.416013 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-06-22 19:55:37.416024 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-06-22 19:55:37.416035 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-06-22 19:55:37.416046 | orchestrator | 2025-06-22 19:55:37.416057 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2025-06-22 19:55:37.416068 | orchestrator | Sunday 22 June 2025 19:53:36 +0000 (0:00:01.748) 0:00:25.884 *********** 2025-06-22 19:55:37.416079 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-06-22 19:55:37.416090 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-06-22 19:55:37.416101 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-06-22 19:55:37.416112 | orchestrator | 2025-06-22 19:55:37.416122 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-06-22 19:55:37.416194 | orchestrator | Sunday 22 June 2025 19:53:38 +0000 (0:00:01.513) 0:00:27.398 *********** 2025-06-22 19:55:37.416216 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:55:37.416227 | orchestrator | skipping: [testbed-node-1] 
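The "Check rabbitmq containers" task that follows compares the desired container definition (image, volumes, environment, healthcheck) with what is currently running and notifies the restart handler only when they differ. kolla-ansible implements this with its own kolla_container modules; the sketch below, which assumes the docker Python SDK is installed and compares only the image tag, is just a rough illustration of that idea.

import docker
from docker.errors import NotFound

# Desired image as seen in the task output above.
DESIRED_IMAGE = "registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530"

def container_needs_update(name: str = "rabbitmq") -> bool:
    client = docker.from_env()
    try:
        container = client.containers.get(name)
    except NotFound:
        return True  # not deployed yet, so it has to be (re)created
    # A real check also compares volumes, environment and healthcheck;
    # only the image tag is inspected here.
    return DESIRED_IMAGE not in (container.image.tags or [])

if __name__ == "__main__":
    print("restart required:", container_needs_update())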
2025-06-22 19:55:37.416238 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:55:37.416249 | orchestrator | 2025-06-22 19:55:37.416260 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2025-06-22 19:55:37.416270 | orchestrator | Sunday 22 June 2025 19:53:38 +0000 (0:00:00.588) 0:00:27.986 *********** 2025-06-22 19:55:37.416283 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-22 19:55:37.416307 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-22 19:55:37.416374 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 
'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-22 19:55:37.416388 | orchestrator | 2025-06-22 19:55:37.416399 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2025-06-22 19:55:37.416410 | orchestrator | Sunday 22 June 2025 19:53:40 +0000 (0:00:01.616) 0:00:29.602 *********** 2025-06-22 19:55:37.416421 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:55:37.416432 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:55:37.416443 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:55:37.416454 | orchestrator | 2025-06-22 19:55:37.416464 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2025-06-22 19:55:37.416475 | orchestrator | Sunday 22 June 2025 19:53:41 +0000 (0:00:01.034) 0:00:30.637 *********** 2025-06-22 19:55:37.416486 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:55:37.416497 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:55:37.416508 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:55:37.416518 | orchestrator | 2025-06-22 19:55:37.416529 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2025-06-22 19:55:37.416541 | orchestrator | Sunday 22 June 2025 19:53:53 +0000 (0:00:12.399) 0:00:43.037 *********** 2025-06-22 19:55:37.416551 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:55:37.416562 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:55:37.416573 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:55:37.416584 | orchestrator | 2025-06-22 19:55:37.416595 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-06-22 19:55:37.416606 | orchestrator | 2025-06-22 19:55:37.416617 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-06-22 19:55:37.416628 | orchestrator | Sunday 22 June 2025 19:53:54 +0000 (0:00:00.971) 0:00:44.008 *********** 2025-06-22 19:55:37.416639 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:55:37.416650 | orchestrator | 2025-06-22 19:55:37.416660 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-06-22 19:55:37.416671 | orchestrator | Sunday 22 June 2025 19:53:55 +0000 (0:00:00.751) 0:00:44.760 *********** 2025-06-22 19:55:37.416682 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:55:37.416693 | orchestrator | 2025-06-22 19:55:37.416704 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-06-22 19:55:37.416715 | orchestrator | Sunday 22 June 2025 19:53:55 +0000 (0:00:00.210) 0:00:44.971 *********** 2025-06-22 19:55:37.416733 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:55:37.416743 | orchestrator | 2025-06-22 19:55:37.416754 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-06-22 19:55:37.416765 | orchestrator | Sunday 22 June 2025 19:53:57 +0000 (0:00:01.837) 0:00:46.808 *********** 2025-06-22 19:55:37.416776 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:55:37.416787 | orchestrator | 2025-06-22 19:55:37.416798 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-06-22 19:55:37.416809 | orchestrator | 2025-06-22 19:55:37.416820 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-06-22 
19:55:37.416832 | orchestrator | Sunday 22 June 2025 19:54:53 +0000 (0:00:56.094) 0:01:42.902 *********** 2025-06-22 19:55:37.416842 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:55:37.416853 | orchestrator | 2025-06-22 19:55:37.416864 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-06-22 19:55:37.416875 | orchestrator | Sunday 22 June 2025 19:54:54 +0000 (0:00:00.670) 0:01:43.573 *********** 2025-06-22 19:55:37.416886 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:55:37.416904 | orchestrator | 2025-06-22 19:55:37.416923 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-06-22 19:55:37.416942 | orchestrator | Sunday 22 June 2025 19:54:54 +0000 (0:00:00.437) 0:01:44.011 *********** 2025-06-22 19:55:37.416960 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:55:37.416980 | orchestrator | 2025-06-22 19:55:37.416999 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-06-22 19:55:37.417016 | orchestrator | Sunday 22 June 2025 19:54:56 +0000 (0:00:01.864) 0:01:45.876 *********** 2025-06-22 19:55:37.417028 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:55:37.417038 | orchestrator | 2025-06-22 19:55:37.417049 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-06-22 19:55:37.417097 | orchestrator | 2025-06-22 19:55:37.417108 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-06-22 19:55:37.417128 | orchestrator | Sunday 22 June 2025 19:55:11 +0000 (0:00:14.807) 0:02:00.683 *********** 2025-06-22 19:55:37.417139 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:55:37.417151 | orchestrator | 2025-06-22 19:55:37.417162 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-06-22 19:55:37.417173 | orchestrator | Sunday 22 June 2025 19:55:11 +0000 (0:00:00.645) 0:02:01.329 *********** 2025-06-22 19:55:37.417185 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:55:37.417204 | orchestrator | 2025-06-22 19:55:37.417224 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-06-22 19:55:37.417242 | orchestrator | Sunday 22 June 2025 19:55:12 +0000 (0:00:00.216) 0:02:01.545 *********** 2025-06-22 19:55:37.417256 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:55:37.417266 | orchestrator | 2025-06-22 19:55:37.417277 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-06-22 19:55:37.417288 | orchestrator | Sunday 22 June 2025 19:55:14 +0000 (0:00:01.911) 0:02:03.456 *********** 2025-06-22 19:55:37.417299 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:55:37.417331 | orchestrator | 2025-06-22 19:55:37.417351 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2025-06-22 19:55:37.417362 | orchestrator | 2025-06-22 19:55:37.417373 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2025-06-22 19:55:37.417388 | orchestrator | Sunday 22 June 2025 19:55:32 +0000 (0:00:18.169) 0:02:21.625 *********** 2025-06-22 19:55:37.417406 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:55:37.417425 | orchestrator | 2025-06-22 19:55:37.417444 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] 
****************************** 2025-06-22 19:55:37.417463 | orchestrator | Sunday 22 June 2025 19:55:33 +0000 (0:00:00.718) 0:02:22.344 *********** 2025-06-22 19:55:37.417480 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-06-22 19:55:37.417491 | orchestrator | enable_outward_rabbitmq_True 2025-06-22 19:55:37.417564 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-06-22 19:55:37.417577 | orchestrator | outward_rabbitmq_restart 2025-06-22 19:55:37.417588 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:55:37.417599 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:55:37.417610 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:55:37.417621 | orchestrator | 2025-06-22 19:55:37.417633 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2025-06-22 19:55:37.417643 | orchestrator | skipping: no hosts matched 2025-06-22 19:55:37.417658 | orchestrator | 2025-06-22 19:55:37.417678 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2025-06-22 19:55:37.417698 | orchestrator | skipping: no hosts matched 2025-06-22 19:55:37.417717 | orchestrator | 2025-06-22 19:55:37.417729 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2025-06-22 19:55:37.417739 | orchestrator | skipping: no hosts matched 2025-06-22 19:55:37.417750 | orchestrator | 2025-06-22 19:55:37.417761 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:55:37.417773 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-06-22 19:55:37.417784 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-06-22 19:55:37.417795 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:55:37.417806 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 19:55:37.417817 | orchestrator | 2025-06-22 19:55:37.417827 | orchestrator | 2025-06-22 19:55:37.417838 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:55:37.417849 | orchestrator | Sunday 22 June 2025 19:55:35 +0000 (0:00:02.564) 0:02:24.909 *********** 2025-06-22 19:55:37.417860 | orchestrator | =============================================================================== 2025-06-22 19:55:37.417871 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 89.07s 2025-06-22 19:55:37.417881 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------ 12.40s 2025-06-22 19:55:37.417892 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 5.61s 2025-06-22 19:55:37.417903 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.30s 2025-06-22 19:55:37.417913 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 3.08s 2025-06-22 19:55:37.417924 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 2.88s 2025-06-22 19:55:37.417935 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.56s 2025-06-22 19:55:37.417945 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 
2.41s 2025-06-22 19:55:37.417956 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.07s 2025-06-22 19:55:37.417967 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 1.76s 2025-06-22 19:55:37.417977 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.75s 2025-06-22 19:55:37.417988 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.62s 2025-06-22 19:55:37.417999 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.51s 2025-06-22 19:55:37.418009 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.24s 2025-06-22 19:55:37.418055 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.09s 2025-06-22 19:55:37.418075 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 1.03s 2025-06-22 19:55:37.418086 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.03s 2025-06-22 19:55:37.418104 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.00s 2025-06-22 19:55:37.418115 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 0.97s 2025-06-22 19:55:37.418126 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.93s 2025-06-22 19:55:37.418266 | orchestrator | 2025-06-22 19:55:37 | INFO  | Task 5d805b9a-719c-44ed-bde2-6ef304fdf04a is in state STARTED 2025-06-22 19:55:37.418281 | orchestrator | 2025-06-22 19:55:37 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:55:37.418292 | orchestrator | 2025-06-22 19:55:37 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:55:37.418331 | orchestrator | 2025-06-22 19:55:37 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:55:40.465556 | orchestrator | 2025-06-22 19:55:40 | INFO  | Task 5d805b9a-719c-44ed-bde2-6ef304fdf04a is in state STARTED 2025-06-22 19:55:40.471676 | orchestrator | 2025-06-22 19:55:40 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:55:40.473552 | orchestrator | 2025-06-22 19:55:40 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:55:40.473623 | orchestrator | 2025-06-22 19:55:40 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:55:43.529188 | orchestrator | 2025-06-22 19:55:43 | INFO  | Task 5d805b9a-719c-44ed-bde2-6ef304fdf04a is in state STARTED 2025-06-22 19:55:43.529288 | orchestrator | 2025-06-22 19:55:43 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:55:43.531757 | orchestrator | 2025-06-22 19:55:43 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:55:43.531858 | orchestrator | 2025-06-22 19:55:43 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:55:46.571751 | orchestrator | 2025-06-22 19:55:46 | INFO  | Task 5d805b9a-719c-44ed-bde2-6ef304fdf04a is in state STARTED 2025-06-22 19:55:46.573229 | orchestrator | 2025-06-22 19:55:46 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:55:46.575810 | orchestrator | 2025-06-22 19:55:46 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:55:46.575839 | orchestrator | 2025-06-22 19:55:46 | INFO  | Wait 1 second(s) 
until the next check 2025-06-22 19:55:49.621944 | orchestrator | 2025-06-22 19:55:49 | INFO  | Task 5d805b9a-719c-44ed-bde2-6ef304fdf04a is in state STARTED 2025-06-22 19:55:49.625340 | orchestrator | 2025-06-22 19:55:49 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:55:49.628180 | orchestrator | 2025-06-22 19:55:49 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:55:49.628223 | orchestrator | 2025-06-22 19:55:49 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:55:52.665632 | orchestrator | 2025-06-22 19:55:52 | INFO  | Task 5d805b9a-719c-44ed-bde2-6ef304fdf04a is in state STARTED 2025-06-22 19:55:52.669264 | orchestrator | 2025-06-22 19:55:52 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:55:52.670353 | orchestrator | 2025-06-22 19:55:52 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:55:52.670398 | orchestrator | 2025-06-22 19:55:52 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:55:55.714682 | orchestrator | 2025-06-22 19:55:55 | INFO  | Task 5d805b9a-719c-44ed-bde2-6ef304fdf04a is in state STARTED 2025-06-22 19:55:55.718672 | orchestrator | 2025-06-22 19:55:55 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:55:55.720837 | orchestrator | 2025-06-22 19:55:55 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:55:55.720869 | orchestrator | 2025-06-22 19:55:55 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:55:58.763842 | orchestrator | 2025-06-22 19:55:58 | INFO  | Task 5d805b9a-719c-44ed-bde2-6ef304fdf04a is in state STARTED 2025-06-22 19:55:58.764085 | orchestrator | 2025-06-22 19:55:58 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:55:58.765018 | orchestrator | 2025-06-22 19:55:58 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:55:58.765059 | orchestrator | 2025-06-22 19:55:58 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:56:01.810275 | orchestrator | 2025-06-22 19:56:01 | INFO  | Task 5d805b9a-719c-44ed-bde2-6ef304fdf04a is in state STARTED 2025-06-22 19:56:01.812465 | orchestrator | 2025-06-22 19:56:01 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:56:01.814572 | orchestrator | 2025-06-22 19:56:01 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:56:01.814652 | orchestrator | 2025-06-22 19:56:01 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:56:04.867581 | orchestrator | 2025-06-22 19:56:04 | INFO  | Task 5d805b9a-719c-44ed-bde2-6ef304fdf04a is in state STARTED 2025-06-22 19:56:04.869799 | orchestrator | 2025-06-22 19:56:04 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:56:04.871989 | orchestrator | 2025-06-22 19:56:04 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:56:04.872056 | orchestrator | 2025-06-22 19:56:04 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:56:07.919419 | orchestrator | 2025-06-22 19:56:07 | INFO  | Task 5d805b9a-719c-44ed-bde2-6ef304fdf04a is in state STARTED 2025-06-22 19:56:07.920373 | orchestrator | 2025-06-22 19:56:07 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:56:07.926624 | orchestrator | 2025-06-22 19:56:07 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 
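The repeated "Task <uuid> is in state STARTED" / "Wait 1 second(s) until the next check" lines come from a polling loop that keeps checking the submitted deployment tasks until each one reaches SUCCESS (or FAILURE). A minimal sketch of that pattern follows; the real osism tooling queries its task backend, so get_task_state() here is only a stub standing in for that call.

import time

def get_task_state(task_id: str) -> str:
    # Stub so the sketch runs; the real tooling asks its task backend here.
    return "SUCCESS"

def wait_for_tasks(task_ids: list[str], interval: float = 1.0) -> None:
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)

if __name__ == "__main__":
    wait_for_tasks(["5d805b9a-719c-44ed-bde2-6ef304fdf04a"])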
2025-06-22 19:56:07.926668 | orchestrator | 2025-06-22 19:56:07 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:56:10.975832 | orchestrator | 2025-06-22 19:56:10 | INFO  | Task 5d805b9a-719c-44ed-bde2-6ef304fdf04a is in state STARTED 2025-06-22 19:56:10.980841 | orchestrator | 2025-06-22 19:56:10 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:56:10.981944 | orchestrator | 2025-06-22 19:56:10 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:56:10.981975 | orchestrator | 2025-06-22 19:56:10 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:56:14.038359 | orchestrator | 2025-06-22 19:56:14 | INFO  | Task 5d805b9a-719c-44ed-bde2-6ef304fdf04a is in state STARTED 2025-06-22 19:56:14.043844 | orchestrator | 2025-06-22 19:56:14 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:56:14.043882 | orchestrator | 2025-06-22 19:56:14 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:56:14.043895 | orchestrator | 2025-06-22 19:56:14 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:56:17.099373 | orchestrator | 2025-06-22 19:56:17 | INFO  | Task 5d805b9a-719c-44ed-bde2-6ef304fdf04a is in state STARTED 2025-06-22 19:56:17.101616 | orchestrator | 2025-06-22 19:56:17 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:56:17.103478 | orchestrator | 2025-06-22 19:56:17 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:56:17.103669 | orchestrator | 2025-06-22 19:56:17 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:56:20.147642 | orchestrator | 2025-06-22 19:56:20 | INFO  | Task 5d805b9a-719c-44ed-bde2-6ef304fdf04a is in state STARTED 2025-06-22 19:56:20.149930 | orchestrator | 2025-06-22 19:56:20 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:56:20.153973 | orchestrator | 2025-06-22 19:56:20 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:56:20.154008 | orchestrator | 2025-06-22 19:56:20 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:56:23.197676 | orchestrator | 2025-06-22 19:56:23 | INFO  | Task 5d805b9a-719c-44ed-bde2-6ef304fdf04a is in state STARTED 2025-06-22 19:56:23.200013 | orchestrator | 2025-06-22 19:56:23 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:56:23.200974 | orchestrator | 2025-06-22 19:56:23 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:56:23.201030 | orchestrator | 2025-06-22 19:56:23 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:56:26.243831 | orchestrator | 2025-06-22 19:56:26 | INFO  | Task 5d805b9a-719c-44ed-bde2-6ef304fdf04a is in state STARTED 2025-06-22 19:56:26.247154 | orchestrator | 2025-06-22 19:56:26 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:56:26.249168 | orchestrator | 2025-06-22 19:56:26 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:56:26.249900 | orchestrator | 2025-06-22 19:56:26 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:56:29.294491 | orchestrator | 2025-06-22 19:56:29 | INFO  | Task 5d805b9a-719c-44ed-bde2-6ef304fdf04a is in state SUCCESS 2025-06-22 19:56:29.295952 | orchestrator | 2025-06-22 19:56:29.296015 | orchestrator | 2025-06-22 19:56:29.296039 | orchestrator | PLAY [Group hosts based on configuration] 
************************************** 2025-06-22 19:56:29.296060 | orchestrator | 2025-06-22 19:56:29.296079 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-22 19:56:29.296091 | orchestrator | Sunday 22 June 2025 19:54:07 +0000 (0:00:00.217) 0:00:00.217 *********** 2025-06-22 19:56:29.296103 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:56:29.296115 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:56:29.296126 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:56:29.296136 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:56:29.296147 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:56:29.296158 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:56:29.296169 | orchestrator | 2025-06-22 19:56:29.296179 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-22 19:56:29.296190 | orchestrator | Sunday 22 June 2025 19:54:08 +0000 (0:00:01.075) 0:00:01.292 *********** 2025-06-22 19:56:29.296201 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2025-06-22 19:56:29.296217 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2025-06-22 19:56:29.296255 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2025-06-22 19:56:29.296414 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2025-06-22 19:56:29.296431 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2025-06-22 19:56:29.296442 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2025-06-22 19:56:29.296453 | orchestrator | 2025-06-22 19:56:29.296464 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2025-06-22 19:56:29.296476 | orchestrator | 2025-06-22 19:56:29.296487 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2025-06-22 19:56:29.296599 | orchestrator | Sunday 22 June 2025 19:54:10 +0000 (0:00:01.772) 0:00:03.065 *********** 2025-06-22 19:56:29.296618 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:56:29.296657 | orchestrator | 2025-06-22 19:56:29.296671 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2025-06-22 19:56:29.296691 | orchestrator | Sunday 22 June 2025 19:54:11 +0000 (0:00:01.509) 0:00:04.574 *********** 2025-06-22 19:56:29.296715 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:56:29.296738 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:56:29.296759 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 
'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:56:29.296781 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:56:29.296801 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:56:29.296833 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:56:29.296846 | orchestrator | 2025-06-22 19:56:29.296858 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2025-06-22 19:56:29.296872 | orchestrator | Sunday 22 June 2025 19:54:13 +0000 (0:00:02.096) 0:00:06.671 *********** 2025-06-22 19:56:29.296884 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:56:29.296904 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:56:29.296924 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:56:29.296936 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:56:29.296947 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:56:29.296958 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:56:29.296969 | orchestrator | 2025-06-22 19:56:29.296980 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2025-06-22 19:56:29.296993 | orchestrator | Sunday 22 June 2025 19:54:16 +0000 (0:00:02.673) 0:00:09.345 *********** 2025-06-22 19:56:29.297011 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:56:29.297028 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:56:29.297057 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:56:29.297083 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:56:29.297115 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:56:29.297135 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:56:29.297154 | orchestrator | 2025-06-22 19:56:29.297172 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2025-06-22 19:56:29.297192 | orchestrator | Sunday 22 June 2025 19:54:18 +0000 (0:00:01.986) 0:00:11.331 *********** 2025-06-22 19:56:29.297208 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:56:29.297219 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:56:29.297230 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:56:29.297241 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:56:29.297252 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', 
'/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:56:29.297272 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:56:29.297293 | orchestrator | 2025-06-22 19:56:29.297304 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2025-06-22 19:56:29.297344 | orchestrator | Sunday 22 June 2025 19:54:21 +0000 (0:00:02.801) 0:00:14.133 *********** 2025-06-22 19:56:29.297361 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:56:29.297377 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:56:29.297395 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:56:29.297413 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:56:29.297431 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:56:29.297448 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 
'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:56:29.297467 | orchestrator | 2025-06-22 19:56:29.297487 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2025-06-22 19:56:29.297500 | orchestrator | Sunday 22 June 2025 19:54:23 +0000 (0:00:02.116) 0:00:16.249 *********** 2025-06-22 19:56:29.297513 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:56:29.297533 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:56:29.297550 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:56:29.297568 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:56:29.297587 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:56:29.297605 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:56:29.297624 | orchestrator | 2025-06-22 19:56:29.297636 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2025-06-22 19:56:29.297647 | orchestrator | Sunday 22 June 2025 19:54:26 +0000 (0:00:02.976) 0:00:19.226 *********** 2025-06-22 19:56:29.297668 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2025-06-22 19:56:29.297679 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2025-06-22 19:56:29.297690 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2025-06-22 19:56:29.297710 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2025-06-22 19:56:29.297722 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2025-06-22 19:56:29.297733 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2025-06-22 19:56:29.297744 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-06-22 19:56:29.297754 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-06-22 19:56:29.297765 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-06-22 19:56:29.297776 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-06-22 19:56:29.297786 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-06-22 19:56:29.297848 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2025-06-22 19:56:29.297862 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-06-22 19:56:29.297875 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-06-22 19:56:29.297886 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-06-22 19:56:29.297897 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-06-22 19:56:29.297908 | orchestrator | changed: [testbed-node-2] => 
(item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-06-22 19:56:29.297919 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2025-06-22 19:56:29.297930 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-06-22 19:56:29.297942 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-06-22 19:56:29.297952 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-06-22 19:56:29.297966 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-06-22 19:56:29.297985 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-06-22 19:56:29.298003 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2025-06-22 19:56:29.298101 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-06-22 19:56:29.298116 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-06-22 19:56:29.298127 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-06-22 19:56:29.298138 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-06-22 19:56:29.298149 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-06-22 19:56:29.298169 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2025-06-22 19:56:29.298180 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-06-22 19:56:29.298191 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-06-22 19:56:29.298201 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-06-22 19:56:29.298212 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-06-22 19:56:29.298223 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-06-22 19:56:29.298234 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2025-06-22 19:56:29.298244 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-06-22 19:56:29.298256 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-06-22 19:56:29.298266 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2025-06-22 19:56:29.298277 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-06-22 19:56:29.298298 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2025-06-22 19:56:29.298332 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 
'value': 'physnet1:br-ex', 'state': 'present'}) 2025-06-22 19:56:29.298344 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2025-06-22 19:56:29.298356 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2025-06-22 19:56:29.298367 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2025-06-22 19:56:29.298377 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2025-06-22 19:56:29.298394 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2025-06-22 19:56:29.298406 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2025-06-22 19:56:29.298416 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-06-22 19:56:29.298427 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-06-22 19:56:29.298438 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2025-06-22 19:56:29.298448 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-06-22 19:56:29.298459 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-06-22 19:56:29.298470 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2025-06-22 19:56:29.298481 | orchestrator | 2025-06-22 19:56:29.298492 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-06-22 19:56:29.298503 | orchestrator | Sunday 22 June 2025 19:54:45 +0000 (0:00:19.078) 0:00:38.304 *********** 2025-06-22 19:56:29.298514 | orchestrator | 2025-06-22 19:56:29.298525 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-06-22 19:56:29.298543 | orchestrator | Sunday 22 June 2025 19:54:45 +0000 (0:00:00.061) 0:00:38.366 *********** 2025-06-22 19:56:29.298554 | orchestrator | 2025-06-22 19:56:29.298565 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-06-22 19:56:29.298575 | orchestrator | Sunday 22 June 2025 19:54:45 +0000 (0:00:00.058) 0:00:38.425 *********** 2025-06-22 19:56:29.298586 | orchestrator | 2025-06-22 19:56:29.298597 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-06-22 19:56:29.298608 | orchestrator | Sunday 22 June 2025 19:54:45 +0000 (0:00:00.059) 0:00:38.485 *********** 2025-06-22 19:56:29.298618 | orchestrator | 2025-06-22 19:56:29.298629 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-06-22 19:56:29.298640 | orchestrator | Sunday 22 June 2025 19:54:45 +0000 (0:00:00.057) 0:00:38.542 *********** 2025-06-22 19:56:29.298651 | orchestrator | 2025-06-22 19:56:29.298661 | 
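The "Configure OVN in OVSDB" task above writes per-chassis settings into the external_ids column of the local Open_vSwitch table: every node gets its Geneve tunnel endpoint (ovn-encap-ip) and the three southbound-DB endpoints (ovn-remote), testbed-node-0/1/2 additionally keep the physnet1:br-ex bridge mapping and the enable-chassis-as-gw CMS options, while testbed-node-3/4/5 get ovn-chassis-mac-mappings instead. The deployment applies this through kolla-ansible's own modules; a rough manual equivalent for testbed-node-0, using plain ovs-vsctl on that host, might look like the following (values copied from the log output; an illustrative sketch, not the role's actual implementation):

import subprocess

# Settings for testbed-node-0 as reported by the task output above.
# Values containing commas are double-quoted so ovs-vsctl does not split them.
CHASSIS_EXTERNAL_IDS = {
    "ovn-encap-ip": "192.168.16.10",        # this chassis' tunnel endpoint
    "ovn-encap-type": "geneve",
    "ovn-remote": '"tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642"',
    "ovn-remote-probe-interval": "60000",   # milliseconds
    "ovn-openflow-probe-interval": "60",    # seconds
    "ovn-monitor-all": "false",
    "ovn-bridge-mappings": "physnet1:br-ex",
    "ovn-cms-options": '"enable-chassis-as-gw,availability-zones=nova"',
}

def configure_chassis(external_ids):
    """Write the chassis settings into the local Open_vSwitch table."""
    for key, value in external_ids.items():
        subprocess.run(
            ["ovs-vsctl", "set", "Open_vSwitch", ".", f"external_ids:{key}={value}"],
            check=True,
        )

if __name__ == "__main__":
    configure_chassis(CHASSIS_EXTERNAL_IDS)

ovn-controller on each node reads these keys to decide where to build Geneve tunnels and which southbound database endpoints to connect to.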
orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2025-06-22 19:56:29.298672 | orchestrator | Sunday 22 June 2025 19:54:45 +0000 (0:00:00.082) 0:00:38.625 *********** 2025-06-22 19:56:29.298682 | orchestrator | 2025-06-22 19:56:29.298693 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2025-06-22 19:56:29.298704 | orchestrator | Sunday 22 June 2025 19:54:45 +0000 (0:00:00.057) 0:00:38.683 *********** 2025-06-22 19:56:29.298715 | orchestrator | ok: [testbed-node-3] 2025-06-22 19:56:29.298726 | orchestrator | ok: [testbed-node-4] 2025-06-22 19:56:29.298736 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:56:29.298747 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:56:29.298758 | orchestrator | ok: [testbed-node-5] 2025-06-22 19:56:29.298769 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:56:29.298779 | orchestrator | 2025-06-22 19:56:29.298790 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2025-06-22 19:56:29.298801 | orchestrator | Sunday 22 June 2025 19:54:47 +0000 (0:00:01.833) 0:00:40.516 *********** 2025-06-22 19:56:29.298812 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:56:29.298829 | orchestrator | changed: [testbed-node-4] 2025-06-22 19:56:29.298848 | orchestrator | changed: [testbed-node-3] 2025-06-22 19:56:29.298867 | orchestrator | changed: [testbed-node-5] 2025-06-22 19:56:29.298878 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:56:29.298889 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:56:29.298900 | orchestrator | 2025-06-22 19:56:29.298910 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2025-06-22 19:56:29.298921 | orchestrator | 2025-06-22 19:56:29.298936 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-06-22 19:56:29.298954 | orchestrator | Sunday 22 June 2025 19:55:14 +0000 (0:00:26.849) 0:01:07.366 *********** 2025-06-22 19:56:29.298972 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:56:29.298990 | orchestrator | 2025-06-22 19:56:29.299008 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-06-22 19:56:29.299027 | orchestrator | Sunday 22 June 2025 19:55:15 +0000 (0:00:00.539) 0:01:07.905 *********** 2025-06-22 19:56:29.299045 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:56:29.299058 | orchestrator | 2025-06-22 19:56:29.299076 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2025-06-22 19:56:29.299114 | orchestrator | Sunday 22 June 2025 19:55:15 +0000 (0:00:00.567) 0:01:08.473 *********** 2025-06-22 19:56:29.299126 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:56:29.299137 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:56:29.299148 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:56:29.299159 | orchestrator | 2025-06-22 19:56:29.299170 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2025-06-22 19:56:29.299181 | orchestrator | Sunday 22 June 2025 19:55:16 +0000 (0:00:00.744) 0:01:09.217 *********** 2025-06-22 19:56:29.299192 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:56:29.299202 | orchestrator | ok: [testbed-node-1] 2025-06-22 
19:56:29.299231 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:56:29.299242 | orchestrator | 2025-06-22 19:56:29.299253 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2025-06-22 19:56:29.299264 | orchestrator | Sunday 22 June 2025 19:55:16 +0000 (0:00:00.267) 0:01:09.485 *********** 2025-06-22 19:56:29.299275 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:56:29.299286 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:56:29.299302 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:56:29.299355 | orchestrator | 2025-06-22 19:56:29.299367 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2025-06-22 19:56:29.299378 | orchestrator | Sunday 22 June 2025 19:55:17 +0000 (0:00:00.300) 0:01:09.786 *********** 2025-06-22 19:56:29.299389 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:56:29.299400 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:56:29.299411 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:56:29.299421 | orchestrator | 2025-06-22 19:56:29.299433 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2025-06-22 19:56:29.299443 | orchestrator | Sunday 22 June 2025 19:55:17 +0000 (0:00:00.408) 0:01:10.194 *********** 2025-06-22 19:56:29.299454 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:56:29.299465 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:56:29.299475 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:56:29.299486 | orchestrator | 2025-06-22 19:56:29.299497 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2025-06-22 19:56:29.299508 | orchestrator | Sunday 22 June 2025 19:55:17 +0000 (0:00:00.319) 0:01:10.513 *********** 2025-06-22 19:56:29.299519 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:29.299530 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:29.299540 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:29.299551 | orchestrator | 2025-06-22 19:56:29.299562 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2025-06-22 19:56:29.299573 | orchestrator | Sunday 22 June 2025 19:55:18 +0000 (0:00:00.244) 0:01:10.757 *********** 2025-06-22 19:56:29.299584 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:29.299594 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:29.299605 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:29.299616 | orchestrator | 2025-06-22 19:56:29.299627 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2025-06-22 19:56:29.299638 | orchestrator | Sunday 22 June 2025 19:55:18 +0000 (0:00:00.247) 0:01:11.005 *********** 2025-06-22 19:56:29.299648 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:29.299659 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:29.299670 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:29.299681 | orchestrator | 2025-06-22 19:56:29.299692 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2025-06-22 19:56:29.299703 | orchestrator | Sunday 22 June 2025 19:55:18 +0000 (0:00:00.380) 0:01:11.386 *********** 2025-06-22 19:56:29.299714 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:29.299724 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:29.299735 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:29.299746 | orchestrator | 2025-06-22 
19:56:29.299757 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2025-06-22 19:56:29.299767 | orchestrator | Sunday 22 June 2025 19:55:18 +0000 (0:00:00.266) 0:01:11.653 *********** 2025-06-22 19:56:29.299778 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:29.299789 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:29.299800 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:29.299811 | orchestrator | 2025-06-22 19:56:29.299821 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2025-06-22 19:56:29.299832 | orchestrator | Sunday 22 June 2025 19:55:19 +0000 (0:00:00.367) 0:01:12.020 *********** 2025-06-22 19:56:29.299843 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:29.299854 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:29.299879 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:29.299897 | orchestrator | 2025-06-22 19:56:29.299909 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2025-06-22 19:56:29.299920 | orchestrator | Sunday 22 June 2025 19:55:19 +0000 (0:00:00.424) 0:01:12.445 *********** 2025-06-22 19:56:29.299930 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:29.299941 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:29.299952 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:29.299962 | orchestrator | 2025-06-22 19:56:29.299973 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2025-06-22 19:56:29.299984 | orchestrator | Sunday 22 June 2025 19:55:20 +0000 (0:00:00.554) 0:01:13.000 *********** 2025-06-22 19:56:29.299995 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:29.300006 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:29.300016 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:29.300027 | orchestrator | 2025-06-22 19:56:29.300038 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2025-06-22 19:56:29.300049 | orchestrator | Sunday 22 June 2025 19:55:20 +0000 (0:00:00.348) 0:01:13.349 *********** 2025-06-22 19:56:29.300060 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:29.300070 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:29.300081 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:29.300092 | orchestrator | 2025-06-22 19:56:29.300103 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2025-06-22 19:56:29.300114 | orchestrator | Sunday 22 June 2025 19:55:20 +0000 (0:00:00.307) 0:01:13.656 *********** 2025-06-22 19:56:29.300125 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:29.300135 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:29.300146 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:29.300157 | orchestrator | 2025-06-22 19:56:29.300175 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2025-06-22 19:56:29.300187 | orchestrator | Sunday 22 June 2025 19:55:21 +0000 (0:00:00.627) 0:01:14.284 *********** 2025-06-22 19:56:29.300198 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:29.300209 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:29.300220 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:29.300230 | orchestrator | 2025-06-22 19:56:29.300241 | orchestrator | TASK [ovn-db : Fail on 
existing OVN SB cluster with no leader] ***************** 2025-06-22 19:56:29.300252 | orchestrator | Sunday 22 June 2025 19:55:22 +0000 (0:00:00.500) 0:01:14.784 *********** 2025-06-22 19:56:29.300263 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:29.300274 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:29.300284 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:29.300295 | orchestrator | 2025-06-22 19:56:29.300306 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2025-06-22 19:56:29.300333 | orchestrator | Sunday 22 June 2025 19:55:22 +0000 (0:00:00.260) 0:01:15.045 *********** 2025-06-22 19:56:29.300350 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:56:29.300361 | orchestrator | 2025-06-22 19:56:29.300372 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2025-06-22 19:56:29.300383 | orchestrator | Sunday 22 June 2025 19:55:22 +0000 (0:00:00.512) 0:01:15.557 *********** 2025-06-22 19:56:29.300394 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:56:29.300405 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:56:29.300415 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:56:29.300430 | orchestrator | 2025-06-22 19:56:29.300450 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2025-06-22 19:56:29.300469 | orchestrator | Sunday 22 June 2025 19:55:23 +0000 (0:00:00.743) 0:01:16.300 *********** 2025-06-22 19:56:29.300489 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:56:29.300510 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:56:29.300531 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:56:29.300551 | orchestrator | 2025-06-22 19:56:29.300573 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2025-06-22 19:56:29.300605 | orchestrator | Sunday 22 June 2025 19:55:24 +0000 (0:00:00.747) 0:01:17.048 *********** 2025-06-22 19:56:29.300617 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:29.300628 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:29.300639 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:29.300650 | orchestrator | 2025-06-22 19:56:29.300664 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2025-06-22 19:56:29.300682 | orchestrator | Sunday 22 June 2025 19:55:24 +0000 (0:00:00.349) 0:01:17.398 *********** 2025-06-22 19:56:29.300700 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:29.300718 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:29.300737 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:29.300755 | orchestrator | 2025-06-22 19:56:29.300773 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2025-06-22 19:56:29.300791 | orchestrator | Sunday 22 June 2025 19:55:25 +0000 (0:00:00.389) 0:01:17.787 *********** 2025-06-22 19:56:29.300809 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:29.300828 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:29.300847 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:29.300866 | orchestrator | 2025-06-22 19:56:29.300885 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2025-06-22 19:56:29.300903 | orchestrator | Sunday 22 June 2025 19:55:25 +0000 
(0:00:00.611) 0:01:18.399 *********** 2025-06-22 19:56:29.300915 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:29.300925 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:29.300936 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:29.300947 | orchestrator | 2025-06-22 19:56:29.300957 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2025-06-22 19:56:29.300968 | orchestrator | Sunday 22 June 2025 19:55:26 +0000 (0:00:00.336) 0:01:18.735 *********** 2025-06-22 19:56:29.300979 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:29.300989 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:29.301000 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:29.301011 | orchestrator | 2025-06-22 19:56:29.301021 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2025-06-22 19:56:29.301032 | orchestrator | Sunday 22 June 2025 19:55:26 +0000 (0:00:00.321) 0:01:19.057 *********** 2025-06-22 19:56:29.301043 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:29.301054 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:29.301065 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:29.301075 | orchestrator | 2025-06-22 19:56:29.301086 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-06-22 19:56:29.301097 | orchestrator | Sunday 22 June 2025 19:55:26 +0000 (0:00:00.356) 0:01:19.414 *********** 2025-06-22 19:56:29.301109 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:56:29.301122 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:56:29.301144 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:56:29.301157 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:56:29.301188 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:56:29.301200 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:56:29.301211 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:56:29.301222 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:56:29.301234 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:56:29.301245 | orchestrator | 2025-06-22 19:56:29.301256 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-06-22 19:56:29.301266 | orchestrator | Sunday 22 June 2025 19:55:28 +0000 (0:00:01.677) 0:01:21.091 *********** 2025-06-22 19:56:29.301278 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:56:29.301289 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:56:29.301300 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 
19:56:29.301360 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:56:29.301373 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:56:29.301385 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:56:29.301397 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:56:29.301408 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:56:29.301419 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:56:29.301433 | orchestrator | 2025-06-22 19:56:29.301451 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-06-22 19:56:29.301470 | orchestrator | Sunday 22 June 2025 19:55:33 +0000 (0:00:04.726) 0:01:25.819 *********** 2025-06-22 19:56:29.301489 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:56:29.301506 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': 
{'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:56:29.301524 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:56:29.301564 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:56:29.301670 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:56:29.301703 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:56:29.301715 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:56:29.301726 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:56:29.301737 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:56:29.301748 | orchestrator | 2025-06-22 19:56:29.301759 | orchestrator | TASK 
[ovn-db : Flush handlers] ************************************************* 2025-06-22 19:56:29.301771 | orchestrator | Sunday 22 June 2025 19:55:35 +0000 (0:00:02.436) 0:01:28.255 *********** 2025-06-22 19:56:29.301782 | orchestrator | 2025-06-22 19:56:29.301792 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-06-22 19:56:29.301803 | orchestrator | Sunday 22 June 2025 19:55:35 +0000 (0:00:00.067) 0:01:28.323 *********** 2025-06-22 19:56:29.301814 | orchestrator | 2025-06-22 19:56:29.301824 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-06-22 19:56:29.301835 | orchestrator | Sunday 22 June 2025 19:55:35 +0000 (0:00:00.103) 0:01:28.426 *********** 2025-06-22 19:56:29.301846 | orchestrator | 2025-06-22 19:56:29.301856 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-06-22 19:56:29.301867 | orchestrator | Sunday 22 June 2025 19:55:35 +0000 (0:00:00.122) 0:01:28.549 *********** 2025-06-22 19:56:29.301878 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:56:29.301888 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:56:29.301900 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:56:29.301910 | orchestrator | 2025-06-22 19:56:29.301927 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-06-22 19:56:29.301938 | orchestrator | Sunday 22 June 2025 19:55:43 +0000 (0:00:07.475) 0:01:36.024 *********** 2025-06-22 19:56:29.301949 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:56:29.301959 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:56:29.301970 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:56:29.301981 | orchestrator | 2025-06-22 19:56:29.301992 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-06-22 19:56:29.302003 | orchestrator | Sunday 22 June 2025 19:55:46 +0000 (0:00:02.929) 0:01:38.953 *********** 2025-06-22 19:56:29.302013 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:56:29.302094 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:56:29.302105 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:56:29.302116 | orchestrator | 2025-06-22 19:56:29.302127 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-06-22 19:56:29.302138 | orchestrator | Sunday 22 June 2025 19:55:48 +0000 (0:00:02.461) 0:01:41.414 *********** 2025-06-22 19:56:29.302149 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:29.302160 | orchestrator | 2025-06-22 19:56:29.302171 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-06-22 19:56:29.302183 | orchestrator | Sunday 22 June 2025 19:55:48 +0000 (0:00:00.129) 0:01:41.544 *********** 2025-06-22 19:56:29.302194 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:56:29.302205 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:56:29.302216 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:56:29.302227 | orchestrator | 2025-06-22 19:56:29.302249 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-06-22 19:56:29.302260 | orchestrator | Sunday 22 June 2025 19:55:49 +0000 (0:00:00.789) 0:01:42.334 *********** 2025-06-22 19:56:29.302271 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:29.302282 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:29.302293 | 
orchestrator | changed: [testbed-node-0] 2025-06-22 19:56:29.302304 | orchestrator | 2025-06-22 19:56:29.302345 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-06-22 19:56:29.302357 | orchestrator | Sunday 22 June 2025 19:55:50 +0000 (0:00:00.923) 0:01:43.257 *********** 2025-06-22 19:56:29.302371 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:56:29.302389 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:56:29.302409 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:56:29.302428 | orchestrator | 2025-06-22 19:56:29.302446 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-06-22 19:56:29.302464 | orchestrator | Sunday 22 June 2025 19:55:51 +0000 (0:00:00.750) 0:01:44.008 *********** 2025-06-22 19:56:29.302483 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:29.302502 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:29.302521 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:56:29.302539 | orchestrator | 2025-06-22 19:56:29.302566 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-06-22 19:56:29.302585 | orchestrator | Sunday 22 June 2025 19:55:51 +0000 (0:00:00.662) 0:01:44.670 *********** 2025-06-22 19:56:29.302604 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:56:29.302623 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:56:29.302641 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:56:29.302653 | orchestrator | 2025-06-22 19:56:29.302664 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-06-22 19:56:29.302675 | orchestrator | Sunday 22 June 2025 19:55:52 +0000 (0:00:00.952) 0:01:45.622 *********** 2025-06-22 19:56:29.302685 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:56:29.302696 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:56:29.302707 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:56:29.302718 | orchestrator | 2025-06-22 19:56:29.302729 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2025-06-22 19:56:29.302740 | orchestrator | Sunday 22 June 2025 19:55:54 +0000 (0:00:01.737) 0:01:47.360 *********** 2025-06-22 19:56:29.302751 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:56:29.302772 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:56:29.302783 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:56:29.302793 | orchestrator | 2025-06-22 19:56:29.302804 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-06-22 19:56:29.302815 | orchestrator | Sunday 22 June 2025 19:55:55 +0000 (0:00:00.369) 0:01:47.729 *********** 2025-06-22 19:56:29.302826 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:56:29.302839 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2025-06-22 19:56:29.302850 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:56:29.302862 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:56:29.302873 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:56:29.302884 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:56:29.302904 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:56:29.302921 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:56:29.302932 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:56:29.302950 | orchestrator | 2025-06-22 19:56:29.302961 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-06-22 19:56:29.302972 | orchestrator | Sunday 22 June 2025 19:55:56 +0000 (0:00:01.550) 0:01:49.280 *********** 2025-06-22 19:56:29.302984 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 
'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:56:29.302995 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:56:29.303006 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:56:29.303017 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:56:29.303028 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:56:29.303040 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:56:29.303064 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:56:29.303075 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:56:29.303092 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:56:29.303110 | orchestrator | 2025-06-22 19:56:29.303121 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-06-22 19:56:29.303132 | orchestrator | Sunday 22 June 2025 19:56:00 +0000 (0:00:04.384) 0:01:53.665 *********** 2025-06-22 19:56:29.303144 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:56:29.303155 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:56:29.303166 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:56:29.303178 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:56:29.303189 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:56:29.303200 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:56:29.303211 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:56:29.303229 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:56:29.303241 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 19:56:29.303258 | orchestrator | 2025-06-22 19:56:29.303273 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-06-22 19:56:29.303284 | orchestrator | Sunday 22 June 2025 19:56:04 +0000 (0:00:03.075) 0:01:56.740 *********** 2025-06-22 19:56:29.303295 | orchestrator | 2025-06-22 19:56:29.303306 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-06-22 19:56:29.303402 | orchestrator | Sunday 22 June 2025 19:56:04 +0000 (0:00:00.064) 0:01:56.805 *********** 2025-06-22 19:56:29.303422 | orchestrator | 2025-06-22 19:56:29.303440 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-06-22 19:56:29.303459 | orchestrator | Sunday 22 June 2025 19:56:04 +0000 (0:00:00.064) 0:01:56.870 *********** 2025-06-22 19:56:29.303470 | orchestrator | 2025-06-22 19:56:29.303481 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-06-22 19:56:29.303492 | orchestrator | Sunday 22 June 2025 19:56:04 +0000 (0:00:00.065) 0:01:56.935 *********** 2025-06-22 19:56:29.303502 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:56:29.303513 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:56:29.303524 | orchestrator | 2025-06-22 19:56:29.303535 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-06-22 19:56:29.303546 | orchestrator | Sunday 22 June 2025 19:56:10 +0000 (0:00:06.201) 0:02:03.136 *********** 2025-06-22 19:56:29.303556 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:56:29.303567 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:56:29.303578 | orchestrator | 2025-06-22 19:56:29.303588 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-06-22 19:56:29.303599 | orchestrator | Sunday 22 June 2025 19:56:16 +0000 (0:00:06.253) 0:02:09.390 *********** 2025-06-22 19:56:29.303610 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:56:29.303621 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:56:29.303631 | orchestrator | 2025-06-22 19:56:29.303642 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-06-22 19:56:29.303653 | orchestrator | Sunday 22 June 2025 19:56:23 +0000 (0:00:06.473) 0:02:15.863 *********** 2025-06-22 19:56:29.303664 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:56:29.303674 | orchestrator | 2025-06-22 
19:56:29.303685 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-06-22 19:56:29.303696 | orchestrator | Sunday 22 June 2025 19:56:23 +0000 (0:00:00.145) 0:02:16.009 *********** 2025-06-22 19:56:29.303707 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:56:29.303716 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:56:29.303726 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:56:29.303735 | orchestrator | 2025-06-22 19:56:29.303745 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-06-22 19:56:29.303755 | orchestrator | Sunday 22 June 2025 19:56:24 +0000 (0:00:01.005) 0:02:17.015 *********** 2025-06-22 19:56:29.303764 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:29.303774 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:29.303783 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:56:29.303793 | orchestrator | 2025-06-22 19:56:29.303802 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-06-22 19:56:29.303812 | orchestrator | Sunday 22 June 2025 19:56:24 +0000 (0:00:00.662) 0:02:17.677 *********** 2025-06-22 19:56:29.303822 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:56:29.303831 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:56:29.303841 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:56:29.303850 | orchestrator | 2025-06-22 19:56:29.303860 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-06-22 19:56:29.303870 | orchestrator | Sunday 22 June 2025 19:56:25 +0000 (0:00:00.763) 0:02:18.440 *********** 2025-06-22 19:56:29.303887 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:56:29.303897 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:56:29.303906 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:56:29.303916 | orchestrator | 2025-06-22 19:56:29.303925 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-06-22 19:56:29.303935 | orchestrator | Sunday 22 June 2025 19:56:26 +0000 (0:00:00.686) 0:02:19.127 *********** 2025-06-22 19:56:29.303944 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:56:29.303954 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:56:29.303963 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:56:29.303973 | orchestrator | 2025-06-22 19:56:29.303982 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-06-22 19:56:29.303992 | orchestrator | Sunday 22 June 2025 19:56:27 +0000 (0:00:01.104) 0:02:20.231 *********** 2025-06-22 19:56:29.304001 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:56:29.304011 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:56:29.304020 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:56:29.304029 | orchestrator | 2025-06-22 19:56:29.304039 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 19:56:29.304049 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-06-22 19:56:29.304059 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-06-22 19:56:29.304077 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-06-22 19:56:29.304087 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 
rescued=0 ignored=0 2025-06-22 19:56:29.304097 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:56:29.304107 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 19:56:29.304117 | orchestrator | 2025-06-22 19:56:29.304126 | orchestrator | 2025-06-22 19:56:29.304136 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 19:56:29.304152 | orchestrator | Sunday 22 June 2025 19:56:28 +0000 (0:00:01.056) 0:02:21.287 *********** 2025-06-22 19:56:29.304162 | orchestrator | =============================================================================== 2025-06-22 19:56:29.304171 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 26.85s 2025-06-22 19:56:29.304181 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 19.08s 2025-06-22 19:56:29.304190 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 13.68s 2025-06-22 19:56:29.304200 | orchestrator | ovn-db : Restart ovn-sb-db container ------------------------------------ 9.18s 2025-06-22 19:56:29.304209 | orchestrator | ovn-db : Restart ovn-northd container ----------------------------------- 8.93s 2025-06-22 19:56:29.304218 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.73s 2025-06-22 19:56:29.304228 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.38s 2025-06-22 19:56:29.304237 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.08s 2025-06-22 19:56:29.304247 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.98s 2025-06-22 19:56:29.304256 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 2.80s 2025-06-22 19:56:29.304266 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 2.67s 2025-06-22 19:56:29.304275 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.44s 2025-06-22 19:56:29.304285 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 2.12s 2025-06-22 19:56:29.304300 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 2.10s 2025-06-22 19:56:29.304330 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.99s 2025-06-22 19:56:29.304341 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.83s 2025-06-22 19:56:29.304351 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.77s 2025-06-22 19:56:29.304360 | orchestrator | ovn-db : Wait for ovn-sb-db --------------------------------------------- 1.74s 2025-06-22 19:56:29.304370 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.68s 2025-06-22 19:56:29.304379 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.55s 2025-06-22 19:56:29.304389 | orchestrator | 2025-06-22 19:56:29 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:56:29.304399 | orchestrator | 2025-06-22 19:56:29 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:56:29.304408 | orchestrator | 2025-06-22 19:56:29 
| INFO  | Wait 1 second(s) until the next check 2025-06-22 19:56:32.335002 | orchestrator | 2025-06-22 19:56:32 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:56:32.338577 | orchestrator | 2025-06-22 19:56:32 | INFO  | Task 21fb0fa9-2cae-49f8-9a8c-0c6196331088 is in state STARTED 2025-06-22 19:56:32.339630 | orchestrator | 2025-06-22 19:56:32 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:56:32.339659 | orchestrator | 2025-06-22 19:56:32 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:56:35.386434 | orchestrator | 2025-06-22 19:56:35 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:56:35.388166 | orchestrator | 2025-06-22 19:56:35 | INFO  | Task 21fb0fa9-2cae-49f8-9a8c-0c6196331088 is in state STARTED 2025-06-22 19:56:35.393559 | orchestrator | 2025-06-22 19:56:35 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:56:35.393587 | orchestrator | 2025-06-22 19:56:35 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:56:38.460245 | orchestrator | 2025-06-22 19:56:38 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:56:38.460402 | orchestrator | 2025-06-22 19:56:38 | INFO  | Task 21fb0fa9-2cae-49f8-9a8c-0c6196331088 is in state STARTED 2025-06-22 19:56:38.463429 | orchestrator | 2025-06-22 19:56:38 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:56:38.463458 | orchestrator | 2025-06-22 19:56:38 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:56:41.509993 | orchestrator | 2025-06-22 19:56:41 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:56:41.511190 | orchestrator | 2025-06-22 19:56:41 | INFO  | Task 21fb0fa9-2cae-49f8-9a8c-0c6196331088 is in state STARTED 2025-06-22 19:56:41.513509 | orchestrator | 2025-06-22 19:56:41 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:56:41.514112 | orchestrator | 2025-06-22 19:56:41 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:56:44.566986 | orchestrator | 2025-06-22 19:56:44 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:56:44.568068 | orchestrator | 2025-06-22 19:56:44 | INFO  | Task 21fb0fa9-2cae-49f8-9a8c-0c6196331088 is in state STARTED 2025-06-22 19:56:44.569898 | orchestrator | 2025-06-22 19:56:44 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:56:44.569943 | orchestrator | 2025-06-22 19:56:44 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:56:47.612974 | orchestrator | 2025-06-22 19:56:47 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:56:47.615092 | orchestrator | 2025-06-22 19:56:47 | INFO  | Task 21fb0fa9-2cae-49f8-9a8c-0c6196331088 is in state STARTED 2025-06-22 19:56:47.616208 | orchestrator | 2025-06-22 19:56:47 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:56:47.616607 | orchestrator | 2025-06-22 19:56:47 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:56:50.650846 | orchestrator | 2025-06-22 19:56:50 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:56:50.651547 | orchestrator | 2025-06-22 19:56:50 | INFO  | Task 21fb0fa9-2cae-49f8-9a8c-0c6196331088 is in state SUCCESS 2025-06-22 19:56:50.653707 | orchestrator | 2025-06-22 19:56:50 | INFO  | Task 
1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:56:50.654196 | orchestrator | 2025-06-22 19:56:50 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:56:53.690577 | orchestrator | 2025-06-22 19:56:53 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:56:53.692084 | orchestrator | 2025-06-22 19:56:53 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:56:53.692112 | orchestrator | 2025-06-22 19:56:53 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:56:56.732917 | orchestrator | 2025-06-22 19:56:56 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:56:56.733481 | orchestrator | 2025-06-22 19:56:56 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:56:56.733507 | orchestrator | 2025-06-22 19:56:56 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:56:59.767996 | orchestrator | 2025-06-22 19:56:59 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:56:59.768780 | orchestrator | 2025-06-22 19:56:59 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:56:59.768926 | orchestrator | 2025-06-22 19:56:59 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:57:02.809212 | orchestrator | 2025-06-22 19:57:02 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:57:02.809361 | orchestrator | 2025-06-22 19:57:02 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:57:02.809378 | orchestrator | 2025-06-22 19:57:02 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:57:05.839177 | orchestrator | 2025-06-22 19:57:05 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:57:05.841052 | orchestrator | 2025-06-22 19:57:05 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:57:05.841091 | orchestrator | 2025-06-22 19:57:05 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:57:08.875920 | orchestrator | 2025-06-22 19:57:08 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:57:08.877234 | orchestrator | 2025-06-22 19:57:08 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:57:08.877272 | orchestrator | 2025-06-22 19:57:08 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:57:11.906986 | orchestrator | 2025-06-22 19:57:11 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:57:11.908757 | orchestrator | 2025-06-22 19:57:11 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:57:11.908798 | orchestrator | 2025-06-22 19:57:11 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:57:14.956682 | orchestrator | 2025-06-22 19:57:14 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:57:14.959029 | orchestrator | 2025-06-22 19:57:14 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:57:14.959061 | orchestrator | 2025-06-22 19:57:14 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:57:18.004743 | orchestrator | 2025-06-22 19:57:18 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:57:18.006683 | orchestrator | 2025-06-22 19:57:18 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:57:18.007132 | orchestrator 
| 2025-06-22 19:57:18 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:57:21.056644 | orchestrator | 2025-06-22 19:57:21 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:57:21.058370 | orchestrator | 2025-06-22 19:57:21 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:57:21.058415 | orchestrator | 2025-06-22 19:57:21 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:57:24.107225 | orchestrator | 2025-06-22 19:57:24 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:57:24.108932 | orchestrator | 2025-06-22 19:57:24 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:57:24.108984 | orchestrator | 2025-06-22 19:57:24 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:57:27.162499 | orchestrator | 2025-06-22 19:57:27 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:57:27.162582 | orchestrator | 2025-06-22 19:57:27 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:57:27.162698 | orchestrator | 2025-06-22 19:57:27 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:57:30.226082 | orchestrator | 2025-06-22 19:57:30 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:57:30.227440 | orchestrator | 2025-06-22 19:57:30 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:57:30.227480 | orchestrator | 2025-06-22 19:57:30 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:57:33.271824 | orchestrator | 2025-06-22 19:57:33 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:57:33.273112 | orchestrator | 2025-06-22 19:57:33 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:57:33.273445 | orchestrator | 2025-06-22 19:57:33 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:57:36.331130 | orchestrator | 2025-06-22 19:57:36 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:57:36.334533 | orchestrator | 2025-06-22 19:57:36 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:57:36.334602 | orchestrator | 2025-06-22 19:57:36 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:57:39.386569 | orchestrator | 2025-06-22 19:57:39 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:57:39.388643 | orchestrator | 2025-06-22 19:57:39 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:57:39.389028 | orchestrator | 2025-06-22 19:57:39 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:57:42.427033 | orchestrator | 2025-06-22 19:57:42 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:57:42.428052 | orchestrator | 2025-06-22 19:57:42 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:57:42.428252 | orchestrator | 2025-06-22 19:57:42 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:57:45.475591 | orchestrator | 2025-06-22 19:57:45 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:57:45.476783 | orchestrator | 2025-06-22 19:57:45 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:57:45.476824 | orchestrator | 2025-06-22 19:57:45 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:57:48.528761 | 
orchestrator | 2025-06-22 19:57:48 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:57:48.530187 | orchestrator | 2025-06-22 19:57:48 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:57:48.530231 | orchestrator | 2025-06-22 19:57:48 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:57:51.581838 | orchestrator | 2025-06-22 19:57:51 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:57:51.583859 | orchestrator | 2025-06-22 19:57:51 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:57:51.583911 | orchestrator | 2025-06-22 19:57:51 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:57:54.621805 | orchestrator | 2025-06-22 19:57:54 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:57:54.622736 | orchestrator | 2025-06-22 19:57:54 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:57:54.622771 | orchestrator | 2025-06-22 19:57:54 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:57:57.665430 | orchestrator | 2025-06-22 19:57:57 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:57:57.667205 | orchestrator | 2025-06-22 19:57:57 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:57:57.667497 | orchestrator | 2025-06-22 19:57:57 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:58:00.715341 | orchestrator | 2025-06-22 19:58:00 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:58:00.717199 | orchestrator | 2025-06-22 19:58:00 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:58:00.717228 | orchestrator | 2025-06-22 19:58:00 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:58:03.750800 | orchestrator | 2025-06-22 19:58:03 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:58:03.752320 | orchestrator | 2025-06-22 19:58:03 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:58:03.752367 | orchestrator | 2025-06-22 19:58:03 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:58:06.800898 | orchestrator | 2025-06-22 19:58:06 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:58:06.803305 | orchestrator | 2025-06-22 19:58:06 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:58:06.804062 | orchestrator | 2025-06-22 19:58:06 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:58:09.840736 | orchestrator | 2025-06-22 19:58:09 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:58:09.842206 | orchestrator | 2025-06-22 19:58:09 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:58:09.842502 | orchestrator | 2025-06-22 19:58:09 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:58:12.889067 | orchestrator | 2025-06-22 19:58:12 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:58:12.890205 | orchestrator | 2025-06-22 19:58:12 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:58:12.890949 | orchestrator | 2025-06-22 19:58:12 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:58:15.943369 | orchestrator | 2025-06-22 19:58:15 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state 
STARTED 2025-06-22 19:58:15.945098 | orchestrator | 2025-06-22 19:58:15 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:58:15.945130 | orchestrator | 2025-06-22 19:58:15 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:58:18.981664 | orchestrator | 2025-06-22 19:58:18 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:58:18.982573 | orchestrator | 2025-06-22 19:58:18 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:58:18.982661 | orchestrator | 2025-06-22 19:58:18 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:58:22.026357 | orchestrator | 2025-06-22 19:58:22 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:58:22.027997 | orchestrator | 2025-06-22 19:58:22 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:58:22.028030 | orchestrator | 2025-06-22 19:58:22 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:58:25.075226 | orchestrator | 2025-06-22 19:58:25 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:58:25.077190 | orchestrator | 2025-06-22 19:58:25 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:58:25.077235 | orchestrator | 2025-06-22 19:58:25 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:58:28.126360 | orchestrator | 2025-06-22 19:58:28 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:58:28.126447 | orchestrator | 2025-06-22 19:58:28 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:58:28.126457 | orchestrator | 2025-06-22 19:58:28 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:58:31.189874 | orchestrator | 2025-06-22 19:58:31 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:58:31.191422 | orchestrator | 2025-06-22 19:58:31 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:58:31.192090 | orchestrator | 2025-06-22 19:58:31 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:58:34.252304 | orchestrator | 2025-06-22 19:58:34 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:58:34.252408 | orchestrator | 2025-06-22 19:58:34 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:58:34.252424 | orchestrator | 2025-06-22 19:58:34 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:58:37.298969 | orchestrator | 2025-06-22 19:58:37 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:58:37.300832 | orchestrator | 2025-06-22 19:58:37 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:58:37.300908 | orchestrator | 2025-06-22 19:58:37 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:58:40.344800 | orchestrator | 2025-06-22 19:58:40 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:58:40.346170 | orchestrator | 2025-06-22 19:58:40 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:58:40.346216 | orchestrator | 2025-06-22 19:58:40 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:58:43.383745 | orchestrator | 2025-06-22 19:58:43 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:58:43.384842 | orchestrator | 2025-06-22 19:58:43 | INFO  | Task 
1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:58:43.385342 | orchestrator | 2025-06-22 19:58:43 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:58:46.443016 | orchestrator | 2025-06-22 19:58:46 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:58:46.445180 | orchestrator | 2025-06-22 19:58:46 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:58:46.445303 | orchestrator | 2025-06-22 19:58:46 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:58:49.497539 | orchestrator | 2025-06-22 19:58:49 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:58:49.498483 | orchestrator | 2025-06-22 19:58:49 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:58:49.498516 | orchestrator | 2025-06-22 19:58:49 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:58:52.545841 | orchestrator | 2025-06-22 19:58:52 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:58:52.546539 | orchestrator | 2025-06-22 19:58:52 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:58:52.546584 | orchestrator | 2025-06-22 19:58:52 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:58:55.592553 | orchestrator | 2025-06-22 19:58:55 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:58:55.595804 | orchestrator | 2025-06-22 19:58:55 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:58:55.595876 | orchestrator | 2025-06-22 19:58:55 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:58:58.643787 | orchestrator | 2025-06-22 19:58:58 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:58:58.645159 | orchestrator | 2025-06-22 19:58:58 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:58:58.645788 | orchestrator | 2025-06-22 19:58:58 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:59:01.686387 | orchestrator | 2025-06-22 19:59:01 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:59:01.686482 | orchestrator | 2025-06-22 19:59:01 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:59:01.686497 | orchestrator | 2025-06-22 19:59:01 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:59:04.737951 | orchestrator | 2025-06-22 19:59:04 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:59:04.738710 | orchestrator | 2025-06-22 19:59:04 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state STARTED 2025-06-22 19:59:04.738839 | orchestrator | 2025-06-22 19:59:04 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:59:07.799043 | orchestrator | 2025-06-22 19:59:07 | INFO  | Task c203ef30-6952-492e-b47a-51e32726175b is in state STARTED 2025-06-22 19:59:07.800853 | orchestrator | 2025-06-22 19:59:07 | INFO  | Task 38d54694-95db-4844-bb72-8967e27ceda1 is in state STARTED 2025-06-22 19:59:07.803311 | orchestrator | 2025-06-22 19:59:07 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:59:07.815268 | orchestrator | 2025-06-22 19:59:07 | INFO  | Task 1fd8f9b0-809b-47a4-a8ca-1cea0e026258 is in state SUCCESS 2025-06-22 19:59:07.817987 | orchestrator | 2025-06-22 19:59:07.818202 | orchestrator | None 2025-06-22 19:59:07.818648 | orchestrator | 2025-06-22 
19:59:07.818713 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-22 19:59:07.818736 | orchestrator | 2025-06-22 19:59:07.818755 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-22 19:59:07.818773 | orchestrator | Sunday 22 June 2025 19:52:45 +0000 (0:00:00.671) 0:00:00.671 *********** 2025-06-22 19:59:07.818789 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:07.818811 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:07.818831 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:07.818849 | orchestrator | 2025-06-22 19:59:07.818869 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-22 19:59:07.818885 | orchestrator | Sunday 22 June 2025 19:52:46 +0000 (0:00:00.841) 0:00:01.512 *********** 2025-06-22 19:59:07.818903 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2025-06-22 19:59:07.818920 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2025-06-22 19:59:07.818938 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2025-06-22 19:59:07.818956 | orchestrator | 2025-06-22 19:59:07.818973 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2025-06-22 19:59:07.818991 | orchestrator | 2025-06-22 19:59:07.819009 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-06-22 19:59:07.819025 | orchestrator | Sunday 22 June 2025 19:52:48 +0000 (0:00:01.412) 0:00:02.925 *********** 2025-06-22 19:59:07.819043 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:59:07.819062 | orchestrator | 2025-06-22 19:59:07.819080 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2025-06-22 19:59:07.819096 | orchestrator | Sunday 22 June 2025 19:52:49 +0000 (0:00:01.125) 0:00:04.051 *********** 2025-06-22 19:59:07.819116 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:07.819132 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:07.819288 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:07.819306 | orchestrator | 2025-06-22 19:59:07.819323 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-06-22 19:59:07.819339 | orchestrator | Sunday 22 June 2025 19:52:50 +0000 (0:00:01.291) 0:00:05.342 *********** 2025-06-22 19:59:07.819355 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:59:07.819374 | orchestrator | 2025-06-22 19:59:07.819391 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2025-06-22 19:59:07.819407 | orchestrator | Sunday 22 June 2025 19:52:52 +0000 (0:00:01.912) 0:00:07.255 *********** 2025-06-22 19:59:07.819424 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:07.819440 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:07.819457 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:07.819474 | orchestrator | 2025-06-22 19:59:07.819491 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2025-06-22 19:59:07.819508 | orchestrator | Sunday 22 June 2025 19:52:53 +0000 (0:00:01.187) 0:00:08.442 *********** 2025-06-22 19:59:07.819523 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 
'value': 1}) 2025-06-22 19:59:07.819540 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-06-22 19:59:07.819557 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-06-22 19:59:07.819573 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-06-22 19:59:07.819591 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-06-22 19:59:07.819608 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-06-22 19:59:07.819625 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-06-22 19:59:07.819643 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-06-22 19:59:07.819678 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-06-22 19:59:07.819695 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-06-22 19:59:07.819711 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-06-22 19:59:07.819728 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-06-22 19:59:07.819745 | orchestrator | 2025-06-22 19:59:07.820139 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-06-22 19:59:07.820176 | orchestrator | Sunday 22 June 2025 19:52:58 +0000 (0:00:04.792) 0:00:13.234 *********** 2025-06-22 19:59:07.820257 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-06-22 19:59:07.820279 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-06-22 19:59:07.820297 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-06-22 19:59:07.820315 | orchestrator | 2025-06-22 19:59:07.820332 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-06-22 19:59:07.820350 | orchestrator | Sunday 22 June 2025 19:52:59 +0000 (0:00:01.095) 0:00:14.330 *********** 2025-06-22 19:59:07.820367 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-06-22 19:59:07.820387 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-06-22 19:59:07.820405 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-06-22 19:59:07.820424 | orchestrator | 2025-06-22 19:59:07.820443 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-06-22 19:59:07.820461 | orchestrator | Sunday 22 June 2025 19:53:01 +0000 (0:00:02.019) 0:00:16.349 *********** 2025-06-22 19:59:07.820480 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2025-06-22 19:59:07.820499 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.820554 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2025-06-22 19:59:07.820575 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.820683 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2025-06-22 19:59:07.820700 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.820716 | orchestrator | 2025-06-22 19:59:07.820731 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2025-06-22 19:59:07.820748 | orchestrator | Sunday 22 June 2025 19:53:02 +0000 
(0:00:01.231) 0:00:17.581 *********** 2025-06-22 19:59:07.820769 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-06-22 19:59:07.820826 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-06-22 19:59:07.821016 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-06-22 19:59:07.821051 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-22 19:59:07.821068 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-22 19:59:07.821103 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 
'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-22 19:59:07.821117 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-22 19:59:07.821133 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-22 19:59:07.821147 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-22 19:59:07.821171 | orchestrator | 2025-06-22 19:59:07.821186 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2025-06-22 19:59:07.821200 | orchestrator | Sunday 22 June 2025 19:53:05 +0000 (0:00:02.624) 0:00:20.205 *********** 2025-06-22 19:59:07.821214 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:07.821260 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:07.821274 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:07.821288 | orchestrator | 2025-06-22 19:59:07.821301 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2025-06-22 19:59:07.821314 | orchestrator | Sunday 22 June 2025 19:53:06 +0000 (0:00:00.975) 0:00:21.180 *********** 2025-06-22 19:59:07.821329 | orchestrator | changed: [testbed-node-0] => (item=users) 2025-06-22 19:59:07.821342 | orchestrator | changed: [testbed-node-1] => (item=users) 2025-06-22 19:59:07.821355 | orchestrator | changed: [testbed-node-2] => (item=users) 2025-06-22 19:59:07.821367 | orchestrator | changed: [testbed-node-0] => (item=rules) 2025-06-22 19:59:07.821380 | orchestrator | changed: [testbed-node-1] => (item=rules) 2025-06-22 19:59:07.821393 | orchestrator | changed: [testbed-node-2] => (item=rules) 
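The sysctl and module-load tasks above apply a small, fixed set of kernel settings on every loadbalancer node before any container configuration is written: ip_nonlocal_bind for IPv4 and IPv6, net.unix.max_dgram_qlen=128, and the ip_vs module loaded and persisted via modules-load.d (net.ipv4.tcp_retries2 reports "ok" because it is left at KOLLA_UNSET in this run). The following is a minimal standalone sketch of equivalent steps, not the kolla-ansible role itself; the host group and the modules-load.d path are illustrative, and it assumes the ansible.posix and community.general collections are installed:

- hosts: loadbalancer           # illustrative group name
  become: true
  tasks:
    - name: Apply kernel parameters seen in the log above
      ansible.posix.sysctl:
        name: "{{ item.name }}"
        value: "{{ item.value }}"
        sysctl_set: true
        state: present
        reload: true
      loop:
        - { name: net.ipv6.ip_nonlocal_bind, value: 1 }
        - { name: net.ipv4.ip_nonlocal_bind, value: 1 }
        - { name: net.unix.max_dgram_qlen, value: 128 }
      # net.ipv4.tcp_retries2 is intentionally omitted: the run above leaves it at KOLLA_UNSET.

    - name: Load ip_vs, used by keepalived for virtual IP handling
      community.general.modprobe:
        name: ip_vs
        state: present

    - name: Persist ip_vs across reboots via modules-load.d
      ansible.builtin.copy:
        dest: /etc/modules-load.d/ip_vs.conf   # illustrative filename
        content: "ip_vs\n"
        mode: "0644"

Enabling ip_nonlocal_bind lets haproxy and keepalived bind the virtual IP even on nodes that do not currently hold it, which is why these settings are applied before the haproxy, proxysql and keepalived container configuration in the tasks that follow.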
2025-06-22 19:59:07.821405 | orchestrator | 2025-06-22 19:59:07.821419 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2025-06-22 19:59:07.821433 | orchestrator | Sunday 22 June 2025 19:53:09 +0000 (0:00:03.159) 0:00:24.339 *********** 2025-06-22 19:59:07.821689 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:07.821713 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:07.821727 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:07.821742 | orchestrator | 2025-06-22 19:59:07.821758 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2025-06-22 19:59:07.821770 | orchestrator | Sunday 22 June 2025 19:53:11 +0000 (0:00:01.686) 0:00:26.026 *********** 2025-06-22 19:59:07.821785 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:07.821799 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:07.821813 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:07.821828 | orchestrator | 2025-06-22 19:59:07.821872 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2025-06-22 19:59:07.821887 | orchestrator | Sunday 22 June 2025 19:53:12 +0000 (0:00:01.600) 0:00:27.630 *********** 2025-06-22 19:59:07.821902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-22 19:59:07.821943 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 19:59:07.821961 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 19:59:07.821993 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-22 19:59:07.822008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250530', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__828b3df84a2f66073e7a9f702383fc700b559547', '__omit_place_holder__828b3df84a2f66073e7a9f702383fc700b559547'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-06-22 19:59:07.822073 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.822089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 19:59:07.822149 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 19:59:07.822185 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250530', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__828b3df84a2f66073e7a9f702383fc700b559547', '__omit_place_holder__828b3df84a2f66073e7a9f702383fc700b559547'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-06-22 19:59:07.822200 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.822239 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-22 19:59:07.822265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 19:59:07.822280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 19:59:07.822294 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250530', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__828b3df84a2f66073e7a9f702383fc700b559547', '__omit_place_holder__828b3df84a2f66073e7a9f702383fc700b559547'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-06-22 19:59:07.822309 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.822322 | orchestrator | 2025-06-22 19:59:07.822454 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2025-06-22 19:59:07.822472 | orchestrator | Sunday 22 June 2025 19:53:14 +0000 (0:00:01.307) 0:00:28.937 *********** 2025-06-22 19:59:07.822486 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-06-22 19:59:07.822580 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-06-22 19:59:07.822598 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-06-22 19:59:07.822625 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-22 19:59:07.822677 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 19:59:07.822695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250530', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__828b3df84a2f66073e7a9f702383fc700b559547', '__omit_place_holder__828b3df84a2f66073e7a9f702383fc700b559547'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-06-22 19:59:07.822708 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-22 19:59:07.822722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 19:59:07.822747 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250530', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__828b3df84a2f66073e7a9f702383fc700b559547', '__omit_place_holder__828b3df84a2f66073e7a9f702383fc700b559547'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-06-22 19:59:07.822774 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-22 19:59:07.822789 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 19:59:07.822882 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250530', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__828b3df84a2f66073e7a9f702383fc700b559547', '__omit_place_holder__828b3df84a2f66073e7a9f702383fc700b559547'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen 
sshd 2985'], 'timeout': '30'}}})  2025-06-22 19:59:07.822904 | orchestrator | 2025-06-22 19:59:07.822918 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2025-06-22 19:59:07.822932 | orchestrator | Sunday 22 June 2025 19:53:18 +0000 (0:00:03.971) 0:00:32.909 *********** 2025-06-22 19:59:07.822946 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-06-22 19:59:07.823045 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-06-22 19:59:07.823082 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-06-22 19:59:07.823110 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-22 19:59:07.823126 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-22 19:59:07.823140 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-22 19:59:07.823153 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-22 19:59:07.823167 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-22 19:59:07.823181 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-22 19:59:07.823270 | orchestrator | 2025-06-22 19:59:07.823288 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2025-06-22 19:59:07.823301 | orchestrator | Sunday 22 June 2025 19:53:21 +0000 (0:00:03.333) 0:00:36.242 *********** 2025-06-22 19:59:07.823316 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-06-22 19:59:07.826173 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-06-22 19:59:07.826343 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-06-22 19:59:07.826359 | orchestrator | 2025-06-22 19:59:07.826372 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2025-06-22 19:59:07.826384 | orchestrator | Sunday 22 June 2025 19:53:23 +0000 (0:00:01.845) 
0:00:38.088 *********** 2025-06-22 19:59:07.826395 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-06-22 19:59:07.826407 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-06-22 19:59:07.826418 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-06-22 19:59:07.826429 | orchestrator | 2025-06-22 19:59:07.826440 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2025-06-22 19:59:07.826451 | orchestrator | Sunday 22 June 2025 19:53:29 +0000 (0:00:05.904) 0:00:43.992 *********** 2025-06-22 19:59:07.826463 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.826474 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.826485 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.826500 | orchestrator | 2025-06-22 19:59:07.826519 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2025-06-22 19:59:07.826537 | orchestrator | Sunday 22 June 2025 19:53:30 +0000 (0:00:01.300) 0:00:45.292 *********** 2025-06-22 19:59:07.826556 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-06-22 19:59:07.826578 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-06-22 19:59:07.826596 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-06-22 19:59:07.826612 | orchestrator | 2025-06-22 19:59:07.826624 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2025-06-22 19:59:07.826642 | orchestrator | Sunday 22 June 2025 19:53:32 +0000 (0:00:02.184) 0:00:47.477 *********** 2025-06-22 19:59:07.826659 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-06-22 19:59:07.826678 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-06-22 19:59:07.826697 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-06-22 19:59:07.826716 | orchestrator | 2025-06-22 19:59:07.826735 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2025-06-22 19:59:07.826748 | orchestrator | Sunday 22 June 2025 19:53:34 +0000 (0:00:01.941) 0:00:49.419 *********** 2025-06-22 19:59:07.826760 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2025-06-22 19:59:07.826771 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2025-06-22 19:59:07.826782 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2025-06-22 19:59:07.826864 | orchestrator | 2025-06-22 19:59:07.826885 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2025-06-22 19:59:07.826905 | orchestrator | Sunday 22 June 2025 19:53:36 +0000 (0:00:01.583) 0:00:51.003 *********** 2025-06-22 19:59:07.826924 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2025-06-22 19:59:07.826966 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2025-06-22 19:59:07.826984 | orchestrator 
| changed: [testbed-node-2] => (item=haproxy-internal.pem) 2025-06-22 19:59:07.826995 | orchestrator | 2025-06-22 19:59:07.827008 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-06-22 19:59:07.827079 | orchestrator | Sunday 22 June 2025 19:53:37 +0000 (0:00:01.614) 0:00:52.617 *********** 2025-06-22 19:59:07.827100 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:59:07.827118 | orchestrator | 2025-06-22 19:59:07.827131 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2025-06-22 19:59:07.827142 | orchestrator | Sunday 22 June 2025 19:53:38 +0000 (0:00:00.877) 0:00:53.494 *********** 2025-06-22 19:59:07.827159 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-06-22 19:59:07.827239 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-06-22 19:59:07.827257 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-06-22 19:59:07.827269 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-22 19:59:07.827281 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-22 19:59:07.827306 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-22 19:59:07.827326 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-22 19:59:07.827347 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-22 19:59:07.827386 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-22 19:59:07.827408 | orchestrator | 2025-06-22 19:59:07.827428 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2025-06-22 19:59:07.827448 | orchestrator | Sunday 22 June 2025 19:53:42 +0000 (0:00:03.704) 0:00:57.198 *********** 2025-06-22 19:59:07.827468 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-22 19:59:07.827481 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 19:59:07.827502 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 19:59:07.827567 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.827587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-22 19:59:07.827608 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 19:59:07.827645 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 19:59:07.827659 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.827670 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-22 19:59:07.827682 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 19:59:07.827694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 19:59:07.827714 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.827726 | orchestrator | 2025-06-22 19:59:07.827737 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2025-06-22 19:59:07.827748 | orchestrator | Sunday 22 June 2025 19:53:43 +0000 (0:00:00.623) 0:00:57.822 *********** 2025-06-22 19:59:07.827760 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-22 19:59:07.827772 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 19:59:07.827795 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 19:59:07.827807 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.827819 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-22 19:59:07.827830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 19:59:07.827848 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 19:59:07.827860 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.827872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-22 19:59:07.827883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 19:59:07.827895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 19:59:07.827906 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.827917 | orchestrator | 2025-06-22 19:59:07.827928 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-06-22 19:59:07.827940 | orchestrator | Sunday 22 June 2025 19:53:44 +0000 (0:00:01.135) 0:00:58.958 *********** 2025-06-22 19:59:07.827963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-22 19:59:07.827976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 19:59:07.827999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 19:59:07.828011 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.828022 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-22 19:59:07.828034 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 19:59:07.828046 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 19:59:07.828057 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.828080 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-22 19:59:07.828092 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  
2025-06-22 19:59:07.828109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 19:59:07.828121 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.828132 | orchestrator | 2025-06-22 19:59:07.828143 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-06-22 19:59:07.828154 | orchestrator | Sunday 22 June 2025 19:53:44 +0000 (0:00:00.638) 0:00:59.596 *********** 2025-06-22 19:59:07.828166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-22 19:59:07.828177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 19:59:07.828189 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 19:59:07.828201 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.828212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-22 19:59:07.828264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-22 19:59:07.828284 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 19:59:07.828296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 19:59:07.828307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 19:59:07.828319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 19:59:07.828330 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.828341 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.828353 | orchestrator | 2025-06-22 19:59:07.828364 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-06-22 
19:59:07.828375 | orchestrator | Sunday 22 June 2025 19:53:45 +0000 (0:00:00.554) 0:01:00.151 *********** 2025-06-22 19:59:07.828386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-22 19:59:07.828411 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 19:59:07.828429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 19:59:07.828441 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.828452 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-22 19:59:07.828464 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 19:59:07.828476 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 19:59:07.828487 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.828498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-22 19:59:07.828520 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 19:59:07.828539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 19:59:07.828550 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.828561 | orchestrator | 2025-06-22 19:59:07.828572 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2025-06-22 19:59:07.828584 | orchestrator | Sunday 22 June 2025 19:53:46 +0000 (0:00:01.480) 0:01:01.631 *********** 2025-06-22 19:59:07.828595 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-22 19:59:07.828607 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 19:59:07.828618 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 19:59:07.828630 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.828641 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-22 19:59:07.828653 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 19:59:07.828681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 19:59:07.828693 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-22 19:59:07.828705 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.828716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 19:59:07.828728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 19:59:07.828739 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.828750 | orchestrator | 2025-06-22 19:59:07.828761 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2025-06-22 19:59:07.828772 | orchestrator | Sunday 22 June 2025 19:53:47 +0000 (0:00:00.580) 0:01:02.212 *********** 2025-06-22 19:59:07.828784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-22 19:59:07.828795 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 19:59:07.828826 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 19:59:07.828838 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.828849 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-22 19:59:07.828861 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 19:59:07.828873 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 19:59:07.828884 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.828896 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-22 19:59:07.828907 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 19:59:07.828925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 19:59:07.828936 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.828947 | orchestrator | 2025-06-22 19:59:07.828958 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2025-06-22 19:59:07.828980 | orchestrator | Sunday 22 June 2025 19:53:48 +0000 (0:00:00.649) 0:01:02.861 *********** 2025-06-22 19:59:07.828992 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-06-22 19:59:07.829003 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 19:59:07.829021 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 19:59:07.829041 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.829059 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-06-22 19:59:07.829077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 19:59:07.829110 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 19:59:07.829128 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.829163 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-06-22 19:59:07.829184 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-06-22 19:59:07.829204 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-06-22 19:59:07.829254 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.829269 | orchestrator | 2025-06-22 19:59:07.829288 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2025-06-22 19:59:07.829301 | orchestrator | Sunday 22 June 2025 19:53:49 +0000 (0:00:01.437) 0:01:04.298 *********** 2025-06-22 19:59:07.829313 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-06-22 19:59:07.829324 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-06-22 19:59:07.829335 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-06-22 19:59:07.829346 | orchestrator | 2025-06-22 19:59:07.829357 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2025-06-22 19:59:07.829368 | orchestrator | Sunday 22 June 2025 19:53:51 +0000 (0:00:01.524) 0:01:05.823 *********** 2025-06-22 19:59:07.829379 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-06-22 19:59:07.829390 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-06-22 19:59:07.829410 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-06-22 19:59:07.829421 | orchestrator | 2025-06-22 19:59:07.829432 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2025-06-22 19:59:07.829443 | orchestrator | Sunday 22 June 2025 19:53:53 +0000 (0:00:01.968) 0:01:07.792 *********** 2025-06-22 19:59:07.829454 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-06-22 19:59:07.829465 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-06-22 19:59:07.829476 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-06-22 19:59:07.829487 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.829498 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-06-22 19:59:07.829509 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-06-22 19:59:07.829520 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.829531 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-06-22 19:59:07.829542 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.829553 | orchestrator | 2025-06-22 19:59:07.829564 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2025-06-22 19:59:07.829575 | orchestrator | Sunday 22 June 2025 19:53:55 +0000 (0:00:02.503) 0:01:10.296 *********** 2025-06-22 19:59:07.829601 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-06-22 19:59:07.829614 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-06-22 19:59:07.829626 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250530', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-06-22 19:59:07.829638 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-22 19:59:07.829657 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-22 19:59:07.829669 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250530', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-06-22 19:59:07.829680 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-22 19:59:07.829704 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-22 19:59:07.829716 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250530', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-06-22 19:59:07.829728 | orchestrator | 2025-06-22 19:59:07.829739 | orchestrator | TASK [include_role : aodh] ***************************************************** 2025-06-22 19:59:07.829750 | orchestrator | Sunday 22 June 2025 19:53:58 +0000 (0:00:03.052) 0:01:13.348 *********** 2025-06-22 19:59:07.829761 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:59:07.829772 | orchestrator | 2025-06-22 19:59:07.829783 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2025-06-22 19:59:07.829794 | orchestrator | Sunday 22 June 2025 19:53:59 +0000 (0:00:00.686) 0:01:14.035 *********** 2025-06-22 19:59:07.829806 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-06-22 19:59:07.829825 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-22 19:59:07.829838 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.829849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.829872 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-06-22 19:59:07.829884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-22 19:59:07.829902 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.829914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.829926 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-06-22 19:59:07.829937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-22 19:59:07.829955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.829994 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250530', 'volumes': 
['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.830013 | orchestrator | 2025-06-22 19:59:07.830066 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2025-06-22 19:59:07.830077 | orchestrator | Sunday 22 June 2025 19:54:04 +0000 (0:00:05.264) 0:01:19.299 *********** 2025-06-22 19:59:07.830089 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-06-22 19:59:07.830100 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-22 19:59:07.830112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.830123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.830135 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.830161 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-06-22 19:59:07.830173 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-22 19:59:07.830191 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.830203 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.830254 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.830268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 
'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-06-22 19:59:07.830280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-06-22 19:59:07.830303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.830315 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250530', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.830333 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.830344 | orchestrator | 2025-06-22 19:59:07.830356 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2025-06-22 19:59:07.830367 | orchestrator | Sunday 22 June 2025 19:54:05 +0000 (0:00:01.318) 0:01:20.618 *********** 2025-06-22 19:59:07.830378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-06-22 19:59:07.830390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-06-22 19:59:07.830402 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-06-22 19:59:07.830415 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.830425 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-06-22 19:59:07.830436 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.830448 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 
'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-06-22 19:59:07.830459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-06-22 19:59:07.830471 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.830482 | orchestrator | 2025-06-22 19:59:07.830493 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2025-06-22 19:59:07.830504 | orchestrator | Sunday 22 June 2025 19:54:07 +0000 (0:00:01.598) 0:01:22.216 *********** 2025-06-22 19:59:07.830515 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:07.830526 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:07.830536 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:07.830547 | orchestrator | 2025-06-22 19:59:07.830558 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2025-06-22 19:59:07.830569 | orchestrator | Sunday 22 June 2025 19:54:09 +0000 (0:00:01.585) 0:01:23.802 *********** 2025-06-22 19:59:07.830580 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:07.830591 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:07.830602 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:07.830613 | orchestrator | 2025-06-22 19:59:07.830624 | orchestrator | TASK [include_role : barbican] ************************************************* 2025-06-22 19:59:07.830635 | orchestrator | Sunday 22 June 2025 19:54:11 +0000 (0:00:02.825) 0:01:26.628 *********** 2025-06-22 19:59:07.830646 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:59:07.830657 | orchestrator | 2025-06-22 19:59:07.830668 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2025-06-22 19:59:07.830679 | orchestrator | Sunday 22 June 2025 19:54:13 +0000 (0:00:01.335) 0:01:27.963 *********** 2025-06-22 19:59:07.830707 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-22 19:59:07.830727 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.830739 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.830751 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-22 19:59:07.830762 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.830786 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 
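[editor's illustration] The "Copying over barbican haproxy config" task above only renders load-balancer configuration for service entries that are enabled and carry a 'haproxy' mapping, which is why barbican-api reports "changed" on every node while barbican-keystone-listener and barbican-worker report "skipping". A minimal Python sketch of that selection, using a trimmed-down service map modelled on the dictionaries echoed in the log (an illustration only, not the kolla-ansible role itself; select_haproxy_services is a hypothetical helper):

# Trimmed-down service map based on the items printed above.
services = {
    "barbican-api": {
        "enabled": True,
        "haproxy": {
            "barbican_api": {"enabled": "yes", "mode": "http",
                             "external": False, "port": "9311"},
            "barbican_api_external": {"enabled": "yes", "mode": "http",
                                      "external": True,
                                      "external_fqdn": "api.testbed.osism.xyz",
                                      "port": "9311"},
        },
    },
    "barbican-keystone-listener": {"enabled": True},  # no 'haproxy' key -> skipped
    "barbican-worker": {"enabled": True},             # no 'haproxy' key -> skipped
}

def select_haproxy_services(service_map):
    """Keep only the services that would get HAProxy frontends rendered."""
    return {name: svc for name, svc in service_map.items()
            if svc.get("enabled") and svc.get("haproxy")}

for name, svc in select_haproxy_services(services).items():
    for frontend, cfg in svc["haproxy"].items():
        scope = "external" if cfg["external"] else "internal"
        print(f"{name}: {frontend} ({scope}) on port {cfg['port']}")

Running the sketch lists the barbican_api (internal) and barbican_api_external (external, api.testbed.osism.xyz) frontends on port 9311, mirroring the changed/skipping split visible in the task output.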
2025-06-22 19:59:07.830805 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.830817 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.830828 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.830840 | orchestrator | 2025-06-22 19:59:07.830851 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2025-06-22 19:59:07.830863 | orchestrator | Sunday 22 June 2025 19:54:19 +0000 (0:00:05.928) 0:01:33.891 *********** 2025-06-22 19:59:07.830875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-22 19:59:07.830886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.830934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.830947 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.830958 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-22 19:59:07.830970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.830982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.830993 | orchestrator | skipping: [testbed-node-1] 
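[editor's illustration] The healthcheck dictionaries repeated throughout these items follow two patterns: API containers probe their own HTTP endpoint with healthcheck_curl against the node address and service port, while worker and listener containers only verify their message-queue connection with healthcheck_port on 5672. A small sketch of how such a dictionary could be assembled, assuming a hypothetical build_healthcheck helper (not part of kolla-ansible) and reusing values taken from the log:

def build_healthcheck(service, api_url=None):
    # Hypothetical helper: API containers curl their endpoint, all other
    # containers just check the service's 5672 (RabbitMQ) connection.
    test = (["CMD-SHELL", f"healthcheck_curl {api_url}"] if api_url
            else ["CMD-SHELL", f"healthcheck_port {service} 5672"])
    return {"interval": "30", "retries": "3", "start_period": "5",
            "test": test, "timeout": "30"}

print(build_healthcheck("barbican-api", "http://192.168.16.11:9311"))
print(build_healthcheck("barbican-worker"))

The two printed dictionaries match the healthcheck blocks shown for barbican-api and barbican-worker in the items above.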
2025-06-22 19:59:07.831005 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-22 19:59:07.831047 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.831060 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.831072 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.831083 | orchestrator | 2025-06-22 19:59:07.831094 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2025-06-22 19:59:07.831106 | orchestrator | Sunday 22 June 2025 19:54:20 +0000 (0:00:01.188) 0:01:35.080 *********** 2025-06-22 19:59:07.831117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-06-22 19:59:07.831128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-06-22 19:59:07.831140 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.831152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-06-22 19:59:07.831163 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-06-22 19:59:07.831174 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.831185 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-06-22 19:59:07.831196 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-06-22 19:59:07.831207 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.831234 | orchestrator | 2025-06-22 19:59:07.831245 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2025-06-22 19:59:07.831257 | orchestrator | Sunday 22 June 2025 19:54:21 +0000 (0:00:01.135) 0:01:36.215 *********** 2025-06-22 19:59:07.831314 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:07.831326 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:07.831344 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:07.831355 | orchestrator | 2025-06-22 19:59:07.831366 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2025-06-22 19:59:07.831390 | orchestrator | Sunday 22 June 2025 19:54:23 +0000 (0:00:02.476) 0:01:38.692 *********** 2025-06-22 19:59:07.831401 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:07.831412 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:07.831423 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:07.831434 | orchestrator | 2025-06-22 19:59:07.831445 | orchestrator | TASK [include_role : blazar] *************************************************** 2025-06-22 19:59:07.831456 | orchestrator | Sunday 22 June 2025 19:54:26 +0000 (0:00:02.196) 0:01:40.889 *********** 2025-06-22 19:59:07.831467 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.831478 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.831488 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.831499 | orchestrator | 2025-06-22 19:59:07.831510 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2025-06-22 19:59:07.831521 | orchestrator | Sunday 22 June 2025 19:54:26 +0000 (0:00:00.328) 0:01:41.217 *********** 2025-06-22 19:59:07.831532 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:59:07.831543 | orchestrator | 2025-06-22 19:59:07.831554 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2025-06-22 19:59:07.831565 | orchestrator | Sunday 22 June 2025 19:54:28 +0000 (0:00:01.530) 0:01:42.747 *********** 2025-06-22 19:59:07.831603 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 
'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-06-22 19:59:07.831618 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-06-22 19:59:07.831630 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-06-22 19:59:07.831651 | orchestrator | 2025-06-22 19:59:07.831662 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2025-06-22 19:59:07.831673 | orchestrator | Sunday 22 June 2025 19:54:31 +0000 (0:00:03.063) 0:01:45.811 *********** 2025-06-22 19:59:07.831684 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-06-22 19:59:07.831696 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.831707 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-06-22 19:59:07.831718 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.831753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-06-22 19:59:07.831766 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.831777 | orchestrator | 2025-06-22 19:59:07.831788 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-06-22 19:59:07.831799 | orchestrator | Sunday 22 June 2025 19:54:32 +0000 (0:00:01.165) 0:01:46.977 *********** 2025-06-22 19:59:07.831812 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-06-22 19:59:07.831825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-06-22 19:59:07.831845 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.831857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-06-22 19:59:07.831869 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-06-22 19:59:07.831881 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.831892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-06-22 19:59:07.831903 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-06-22 19:59:07.831915 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.831926 | orchestrator | 2025-06-22 19:59:07.831936 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2025-06-22 19:59:07.831960 | orchestrator | Sunday 22 June 2025 19:54:33 +0000 (0:00:01.437) 0:01:48.414 *********** 2025-06-22 19:59:07.831972 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.831983 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.831993 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.832004 | orchestrator | 2025-06-22 19:59:07.832015 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2025-06-22 19:59:07.832026 | orchestrator | Sunday 22 June 2025 19:54:34 +0000 (0:00:00.771) 0:01:49.186 *********** 2025-06-22 19:59:07.832037 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.832048 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.832059 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.832082 | orchestrator | 2025-06-22 19:59:07.832093 | orchestrator | TASK [include_role : cinder] *************************************************** 2025-06-22 19:59:07.832129 | orchestrator | Sunday 22 June 2025 19:54:35 +0000 (0:00:01.225) 0:01:50.411 *********** 2025-06-22 19:59:07.832141 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:59:07.832153 | orchestrator | 2025-06-22 19:59:07.832164 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2025-06-22 19:59:07.832175 | orchestrator | Sunday 22 June 2025 19:54:36 +0000 (0:00:00.748) 0:01:51.159 *********** 2025-06-22 19:59:07.832186 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-22 19:59:07.832206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.832238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.832251 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.832292 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-22 19:59:07.832304 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.832324 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.832335 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.832347 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-22 19:59:07.832358 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.832394 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.832413 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.832425 | orchestrator | 2025-06-22 19:59:07.832436 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2025-06-22 19:59:07.832447 | orchestrator | Sunday 22 June 2025 19:54:39 +0000 (0:00:03.465) 0:01:54.625 *********** 2025-06-22 19:59:07.832459 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-22 19:59:07.832470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.832495 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.832531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.832550 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.832562 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-22 19:59:07.832573 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.832585 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.832597 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.832608 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.832656 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-22 19:59:07.832677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.832689 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.832701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.832728 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.832740 | orchestrator | 2025-06-22 19:59:07.832751 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2025-06-22 19:59:07.832762 | orchestrator | Sunday 22 June 2025 19:54:41 +0000 (0:00:01.147) 0:01:55.772 *********** 2025-06-22 19:59:07.832773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-22 19:59:07.832785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-22 19:59:07.832796 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.832807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-22 19:59:07.832819 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-22 19:59:07.832837 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.832889 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-22 19:59:07.832903 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 
'listen_port': '8776', 'tls_backend': 'no'}})  2025-06-22 19:59:07.832914 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.832925 | orchestrator | 2025-06-22 19:59:07.832936 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2025-06-22 19:59:07.832947 | orchestrator | Sunday 22 June 2025 19:54:41 +0000 (0:00:00.850) 0:01:56.623 *********** 2025-06-22 19:59:07.832958 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:07.832969 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:07.832980 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:07.833004 | orchestrator | 2025-06-22 19:59:07.833015 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2025-06-22 19:59:07.833026 | orchestrator | Sunday 22 June 2025 19:54:43 +0000 (0:00:01.322) 0:01:57.946 *********** 2025-06-22 19:59:07.833037 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:07.833048 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:07.833059 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:07.833070 | orchestrator | 2025-06-22 19:59:07.833081 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2025-06-22 19:59:07.833092 | orchestrator | Sunday 22 June 2025 19:54:45 +0000 (0:00:01.965) 0:01:59.911 *********** 2025-06-22 19:59:07.833103 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.833114 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.833124 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.833135 | orchestrator | 2025-06-22 19:59:07.833147 | orchestrator | TASK [include_role : cyborg] *************************************************** 2025-06-22 19:59:07.833157 | orchestrator | Sunday 22 June 2025 19:54:45 +0000 (0:00:00.396) 0:02:00.308 *********** 2025-06-22 19:59:07.833168 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.833179 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.833190 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.833200 | orchestrator | 2025-06-22 19:59:07.833212 | orchestrator | TASK [include_role : designate] ************************************************ 2025-06-22 19:59:07.833275 | orchestrator | Sunday 22 June 2025 19:54:45 +0000 (0:00:00.275) 0:02:00.584 *********** 2025-06-22 19:59:07.833287 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:59:07.833298 | orchestrator | 2025-06-22 19:59:07.833309 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2025-06-22 19:59:07.833320 | orchestrator | Sunday 22 June 2025 19:54:46 +0000 (0:00:00.728) 0:02:01.312 *********** 2025-06-22 19:59:07.833332 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 
'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-22 19:59:07.833352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-22 19:59:07.833364 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.833412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.833427 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.833439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.833451 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': 
{'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.833462 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-22 19:59:07.833481 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-22 19:59:07.833516 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-22 19:59:07.833528 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.833540 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.833552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-22 19:59:07.833570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.833581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.833619 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.833631 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 
'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.833643 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.833655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.833666 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.833684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.833695 | orchestrator | 2025-06-22 19:59:07.833706 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2025-06-22 19:59:07.833717 | orchestrator | Sunday 22 June 2025 19:54:50 +0000 (0:00:04.008) 0:02:05.321 *********** 2025-06-22 19:59:07.833750 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-22 19:59:07.833763 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-22 19:59:07.833775 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.833795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.833815 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.833826 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.833866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-22 19:59:07.833879 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.833889 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-22 19:59:07.833899 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.833910 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.833927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 
'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.833937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.833958 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.833990 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.834001 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.834011 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-22 19:59:07.834047 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 
'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-22 19:59:07.834076 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.834087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.834097 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.834124 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.834151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.834162 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.834172 | orchestrator | 2025-06-22 19:59:07.834182 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2025-06-22 19:59:07.834203 | orchestrator | Sunday 22 June 2025 19:54:51 +0000 (0:00:01.066) 0:02:06.387 *********** 2025-06-22 19:59:07.834213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-06-22 19:59:07.834248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-06-22 19:59:07.834259 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.834269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-06-22 19:59:07.834279 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-06-22 19:59:07.834288 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.834298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-06-22 19:59:07.834308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-06-22 19:59:07.834318 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.834328 | orchestrator | 2025-06-22 19:59:07.834337 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2025-06-22 19:59:07.834347 | orchestrator | Sunday 22 June 2025 19:54:52 +0000 (0:00:01.025) 0:02:07.412 *********** 2025-06-22 19:59:07.834357 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:07.834367 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:07.834377 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:07.834422 | orchestrator | 2025-06-22 19:59:07.834432 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2025-06-22 19:59:07.834452 | orchestrator | Sunday 22 June 2025 19:54:54 +0000 (0:00:01.747) 0:02:09.160 *********** 2025-06-22 19:59:07.834462 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:07.834472 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:07.834482 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:07.834492 | orchestrator | 2025-06-22 19:59:07.834502 | orchestrator | TASK [include_role : etcd] ***************************************************** 2025-06-22 19:59:07.834512 | orchestrator | Sunday 22 June 2025 19:54:56 +0000 (0:00:01.958) 0:02:11.118 *********** 2025-06-22 19:59:07.834521 | orchestrator | skipping: [testbed-node-0] 2025-06-22 
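
The skipping/changed pattern in the designate haproxy-config output above is consistent with the role only rendering load-balancer configuration for services that are enabled and carry a 'haproxy' block. A minimal Python sketch of that selection, using dict shapes taken from the logged items; this is illustrative only and not the kolla-ansible role's actual implementation:

# Sketch: reproduce the skip/apply decision visible in
# "haproxy-config : Copying over designate haproxy config".
# Assumption: only enabled services that define a 'haproxy' block get
# frontend/backend config; everything else is reported as skipping.
designate_services = {
    "designate-api": {"enabled": True, "haproxy": {"designate_api": {"port": "9001"}}},
    "designate-backend-bind9": {"enabled": True},   # no 'haproxy' block -> skipped
    "designate-central": {"enabled": True},         # no 'haproxy' block -> skipped
    "designate-sink": {"enabled": False},           # disabled -> skipped
}

def services_needing_haproxy(services):
    """Yield (service_name, haproxy_listeners) for services that get LB config."""
    for name, svc in services.items():
        if svc.get("enabled") and svc.get("haproxy"):
            yield name, svc["haproxy"]

for name, listeners in services_needing_haproxy(designate_services):
    print(name, "->", ", ".join(listeners))   # designate-api -> designate_api
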
19:59:07.834531 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.834541 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.834550 | orchestrator | 2025-06-22 19:59:07.834560 | orchestrator | TASK [include_role : glance] *************************************************** 2025-06-22 19:59:07.834570 | orchestrator | Sunday 22 June 2025 19:54:56 +0000 (0:00:00.260) 0:02:11.379 *********** 2025-06-22 19:59:07.834580 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:59:07.834589 | orchestrator | 2025-06-22 19:59:07.834599 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2025-06-22 19:59:07.834609 | orchestrator | Sunday 22 June 2025 19:54:57 +0000 (0:00:00.824) 0:02:12.203 *********** 2025-06-22 19:59:07.834644 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-22 19:59:07.834664 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250530', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required 
ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-06-22 19:59:07.834686 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-22 19:59:07.834706 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250530', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-06-22 19:59:07.834739 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-22 19:59:07.834761 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250530', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 
'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-06-22 19:59:07.834772 | orchestrator | 2025-06-22 19:59:07.834782 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2025-06-22 19:59:07.834792 | orchestrator | Sunday 22 June 2025 19:55:01 +0000 (0:00:03.783) 0:02:15.987 *********** 2025-06-22 19:59:07.834823 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-22 19:59:07.834842 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250530', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-06-22 19:59:07.834853 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.834864 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-22 
19:59:07.834895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250530', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-06-22 19:59:07.834912 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.834923 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-22 19:59:07.834955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250530', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-06-22 19:59:07.834973 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.834983 | orchestrator | 2025-06-22 19:59:07.834992 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-06-22 19:59:07.835002 | orchestrator | Sunday 22 June 2025 19:55:03 +0000 (0:00:02.498) 0:02:18.485 *********** 2025-06-22 19:59:07.835013 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-06-22 19:59:07.835023 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-06-22 19:59:07.835047 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.835058 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-06-22 19:59:07.835077 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-06-22 19:59:07.835097 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.835107 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-06-22 19:59:07.835145 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-06-22 19:59:07.835157 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.835166 | orchestrator | 2025-06-22 19:59:07.835185 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2025-06-22 19:59:07.835195 | orchestrator | Sunday 22 June 2025 19:55:06 +0000 (0:00:02.781) 0:02:21.266 *********** 2025-06-22 19:59:07.835204 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:07.835243 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:07.835254 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:07.835264 | orchestrator | 2025-06-22 19:59:07.835274 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2025-06-22 19:59:07.835283 | orchestrator | Sunday 22 June 2025 19:55:07 +0000 (0:00:01.326) 0:02:22.593 *********** 2025-06-22 19:59:07.835309 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:07.835319 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:07.835329 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:07.835338 | orchestrator | 2025-06-22 19:59:07.835348 | 
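
The glance-api items above carry per-node backend member strings and healthcheck_curl targets for 192.168.16.10-12 on port 9292. A small Python sketch that rebuilds those literal strings from a node-to-IP map; the variable names are assumptions for illustration and not part of the kolla-ansible templates:

# Sketch: generate the custom_member_list entries and healthcheck_curl
# targets that appear in the glance-api loop items above.
nodes = {
    "testbed-node-0": "192.168.16.10",
    "testbed-node-1": "192.168.16.11",
    "testbed-node-2": "192.168.16.12",
}
port = 9292

# One "server ..." line per backend node, matching the logged pattern.
member_list = [
    f"server {name} {ip}:{port} check inter 2000 rise 2 fall 5"
    for name, ip in nodes.items()
]

# Per-node healthcheck command, matching the container healthcheck test.
healthchecks = {name: f"healthcheck_curl http://{ip}:{port}" for name, ip in nodes.items()}

print("\n".join(member_list))
print(healthchecks["testbed-node-0"])   # healthcheck_curl http://192.168.16.10:9292
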
orchestrator | TASK [include_role : gnocchi] ************************************************** 2025-06-22 19:59:07.835358 | orchestrator | Sunday 22 June 2025 19:55:09 +0000 (0:00:01.774) 0:02:24.367 *********** 2025-06-22 19:59:07.835367 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.835377 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.835386 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.835396 | orchestrator | 2025-06-22 19:59:07.835406 | orchestrator | TASK [include_role : grafana] ************************************************** 2025-06-22 19:59:07.835416 | orchestrator | Sunday 22 June 2025 19:55:09 +0000 (0:00:00.277) 0:02:24.644 *********** 2025-06-22 19:59:07.835425 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:59:07.835435 | orchestrator | 2025-06-22 19:59:07.835444 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2025-06-22 19:59:07.835454 | orchestrator | Sunday 22 June 2025 19:55:10 +0000 (0:00:00.770) 0:02:25.415 *********** 2025-06-22 19:59:07.835464 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-22 19:59:07.835475 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-22 19:59:07.835492 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-22 19:59:07.835512 | orchestrator | 2025-06-22 19:59:07.835522 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-06-22 19:59:07.835532 | orchestrator | Sunday 22 June 2025 19:55:13 +0000 (0:00:02.903) 0:02:28.318 
*********** 2025-06-22 19:59:07.835563 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-22 19:59:07.835575 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.835585 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-22 19:59:07.835595 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.835605 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-22 19:59:07.835615 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.835625 | orchestrator | 2025-06-22 19:59:07.835635 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2025-06-22 19:59:07.835645 | orchestrator | Sunday 22 June 2025 19:55:13 +0000 (0:00:00.360) 0:02:28.679 *********** 2025-06-22 19:59:07.835655 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-06-22 19:59:07.835671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-06-22 19:59:07.835680 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-06-22 19:59:07.835690 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.835700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-06-22 19:59:07.835710 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.835719 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-06-22 19:59:07.835729 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-06-22 19:59:07.835739 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.835749 | orchestrator | 2025-06-22 19:59:07.835758 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2025-06-22 19:59:07.835769 | orchestrator | Sunday 22 June 2025 19:55:14 +0000 (0:00:00.623) 0:02:29.302 *********** 2025-06-22 19:59:07.835778 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:07.835788 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:07.835798 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:07.835807 | orchestrator | 2025-06-22 19:59:07.835817 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2025-06-22 19:59:07.835827 | orchestrator | Sunday 22 June 2025 19:55:15 +0000 (0:00:01.415) 0:02:30.718 *********** 2025-06-22 19:59:07.835836 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:07.835846 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:07.835856 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:07.835865 | orchestrator | 2025-06-22 19:59:07.835895 | orchestrator | TASK [include_role : heat] ***************************************************** 2025-06-22 19:59:07.835906 | orchestrator | Sunday 22 June 2025 19:55:17 +0000 (0:00:01.807) 0:02:32.526 *********** 2025-06-22 19:59:07.835915 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.835925 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.835943 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.835953 | orchestrator | 2025-06-22 19:59:07.835963 | orchestrator | TASK [include_role : horizon] ************************************************** 2025-06-22 19:59:07.835973 | orchestrator | Sunday 22 June 2025 19:55:18 +0000 (0:00:00.285) 0:02:32.811 *********** 2025-06-22 19:59:07.835983 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:59:07.835992 | orchestrator | 2025-06-22 19:59:07.836002 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2025-06-22 19:59:07.836012 | orchestrator | Sunday 22 June 2025 19:55:18 +0000 (0:00:00.819) 0:02:33.631 *********** 2025-06-22 19:59:07.836023 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 
'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-22 19:59:07.836063 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-22 19:59:07.836077 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-22 19:59:07.836097 | orchestrator | 2025-06-22 19:59:07.836107 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2025-06-22 19:59:07.836117 | orchestrator | Sunday 22 June 2025 19:55:22 +0000 (0:00:04.031) 0:02:37.662 *********** 2025-06-22 19:59:07.836151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-22 19:59:07.836169 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.836180 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-22 19:59:07.836190 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.836239 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-22 19:59:07.836266 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.836276 | orchestrator | 2025-06-22 19:59:07.836286 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2025-06-22 19:59:07.836296 | orchestrator | Sunday 22 June 2025 19:55:23 +0000 (0:00:00.643) 0:02:38.305 *********** 2025-06-22 19:59:07.836306 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-06-22 19:59:07.836317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-06-22 19:59:07.836328 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-06-22 19:59:07.836338 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-06-22 19:59:07.836348 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-06-22 19:59:07.836358 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-06-22 19:59:07.836368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-06-22 19:59:07.836379 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.836478 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-06-22 19:59:07.836507 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-06-22 19:59:07.836524 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-06-22 19:59:07.836535 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.836545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-06-22 19:59:07.836565 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-06-22 19:59:07.836576 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-06-22 19:59:07.836586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-06-22 19:59:07.836596 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-06-22 19:59:07.836606 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.836616 | orchestrator | 2025-06-22 19:59:07.836625 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2025-06-22 19:59:07.836635 | orchestrator | Sunday 22 June 2025 19:55:24 +0000 (0:00:01.247) 0:02:39.553 *********** 2025-06-22 19:59:07.836645 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:07.836655 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:07.836664 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:07.836674 | orchestrator | 2025-06-22 19:59:07.836684 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2025-06-22 19:59:07.836694 | orchestrator | Sunday 22 June 2025 19:55:26 +0000 (0:00:01.724) 0:02:41.278 *********** 2025-06-22 19:59:07.836703 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:07.836713 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:07.836722 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:07.836732 | orchestrator | 2025-06-22 19:59:07.836751 | orchestrator | TASK [include_role : influxdb] ************************************************* 2025-06-22 19:59:07.836762 | orchestrator | Sunday 22 June 2025 19:55:29 +0000 (0:00:02.489) 0:02:43.768 *********** 2025-06-22 19:59:07.836772 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.836781 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.836791 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.836800 | orchestrator | 2025-06-22 19:59:07.836810 | orchestrator | TASK [include_role : ironic] *************************************************** 2025-06-22 19:59:07.836820 | orchestrator | Sunday 22 June 2025 19:55:29 +0000 (0:00:00.645) 0:02:44.413 *********** 2025-06-22 19:59:07.836830 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.836840 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.836849 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.836859 | orchestrator | 2025-06-22 19:59:07.836869 | orchestrator | TASK [include_role : keystone] ************************************************* 2025-06-22 19:59:07.836878 | orchestrator | Sunday 22 June 2025 19:55:30 +0000 (0:00:00.487) 0:02:44.900 *********** 2025-06-22 19:59:07.836888 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:59:07.836904 | orchestrator | 2025-06-22 19:59:07.836914 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2025-06-22 19:59:07.836923 | orchestrator | Sunday 22 June 2025 19:55:31 +0000 (0:00:01.311) 0:02:46.212 
*********** 2025-06-22 19:59:07.836957 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-22 19:59:07.836970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-22 19:59:07.836982 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-22 19:59:07.836993 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-22 19:59:07.837004 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-22 19:59:07.837040 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-22 19:59:07.837052 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-22 19:59:07.837062 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-22 19:59:07.837073 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-22 19:59:07.837083 | orchestrator | 2025-06-22 19:59:07.837093 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2025-06-22 19:59:07.837103 | orchestrator | Sunday 22 June 2025 19:55:35 +0000 (0:00:04.054) 0:02:50.266 *********** 2025-06-22 19:59:07.837114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-22 19:59:07.837155 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-22 19:59:07.837166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-22 19:59:07.837176 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.837187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 
'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-22 19:59:07.837198 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-22 19:59:07.837267 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-22 19:59:07.837313 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-22 19:59:07.837326 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-22 19:59:07.837336 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.837346 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-22 19:59:07.837357 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.837366 | orchestrator | 2025-06-22 19:59:07.837376 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2025-06-22 19:59:07.837398 | orchestrator | Sunday 22 June 2025 19:55:36 +0000 (0:00:00.740) 0:02:51.007 *********** 2025-06-22 19:59:07.837408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-06-22 19:59:07.837419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-06-22 19:59:07.837429 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.837439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-06-22 19:59:07.837449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-06-22 19:59:07.837470 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.837480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-06-22 19:59:07.837501 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-06-22 19:59:07.837511 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.837521 | orchestrator | 2025-06-22 19:59:07.837531 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2025-06-22 19:59:07.837541 | orchestrator | Sunday 22 June 2025 19:55:37 +0000 (0:00:00.973) 0:02:51.980 *********** 2025-06-22 19:59:07.837551 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:07.837561 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:07.837570 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:07.837580 | orchestrator | 2025-06-22 19:59:07.837590 | orchestrator | TASK 
[proxysql-config : Copying over keystone ProxySQL rules config] *********** 2025-06-22 19:59:07.837600 | orchestrator | Sunday 22 June 2025 19:55:38 +0000 (0:00:01.258) 0:02:53.239 *********** 2025-06-22 19:59:07.837609 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:07.837619 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:07.837628 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:07.837638 | orchestrator | 2025-06-22 19:59:07.837659 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2025-06-22 19:59:07.837689 | orchestrator | Sunday 22 June 2025 19:55:40 +0000 (0:00:01.816) 0:02:55.055 *********** 2025-06-22 19:59:07.837701 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.837711 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.837720 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.837730 | orchestrator | 2025-06-22 19:59:07.837739 | orchestrator | TASK [include_role : magnum] *************************************************** 2025-06-22 19:59:07.837749 | orchestrator | Sunday 22 June 2025 19:55:40 +0000 (0:00:00.284) 0:02:55.340 *********** 2025-06-22 19:59:07.837759 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:59:07.837769 | orchestrator | 2025-06-22 19:59:07.837779 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2025-06-22 19:59:07.837789 | orchestrator | Sunday 22 June 2025 19:55:41 +0000 (0:00:01.054) 0:02:56.395 *********** 2025-06-22 19:59:07.837797 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-22 19:59:07.837806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.837820 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-22 19:59:07.837829 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.837856 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-22 19:59:07.837865 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.837874 | orchestrator | 2025-06-22 19:59:07.837882 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2025-06-22 19:59:07.837895 | orchestrator | Sunday 22 June 2025 19:55:45 +0000 (0:00:03.383) 
0:02:59.779 *********** 2025-06-22 19:59:07.837904 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-22 19:59:07.837912 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.837921 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.837946 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-22 19:59:07.837955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 
'timeout': '30'}}})  2025-06-22 19:59:07.837964 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.837972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-22 19:59:07.837986 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.837994 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.838002 | orchestrator | 2025-06-22 19:59:07.838010 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2025-06-22 19:59:07.838043 | orchestrator | Sunday 22 June 2025 19:55:45 +0000 (0:00:00.747) 0:03:00.527 *********** 2025-06-22 19:59:07.838052 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-06-22 19:59:07.838071 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-06-22 19:59:07.838079 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.838087 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-06-22 19:59:07.838096 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-06-22 19:59:07.838104 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.838112 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-06-22 19:59:07.838120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-06-22 19:59:07.838146 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.838155 | orchestrator | 2025-06-22 19:59:07.838163 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2025-06-22 19:59:07.838172 | orchestrator | Sunday 22 June 2025 19:55:47 +0000 (0:00:01.404) 0:03:01.931 *********** 2025-06-22 19:59:07.838186 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:07.838195 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:07.838203 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:07.838211 | orchestrator | 2025-06-22 19:59:07.838231 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2025-06-22 19:59:07.838239 | orchestrator | Sunday 22 June 2025 19:55:48 +0000 (0:00:01.388) 0:03:03.320 *********** 2025-06-22 19:59:07.838247 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:07.838255 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:07.838263 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:07.838277 | orchestrator | 2025-06-22 19:59:07.838285 | orchestrator | TASK [include_role : manila] *************************************************** 2025-06-22 19:59:07.838303 | orchestrator | Sunday 22 June 2025 19:55:50 +0000 (0:00:02.129) 0:03:05.449 *********** 2025-06-22 19:59:07.838311 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:59:07.838319 | orchestrator | 2025-06-22 19:59:07.838327 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2025-06-22 19:59:07.838335 | orchestrator | Sunday 22 June 2025 19:55:51 +0000 (0:00:00.999) 0:03:06.449 *********** 2025-06-22 19:59:07.838344 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-06-22 19:59:07.838352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.838361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 
'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.838370 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.838397 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-06-22 19:59:07.838412 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.838421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.838429 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.838438 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-06-22 19:59:07.838447 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.838460 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.838474 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.838482 | orchestrator | 2025-06-22 19:59:07.838490 | 
orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2025-06-22 19:59:07.838507 | orchestrator | Sunday 22 June 2025 19:55:56 +0000 (0:00:04.313) 0:03:10.763 *********** 2025-06-22 19:59:07.838516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-06-22 19:59:07.838525 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.838548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.838557 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.838571 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.838597 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-06-22 19:59:07.838606 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.838615 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.838623 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.838632 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.838640 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-06-22 19:59:07.838660 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250530', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.838674 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.838683 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.838691 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.838699 | orchestrator | 2025-06-22 19:59:07.838707 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2025-06-22 19:59:07.838716 | orchestrator | Sunday 22 June 2025 19:55:56 +0000 (0:00:00.679) 0:03:11.442 *********** 2025-06-22 19:59:07.838724 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-06-22 19:59:07.838732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-06-22 19:59:07.838741 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.838749 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-06-22 19:59:07.838757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-06-22 19:59:07.838765 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.838773 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 
'listen_port': '8786'}})  2025-06-22 19:59:07.838781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-06-22 19:59:07.838789 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.838797 | orchestrator | 2025-06-22 19:59:07.838805 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2025-06-22 19:59:07.838824 | orchestrator | Sunday 22 June 2025 19:55:57 +0000 (0:00:01.227) 0:03:12.670 *********** 2025-06-22 19:59:07.838833 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:07.838841 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:07.838854 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:07.838862 | orchestrator | 2025-06-22 19:59:07.838869 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2025-06-22 19:59:07.838878 | orchestrator | Sunday 22 June 2025 19:55:59 +0000 (0:00:01.772) 0:03:14.443 *********** 2025-06-22 19:59:07.838886 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:07.838894 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:07.838902 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:07.838910 | orchestrator | 2025-06-22 19:59:07.838918 | orchestrator | TASK [include_role : mariadb] ************************************************** 2025-06-22 19:59:07.838926 | orchestrator | Sunday 22 June 2025 19:56:01 +0000 (0:00:02.173) 0:03:16.616 *********** 2025-06-22 19:59:07.838934 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:59:07.838942 | orchestrator | 2025-06-22 19:59:07.838950 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2025-06-22 19:59:07.838958 | orchestrator | Sunday 22 June 2025 19:56:02 +0000 (0:00:01.047) 0:03:17.664 *********** 2025-06-22 19:59:07.838966 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-06-22 19:59:07.838974 | orchestrator | 2025-06-22 19:59:07.838982 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2025-06-22 19:59:07.838991 | orchestrator | Sunday 22 June 2025 19:56:06 +0000 (0:00:03.269) 0:03:20.933 *********** 2025-06-22 19:59:07.839028 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' 
server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-22 19:59:07.839047 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-06-22 19:59:07.839056 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.839074 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-22 19:59:07.839089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 
'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-06-22 19:59:07.839097 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.839106 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-22 19:59:07.839120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-06-22 19:59:07.839129 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.839137 | orchestrator | 2025-06-22 19:59:07.839145 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2025-06-22 19:59:07.839153 | orchestrator | Sunday 22 June 2025 19:56:08 +0000 (0:00:02.366) 0:03:23.300 *********** 2025-06-22 19:59:07.839180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 
'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-22 19:59:07.839190 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-06-22 19:59:07.839199 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.839208 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check 
port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-22 19:59:07.839242 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-06-22 19:59:07.839251 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.839259 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-22 19:59:07.839273 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 
'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-06-22 19:59:07.839282 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.839290 | orchestrator | 2025-06-22 19:59:07.839298 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2025-06-22 19:59:07.839306 | orchestrator | Sunday 22 June 2025 19:56:10 +0000 (0:00:02.120) 0:03:25.420 *********** 2025-06-22 19:59:07.839315 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-06-22 19:59:07.839340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-06-22 19:59:07.839349 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.839358 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-06-22 19:59:07.839367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-06-22 19:59:07.839375 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.839383 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-06-22 19:59:07.839397 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-06-22 19:59:07.839405 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.839413 | orchestrator | 2025-06-22 19:59:07.839421 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2025-06-22 19:59:07.839429 | orchestrator | Sunday 22 June 2025 19:56:13 +0000 (0:00:02.346) 0:03:27.767 *********** 2025-06-22 19:59:07.839438 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:07.839446 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:07.839454 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:07.839461 | orchestrator | 2025-06-22 19:59:07.839469 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2025-06-22 19:59:07.839478 | orchestrator | Sunday 22 June 2025 19:56:15 +0000 (0:00:02.152) 0:03:29.919 *********** 2025-06-22 19:59:07.839486 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.839494 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.839503 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.839521 | orchestrator | 2025-06-22 19:59:07.839529 | orchestrator | TASK [include_role : masakari] ************************************************* 2025-06-22 19:59:07.839537 | orchestrator | Sunday 22 June 2025 19:56:16 +0000 (0:00:01.448) 0:03:31.367 *********** 2025-06-22 19:59:07.839545 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.839553 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.839561 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.839569 | orchestrator | 2025-06-22 19:59:07.839577 | orchestrator | TASK [include_role : memcached] ************************************************ 2025-06-22 19:59:07.839585 | orchestrator | Sunday 22 June 2025 19:56:16 +0000 (0:00:00.318) 0:03:31.686 *********** 2025-06-22 19:59:07.839593 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:59:07.839601 | orchestrator | 2025-06-22 19:59:07.839609 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2025-06-22 19:59:07.839617 | orchestrator | Sunday 22 June 2025 19:56:18 +0000 (0:00:01.099) 0:03:32.785 *********** 2025-06-22 19:59:07.839643 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': 
{'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250530', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-06-22 19:59:07.839653 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250530', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-06-22 19:59:07.839682 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250530', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-06-22 19:59:07.839691 | orchestrator | 2025-06-22 19:59:07.839699 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2025-06-22 19:59:07.839707 | orchestrator | Sunday 22 June 2025 19:56:19 +0000 (0:00:01.784) 0:03:34.570 *********** 2025-06-22 19:59:07.839715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250530', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-06-22 19:59:07.839724 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.839732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 
'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250530', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-06-22 19:59:07.839741 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.839769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250530', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-06-22 19:59:07.839779 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.839787 | orchestrator | 2025-06-22 19:59:07.839795 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2025-06-22 19:59:07.839808 | orchestrator | Sunday 22 June 2025 19:56:20 +0000 (0:00:00.398) 0:03:34.968 *********** 2025-06-22 19:59:07.839817 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-06-22 19:59:07.839825 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.839833 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-06-22 19:59:07.839841 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.839849 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-06-22 19:59:07.839858 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.839866 | orchestrator | 2025-06-22 19:59:07.839874 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2025-06-22 19:59:07.839882 | orchestrator | Sunday 22 June 2025 19:56:20 +0000 (0:00:00.586) 0:03:35.554 *********** 2025-06-22 19:59:07.839891 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.839898 | orchestrator | skipping: [testbed-node-1] 2025-06-22 
19:59:07.839906 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.839914 | orchestrator | 2025-06-22 19:59:07.839922 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2025-06-22 19:59:07.839930 | orchestrator | Sunday 22 June 2025 19:56:21 +0000 (0:00:00.746) 0:03:36.300 *********** 2025-06-22 19:59:07.839938 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.839946 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.839955 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.839962 | orchestrator | 2025-06-22 19:59:07.839971 | orchestrator | TASK [include_role : mistral] ************************************************** 2025-06-22 19:59:07.839989 | orchestrator | Sunday 22 June 2025 19:56:22 +0000 (0:00:01.316) 0:03:37.617 *********** 2025-06-22 19:59:07.839997 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.840005 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.840012 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.840020 | orchestrator | 2025-06-22 19:59:07.840028 | orchestrator | TASK [include_role : neutron] ************************************************** 2025-06-22 19:59:07.840036 | orchestrator | Sunday 22 June 2025 19:56:23 +0000 (0:00:00.319) 0:03:37.937 *********** 2025-06-22 19:59:07.840044 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:59:07.840052 | orchestrator | 2025-06-22 19:59:07.840060 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2025-06-22 19:59:07.840069 | orchestrator | Sunday 22 June 2025 19:56:24 +0000 (0:00:01.433) 0:03:39.370 *********** 2025-06-22 19:59:07.840078 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-22 19:59:07.840117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.1.1.20250530', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.840128 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.840137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.840145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-22 19:59:07.840154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.840163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.1.1.20250530', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-22 
19:59:07.840194 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-22 19:59:07.840204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.840213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 19:59:07.840260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.840269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-22 19:59:07.840277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metering-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-22 19:59:07.840309 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.1.1.20250530', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.840319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-06-22 19:59:07.840328 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-06-22 19:59:07.840337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 
'timeout': '30'}}})  2025-06-22 19:59:07.840345 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-22 19:59:07.840373 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.1.1.20250530', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.840383 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.840391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.840400 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-l3-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-22 19:59:07.840408 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.840422 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.1.1.20250530', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-22 19:59:07.840431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-22 19:59:07.840456 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.840465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 19:59:07.840474 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.840482 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-22 19:59:07.840491 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-22 19:59:07.840514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.1.1.20250530', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.840540 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 
192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-06-22 19:59:07.840559 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-06-22 19:59:07.840568 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.840577 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-22 19:59:07.840594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.1.1.20250530', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.840621 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.840630 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.840639 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-22 19:59:07.840648 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.840684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.1.1.20250530', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-22 19:59:07.840693 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-22 19:59:07.840720 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.840729 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 19:59:07.840746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.840755 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-22 19:59:07.840762 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-22 19:59:07.840774 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.1.1.20250530', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.840796 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-06-22 19:59:07 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:59:07.840821 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-06-22 19:59:07.840829 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'],
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.840836 | orchestrator | 2025-06-22 19:59:07.840843 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2025-06-22 19:59:07.840850 | orchestrator | Sunday 22 June 2025 19:56:29 +0000 (0:00:04.841) 0:03:44.212 *********** 2025-06-22 19:59:07.840857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-22 19:59:07.840870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.1.1.20250530', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.840900 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.840909 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.840916 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-22 19:59:07.840928 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-22 19:59:07.840935 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.1.1.20250530', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.840964 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.840973 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.840980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.1.1.20250530', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-22 19:59:07.840987 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.840999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-22 19:59:07.841006 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.841013 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 
'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-22 19:59:07.841035 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 19:59:07.841043 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.841050 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.841062 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.1.1.20250530', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-22 19:59:07.841069 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-22 19:59:07.841076 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-22 19:59:07.841084 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-22 19:59:07.841114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.1.1.20250530', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.841122 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.841130 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-22 19:59:07.841148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-06-22 19:59:07.841156 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 19:59:07.841178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.1.1.20250530', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.841186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-06-22 19:59:07.841200 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.841208 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.841236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.841245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-22 19:59:07.841267 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.841275 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.841282 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-22 19:59:07.841301 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-06-22 19:59:07.841308 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.1.1.20250530', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.841315 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.841336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-06-22 19:59:07.841345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.1.1.20250530', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-22 19:59:07.841352 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-22 19:59:07.841364 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-06-22 19:59:07.841371 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.841378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.841385 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 19:59:07.841392 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.841430 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.841443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-06-22 19:59:07.841450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-06-22 19:59:07.841457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.1.1.20250530', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.841464 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-06-22 19:59:07.841472 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.1.1.20250530', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-06-22 19:59:07.841490 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.1.1.20250530', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.841502 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.841509 | orchestrator | 2025-06-22 19:59:07.841515 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2025-06-22 19:59:07.841523 | orchestrator | Sunday 22 June 2025 19:56:31 +0000 (0:00:01.591) 0:03:45.803 *********** 2025-06-22 19:59:07.841530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-06-22 19:59:07.841549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-06-22 19:59:07.841564 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.841571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-06-22 19:59:07.841578 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-06-22 19:59:07.841585 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.841592 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-06-22 19:59:07.841618 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-06-22 19:59:07.841625 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.841633 | orchestrator | 2025-06-22 19:59:07.841639 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2025-06-22 19:59:07.841646 | orchestrator | Sunday 22 June 2025 19:56:33 +0000 (0:00:02.055) 0:03:47.858 *********** 2025-06-22 19:59:07.841653 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:07.841666 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:07.841673 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:07.841680 | orchestrator | 2025-06-22 19:59:07.841687 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2025-06-22 19:59:07.841694 | orchestrator | Sunday 22 June 2025 19:56:34 +0000 (0:00:01.282) 0:03:49.141 *********** 2025-06-22 19:59:07.841701 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:07.841708 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:07.841714 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:07.841721 | orchestrator | 2025-06-22 19:59:07.841728 | orchestrator | TASK [include_role : placement] ************************************************ 2025-06-22 19:59:07.841735 | orchestrator | Sunday 22 June 2025 19:56:36 +0000 (0:00:02.135) 0:03:51.276 *********** 2025-06-22 19:59:07.841742 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:59:07.841749 | orchestrator | 2025-06-22 19:59:07.841756 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2025-06-22 19:59:07.841762 | orchestrator | Sunday 22 June 2025 19:56:37 +0000 (0:00:01.178) 0:03:52.455 *********** 2025-06-22 19:59:07.841786 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': 
'8780', 'tls_backend': 'no'}}}}) 2025-06-22 19:59:07.841799 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-22 19:59:07.841807 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-22 19:59:07.841814 | orchestrator | 2025-06-22 19:59:07.841821 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2025-06-22 19:59:07.841828 | orchestrator | Sunday 22 June 2025 19:56:41 +0000 (0:00:04.214) 0:03:56.670 *********** 2025-06-22 19:59:07.841835 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-22 19:59:07.841842 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.841850 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-22 19:59:07.841860 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.841883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-22 19:59:07.841891 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.841898 | orchestrator | 2025-06-22 19:59:07.841905 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2025-06-22 19:59:07.841912 | orchestrator | Sunday 22 June 2025 19:56:42 +0000 (0:00:00.503) 0:03:57.173 *********** 2025-06-22 19:59:07.841919 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-06-22 19:59:07.841926 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-06-22 19:59:07.841933 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.841940 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-06-22 19:59:07.841947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-06-22 19:59:07.841953 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.841960 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-06-22 19:59:07.841967 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-06-22 19:59:07.841974 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.841981 | orchestrator | 2025-06-22 19:59:07.841988 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2025-06-22 19:59:07.841994 | orchestrator | Sunday 22 June 2025 19:56:43 +0000 (0:00:00.736) 0:03:57.909 *********** 2025-06-22 19:59:07.842001 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:07.842008 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:07.842035 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:07.842043 | orchestrator | 2025-06-22 19:59:07.842050 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2025-06-22 19:59:07.842062 | orchestrator | Sunday 22 June 2025 19:56:44 +0000 (0:00:01.709) 0:03:59.619 *********** 2025-06-22 19:59:07.842068 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:07.842075 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:07.842082 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:07.842089 | orchestrator | 2025-06-22 19:59:07.842096 | orchestrator | TASK [include_role : nova] ***************************************************** 2025-06-22 19:59:07.842102 | orchestrator | Sunday 22 June 2025 19:56:46 +0000 (0:00:02.083) 0:04:01.702 *********** 2025-06-22 19:59:07.842109 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:59:07.842116 | orchestrator | 2025-06-22 19:59:07.842123 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2025-06-22 19:59:07.842130 | orchestrator | Sunday 22 June 2025 19:56:48 +0000 (0:00:01.307) 0:04:03.010 *********** 2025-06-22 19:59:07.842154 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-22 19:59:07.842163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.842170 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.842178 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-22 19:59:07.842192 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.842224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 
'timeout': '30'}}})  2025-06-22 19:59:07.842233 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-22 19:59:07.842240 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.842247 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.842259 | orchestrator | 2025-06-22 19:59:07.842267 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2025-06-22 19:59:07.842274 | orchestrator | Sunday 22 June 2025 19:56:53 +0000 (0:00:04.795) 0:04:07.806 *********** 2025-06-22 19:59:07.842281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': 
{'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-22 19:59:07.842304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.842312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.842319 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.842327 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-22 19:59:07.842339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.842346 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.842353 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.842375 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-22 19:59:07.842383 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.842390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.842402 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.842409 | orchestrator | 2025-06-22 19:59:07.842416 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2025-06-22 19:59:07.842423 | orchestrator | Sunday 22 June 2025 19:56:54 +0000 (0:00:01.035) 0:04:08.841 *********** 2025-06-22 19:59:07.842430 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-22 19:59:07.842437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-22 19:59:07.842444 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-22 19:59:07.842451 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-22 19:59:07.842458 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.842465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-22 19:59:07.842471 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-22 19:59:07.842479 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-22 19:59:07.842485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-22 19:59:07.842493 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.842515 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-22 19:59:07.842522 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-06-22 19:59:07.842530 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-22 19:59:07.842537 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-06-22 19:59:07.842543 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.842550 | orchestrator | 2025-06-22 19:59:07.842557 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2025-06-22 19:59:07.842564 | orchestrator | Sunday 22 June 2025 19:56:54 +0000 (0:00:00.877) 0:04:09.718 *********** 2025-06-22 19:59:07.842571 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:07.842582 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:07.842589 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:07.842596 | orchestrator | 2025-06-22 19:59:07.842603 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2025-06-22 19:59:07.842609 | orchestrator | Sunday 22 June 2025 19:56:56 +0000 (0:00:01.697) 0:04:11.416 *********** 2025-06-22 19:59:07.842616 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:07.842623 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:07.842630 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:07.842636 | orchestrator | 2025-06-22 19:59:07.842643 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2025-06-22 19:59:07.842650 | orchestrator | Sunday 22 June 2025 19:56:58 +0000 (0:00:02.048) 0:04:13.465 *********** 2025-06-22 19:59:07.842657 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:59:07.842664 | orchestrator | 2025-06-22 19:59:07.842671 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2025-06-22 19:59:07.842677 | orchestrator | Sunday 22 June 2025 19:57:00 +0000 (0:00:01.438) 0:04:14.904 *********** 2025-06-22 19:59:07.842684 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2025-06-22 19:59:07.842692 | orchestrator | 2025-06-22 19:59:07.842698 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2025-06-22 19:59:07.842705 | orchestrator | Sunday 22 June 2025 19:57:01 +0000 (0:00:01.148) 0:04:16.052 *********** 2025-06-22 19:59:07.842712 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-06-22 19:59:07.842719 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-06-22 19:59:07.842727 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': 
True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-06-22 19:59:07.842734 | orchestrator | 2025-06-22 19:59:07.842741 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2025-06-22 19:59:07.842747 | orchestrator | Sunday 22 June 2025 19:57:05 +0000 (0:00:04.079) 0:04:20.131 *********** 2025-06-22 19:59:07.842769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-22 19:59:07.842777 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.842788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-22 19:59:07.842796 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.842803 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-22 19:59:07.842810 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.842817 | orchestrator | 2025-06-22 19:59:07.842824 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2025-06-22 19:59:07.842831 | orchestrator | Sunday 22 June 2025 19:57:06 +0000 (0:00:01.407) 0:04:21.538 *********** 2025-06-22 19:59:07.842838 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-22 19:59:07.842849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-22 19:59:07.842856 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-22 19:59:07.842864 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.842871 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-22 19:59:07.842878 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.842885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-22 19:59:07.842892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-06-22 19:59:07.842899 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.842906 | orchestrator | 2025-06-22 19:59:07.842913 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-06-22 19:59:07.842920 | orchestrator | Sunday 22 June 2025 19:57:08 +0000 (0:00:01.820) 0:04:23.359 *********** 2025-06-22 19:59:07.842927 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:07.842933 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:07.842940 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:07.842947 | orchestrator | 2025-06-22 19:59:07.842954 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-06-22 19:59:07.842960 | orchestrator | Sunday 22 June 2025 19:57:11 +0000 (0:00:02.422) 0:04:25.782 *********** 2025-06-22 19:59:07.842967 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:07.842978 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:07.842985 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:07.842991 | orchestrator | 2025-06-22 19:59:07.842998 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2025-06-22 19:59:07.843005 | orchestrator | Sunday 22 June 2025 19:57:13 +0000 (0:00:02.830) 0:04:28.612 *********** 2025-06-22 19:59:07.843012 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2025-06-22 19:59:07.843019 | orchestrator | 2025-06-22 19:59:07.843041 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-06-22 19:59:07.843048 | orchestrator | Sunday 22 June 2025 19:57:14 +0000 (0:00:00.752) 0:04:29.365 *********** 2025-06-22 19:59:07.843056 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-22 19:59:07.843063 | 
orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.843070 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-22 19:59:07.843077 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.843084 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-22 19:59:07.843091 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.843098 | orchestrator | 2025-06-22 19:59:07.843105 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2025-06-22 19:59:07.843112 | orchestrator | Sunday 22 June 2025 19:57:16 +0000 (0:00:01.436) 0:04:30.802 *********** 2025-06-22 19:59:07.843118 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-22 19:59:07.843126 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.843133 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-22 19:59:07.843144 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.843151 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-06-22 19:59:07.843158 | orchestrator | skipping: [testbed-node-2] 2025-06-22 
19:59:07.843164 | orchestrator | 2025-06-22 19:59:07.843171 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2025-06-22 19:59:07.843178 | orchestrator | Sunday 22 June 2025 19:57:17 +0000 (0:00:01.585) 0:04:32.387 *********** 2025-06-22 19:59:07.843185 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.843192 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.843198 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.843205 | orchestrator | 2025-06-22 19:59:07.843266 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-06-22 19:59:07.843276 | orchestrator | Sunday 22 June 2025 19:57:18 +0000 (0:00:01.198) 0:04:33.586 *********** 2025-06-22 19:59:07.843283 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:07.843290 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:07.843296 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:07.843303 | orchestrator | 2025-06-22 19:59:07.843310 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-06-22 19:59:07.843317 | orchestrator | Sunday 22 June 2025 19:57:21 +0000 (0:00:02.414) 0:04:36.000 *********** 2025-06-22 19:59:07.843323 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:07.843330 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:07.843337 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:07.843343 | orchestrator | 2025-06-22 19:59:07.843350 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2025-06-22 19:59:07.843357 | orchestrator | Sunday 22 June 2025 19:57:24 +0000 (0:00:03.034) 0:04:39.034 *********** 2025-06-22 19:59:07.843364 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2025-06-22 19:59:07.843370 | orchestrator | 2025-06-22 19:59:07.843377 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2025-06-22 19:59:07.843384 | orchestrator | Sunday 22 June 2025 19:57:25 +0000 (0:00:01.082) 0:04:40.117 *********** 2025-06-22 19:59:07.843391 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-22 19:59:07.843398 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.843405 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-22 19:59:07.843412 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.843419 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-22 19:59:07.843432 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.843439 | orchestrator | 2025-06-22 19:59:07.843445 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2025-06-22 19:59:07.843452 | orchestrator | Sunday 22 June 2025 19:57:26 +0000 (0:00:01.021) 0:04:41.138 *********** 2025-06-22 19:59:07.843459 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-22 19:59:07.843465 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.843472 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-22 19:59:07.843479 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.843500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-06-22 19:59:07.843508 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.843514 | orchestrator | 2025-06-22 19:59:07.843521 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2025-06-22 19:59:07.843527 | orchestrator | Sunday 22 June 2025 19:57:27 +0000 (0:00:01.241) 0:04:42.380 *********** 2025-06-22 19:59:07.843533 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.843539 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.843546 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.843552 | orchestrator | 2025-06-22 19:59:07.843558 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-06-22 19:59:07.843565 | orchestrator | Sunday 22 June 2025 
19:57:29 +0000 (0:00:01.822) 0:04:44.203 *********** 2025-06-22 19:59:07.843571 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:07.843577 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:07.843583 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:07.843589 | orchestrator | 2025-06-22 19:59:07.843596 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-06-22 19:59:07.843602 | orchestrator | Sunday 22 June 2025 19:57:31 +0000 (0:00:02.425) 0:04:46.628 *********** 2025-06-22 19:59:07.843608 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:07.843614 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:07.843620 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:07.843631 | orchestrator | 2025-06-22 19:59:07.843637 | orchestrator | TASK [include_role : octavia] ************************************************** 2025-06-22 19:59:07.843644 | orchestrator | Sunday 22 June 2025 19:57:35 +0000 (0:00:03.262) 0:04:49.890 *********** 2025-06-22 19:59:07.843650 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:59:07.843656 | orchestrator | 2025-06-22 19:59:07.843662 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2025-06-22 19:59:07.843669 | orchestrator | Sunday 22 June 2025 19:57:36 +0000 (0:00:01.321) 0:04:51.212 *********** 2025-06-22 19:59:07.843675 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-22 19:59:07.843682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-22 19:59:07.843689 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-22 19:59:07.843712 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-22 19:59:07.843724 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-22 19:59:07.843742 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.843752 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-22 19:59:07.843764 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 
'timeout': '30'}}})  2025-06-22 19:59:07.843774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-22 19:59:07.843801 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.843809 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-06-22 19:59:07.843820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-22 19:59:07.843827 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-22 19:59:07.843833 | 
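Note: the octavia entries above illustrate the shape of the service definitions the haproxy-config task iterates over — each item carries container/image/volume settings plus an optional 'haproxy' mapping that declares internal and external frontends. The sketch below is illustrative only; the values are copied (and trimmed) from the log lines above, and the summarizing helper is hypothetical, not kolla-ansible code.

```python
# Illustrative sketch: the octavia-api service definition as logged above,
# reduced to the parts that drive HAProxy frontend generation.
# `summarize_frontends` is a hypothetical helper, not part of kolla-ansible.
octavia_api = {
    "container_name": "octavia_api",
    "group": "octavia-api",
    "enabled": True,
    "haproxy": {
        "octavia_api": {
            "enabled": "yes", "mode": "http", "external": False,
            "port": "9876", "listen_port": "9876", "tls_backend": "no",
        },
        "octavia_api_external": {
            "enabled": "yes", "mode": "http", "external": True,
            "external_fqdn": "api.testbed.osism.xyz",
            "port": "9876", "listen_port": "9876", "tls_backend": "no",
        },
    },
}

def summarize_frontends(service):
    """Yield (name, listen_port, scope) for every frontend a service declares."""
    for name, frontend in service.get("haproxy", {}).items():
        scope = "external" if frontend.get("external") else "internal"
        yield name, frontend["listen_port"], scope

for name, port, scope in summarize_frontends(octavia_api):
    print(f"{name}: {scope} frontend on port {port}")
```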
orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-22 19:59:07.843840 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.843846 | orchestrator | 2025-06-22 19:59:07.843852 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2025-06-22 19:59:07.843859 | orchestrator | Sunday 22 June 2025 19:57:40 +0000 (0:00:03.682) 0:04:54.895 *********** 2025-06-22 19:59:07.843880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-22 19:59:07.843891 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-22 19:59:07.843898 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-22 19:59:07.843905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-22 19:59:07.843911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.843918 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.843924 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-22 19:59:07.843955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-22 19:59:07.843967 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-22 19:59:07.843974 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-22 19:59:07.843980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.843987 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.843993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-06-22 19:59:07.844000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-06-22 19:59:07.844021 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-06-22 19:59:07.844032 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-06-22 19:59:07.844038 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250530', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-06-22 19:59:07.844045 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.844051 | orchestrator | 2025-06-22 19:59:07.844057 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2025-06-22 19:59:07.844064 | orchestrator | Sunday 22 June 2025 19:57:40 +0000 (0:00:00.728) 0:04:55.623 *********** 2025-06-22 19:59:07.844070 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-06-22 19:59:07.844077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-06-22 19:59:07.844083 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.844089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-06-22 19:59:07.844096 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-06-22 19:59:07.844102 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.844109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-06-22 19:59:07.844115 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-06-22 19:59:07.844121 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.844128 | orchestrator | 2025-06-22 19:59:07.844134 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2025-06-22 19:59:07.844140 | orchestrator | Sunday 22 June 2025 19:57:41 +0000 (0:00:00.871) 0:04:56.495 *********** 2025-06-22 19:59:07.844146 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:07.844153 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:07.844159 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:07.844165 | orchestrator | 2025-06-22 19:59:07.844171 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2025-06-22 19:59:07.844177 | orchestrator | Sunday 22 June 2025 19:57:43 +0000 (0:00:01.838) 0:04:58.333 *********** 2025-06-22 19:59:07.844188 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:07.844194 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:07.844200 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:07.844206 | orchestrator | 2025-06-22 19:59:07.844212 | orchestrator | TASK [include_role : opensearch] *********************************************** 2025-06-22 19:59:07.844234 | orchestrator | Sunday 22 June 2025 19:57:45 +0000 (0:00:02.121) 0:05:00.455 *********** 2025-06-22 19:59:07.844240 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:59:07.844247 | orchestrator | 2025-06-22 19:59:07.844253 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2025-06-22 19:59:07.844260 | orchestrator | Sunday 22 June 2025 19:57:47 +0000 (0:00:01.318) 0:05:01.774 *********** 2025-06-22 19:59:07.844281 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-22 19:59:07.844289 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option 
dontlog-normal']}}}}) 2025-06-22 19:59:07.844295 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-22 19:59:07.844303 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-22 19:59:07.844332 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-22 19:59:07.844340 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-22 19:59:07.844347 | orchestrator | 2025-06-22 19:59:07.844353 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-06-22 19:59:07.844360 | orchestrator | Sunday 22 June 2025 19:57:52 +0000 (0:00:05.239) 0:05:07.013 *********** 2025-06-22 19:59:07.844366 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-22 19:59:07.844373 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-22 19:59:07.844384 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.844405 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-22 19:59:07.844413 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-22 19:59:07.844420 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.844426 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-22 19:59:07.844433 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 
'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-22 19:59:07.844444 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.844451 | orchestrator | 2025-06-22 19:59:07.844457 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2025-06-22 19:59:07.844463 | orchestrator | Sunday 22 June 2025 19:57:53 +0000 (0:00:01.009) 0:05:08.023 *********** 2025-06-22 19:59:07.844470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-06-22 19:59:07.844490 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-22 19:59:07.844498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-22 19:59:07.844504 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.844511 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-06-22 19:59:07.844517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-22 19:59:07.844523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-22 19:59:07.844530 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.844536 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-06-22 19:59:07.844543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-22 19:59:07.844549 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-06-22 19:59:07.844556 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.844562 | orchestrator | 2025-06-22 19:59:07.844568 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2025-06-22 19:59:07.844575 | orchestrator | Sunday 22 June 2025 19:57:54 +0000 (0:00:00.931) 0:05:08.955 *********** 2025-06-22 19:59:07.844581 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.844587 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.844597 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.844604 | 
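Note: the opensearch and opensearch-dashboards items above also show the per-container healthcheck that is templated for each node — every node curls its own internal API address (192.168.16.10/.11/.12) on the service port. The loop below merely reconstructs those logged targets for readability; the node-to-IP mapping is taken from the healthcheck_curl commands above and the loop itself is not kolla-ansible code.

```python
# Illustrative only: rebuild the per-node healthcheck targets seen in the
# opensearch / opensearch-dashboards items logged above.
node_ips = {
    "testbed-node-0": "192.168.16.10",
    "testbed-node-1": "192.168.16.11",
    "testbed-node-2": "192.168.16.12",
}
service_ports = {"opensearch": 9200, "opensearch-dashboards": 5601}

for node, ip in node_ips.items():
    for service, port in service_ports.items():
        # Matches the logged test: ['CMD-SHELL', 'healthcheck_curl http://<ip>:<port>']
        print(f"{node}: healthcheck_curl http://{ip}:{port}  # {service}")
```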
orchestrator | 2025-06-22 19:59:07.844610 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2025-06-22 19:59:07.844616 | orchestrator | Sunday 22 June 2025 19:57:54 +0000 (0:00:00.447) 0:05:09.402 *********** 2025-06-22 19:59:07.844623 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.844629 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.844635 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.844641 | orchestrator | 2025-06-22 19:59:07.844648 | orchestrator | TASK [include_role : prometheus] *********************************************** 2025-06-22 19:59:07.844654 | orchestrator | Sunday 22 June 2025 19:57:56 +0000 (0:00:01.431) 0:05:10.834 *********** 2025-06-22 19:59:07.844660 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:59:07.844666 | orchestrator | 2025-06-22 19:59:07.844673 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2025-06-22 19:59:07.844679 | orchestrator | Sunday 22 June 2025 19:57:57 +0000 (0:00:01.683) 0:05:12.517 *********** 2025-06-22 19:59:07.844685 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-22 19:59:07.844707 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-22 19:59:07.844714 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:59:07.844721 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:59:07.844728 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-22 19:59:07.844739 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-22 19:59:07.844745 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-22 19:59:07.844752 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-22 19:59:07.844772 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:59:07.844779 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:59:07.844786 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-22 19:59:07.844793 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-22 19:59:07.844805 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:59:07.844812 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:59:07.844818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-22 19:59:07.844832 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-22 19:59:07.844839 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-06-22 19:59:07.844850 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:59:07.844856 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-22 19:59:07.844863 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-22 19:59:07.844878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:59:07.844885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-06-22 19:59:07.844892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-22 19:59:07.844904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout 
server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-06-22 19:59:07.844911 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:59:07.844918 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:59:07.844924 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:59:07.844940 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:59:07.844947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-22 19:59:07.844957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-22 19:59:07.844964 | orchestrator | 2025-06-22 19:59:07.844970 | orchestrator | 
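
The per-service loop items above all share the same shape: each kolla service dict may carry a `haproxy` map whose entries each describe one frontend (`enabled`, `mode`, `external`, `port`, plus optional auth and `backend_http_extra` settings), and the role skips entries whose side is disabled. The sketch below is illustrative only, not code from this job or from kolla-ansible; `enabled_frontends` is a hypothetical helper operating on a plain Python dict copied from the `prometheus-server` item shown in the task output above.

```python
# Illustrative only: a plain-Python view of the per-service 'haproxy' maps
# printed in the loop items above (not kolla-ansible code).

def enabled_frontends(service, external):
    """Return the haproxy frontend entries of one service dict that are
    enabled and match the requested side (external=True/False)."""
    frontends = {}
    for name, fe in service.get("haproxy", {}).items():
        # 'enabled' appears both as booleans and as 'yes' strings in the log output.
        enabled = fe.get("enabled") in (True, "yes")
        if enabled and bool(fe.get("external", False)) == external:
            frontends[name] = fe
    return frontends

# Data taken from the 'prometheus-server' item in the task output above.
prometheus_server = {
    "container_name": "prometheus_server",
    "enabled": True,
    "haproxy": {
        "prometheus_server": {
            "enabled": True, "mode": "http", "external": False,
            "port": "9091", "active_passive": True,
        },
        "prometheus_server_external": {
            "enabled": False, "mode": "http", "external": True,
            "external_fqdn": "api.testbed.osism.xyz",
            "port": "9091", "listen_port": "9091", "active_passive": True,
        },
    },
}

print(enabled_frontends(prometheus_server, external=False))  # internal 9091 frontend
print(enabled_frontends(prometheus_server, external=True))   # {} -> external side not rendered
```

With this reading, the many `skipping:` results above are expected: the "single external frontend" task and the externally disabled frontends simply do not match the condition on this testbed.
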
TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2025-06-22 19:59:07.844977 | orchestrator | Sunday 22 June 2025 19:58:01 +0000 (0:00:03.972) 0:05:16.490 *********** 2025-06-22 19:59:07.844983 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-06-22 19:59:07.844990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-22 19:59:07.844996 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:59:07.845003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:59:07.845016 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-22 19:59:07.845023 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-06-22 19:59:07.845034 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-06-22 19:59:07.845041 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:59:07.845048 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:59:07.845054 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-22 19:59:07.845061 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.845074 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 
'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-06-22 19:59:07.845087 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-22 19:59:07.845094 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:59:07.845100 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:59:07.845107 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-22 19:59:07.845114 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': 
{'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-06-22 19:59:07.845131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-06-22 19:59:07.845143 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-06-22 19:59:07.845150 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-22 19:59:07.845156 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:59:07.845163 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 
'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:59:07.845170 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:59:07.845176 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:59:07.845190 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-22 19:59:07.845200 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.845207 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-22 19:59:07.845214 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-06-22 19:59:07.845236 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-06-22 19:59:07.845243 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:59:07.845250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 19:59:07.845260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-22 19:59:07.845272 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.845278 | orchestrator | 2025-06-22 19:59:07.845284 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2025-06-22 19:59:07.845347 | orchestrator | Sunday 22 June 2025 19:58:02 +0000 (0:00:00.982) 0:05:17.472 *********** 2025-06-22 19:59:07.845365 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-06-22 19:59:07.845372 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 
'active_passive': True}})  2025-06-22 19:59:07.845378 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-06-22 19:59:07.845385 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-06-22 19:59:07.845392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-06-22 19:59:07.845399 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-06-22 19:59:07.845406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-06-22 19:59:07.845412 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.845419 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-06-22 19:59:07.845425 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.845431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-06-22 19:59:07.845438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-06-22 19:59:07.845444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-06-22 19:59:07.845451 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-06-22 19:59:07.845461 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.845468 | orchestrator | 2025-06-22 19:59:07.845474 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2025-06-22 19:59:07.845480 | orchestrator | Sunday 22 June 2025 19:58:03 +0000 (0:00:00.905) 0:05:18.377 *********** 2025-06-22 
19:59:07.845486 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.845493 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.845499 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.845505 | orchestrator | 2025-06-22 19:59:07.845511 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2025-06-22 19:59:07.845526 | orchestrator | Sunday 22 June 2025 19:58:04 +0000 (0:00:00.419) 0:05:18.797 *********** 2025-06-22 19:59:07.845533 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.845539 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.845545 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.845551 | orchestrator | 2025-06-22 19:59:07.845558 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2025-06-22 19:59:07.845564 | orchestrator | Sunday 22 June 2025 19:58:05 +0000 (0:00:01.389) 0:05:20.187 *********** 2025-06-22 19:59:07.845571 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:59:07.845577 | orchestrator | 2025-06-22 19:59:07.845583 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2025-06-22 19:59:07.845589 | orchestrator | Sunday 22 June 2025 19:58:06 +0000 (0:00:01.506) 0:05:21.693 *********** 2025-06-22 19:59:07.845596 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-22 19:59:07.845604 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-22 19:59:07.845611 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': 
{'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-06-22 19:59:07.845623 | orchestrator | 2025-06-22 19:59:07.845630 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2025-06-22 19:59:07.845636 | orchestrator | Sunday 22 June 2025 19:58:09 +0000 (0:00:02.241) 0:05:23.935 *********** 2025-06-22 19:59:07.845651 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-06-22 19:59:07.845658 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.845665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-06-22 19:59:07.845672 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.845678 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 
None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-06-22 19:59:07.845689 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.845695 | orchestrator | 2025-06-22 19:59:07.845702 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2025-06-22 19:59:07.845708 | orchestrator | Sunday 22 June 2025 19:58:09 +0000 (0:00:00.369) 0:05:24.305 *********** 2025-06-22 19:59:07.845714 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-06-22 19:59:07.845721 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.845727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-06-22 19:59:07.845733 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.845739 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-06-22 19:59:07.845746 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.845752 | orchestrator | 2025-06-22 19:59:07.845758 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2025-06-22 19:59:07.845764 | orchestrator | Sunday 22 June 2025 19:58:10 +0000 (0:00:00.980) 0:05:25.285 *********** 2025-06-22 19:59:07.845771 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.845777 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.845783 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.845789 | orchestrator | 2025-06-22 19:59:07.845795 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2025-06-22 19:59:07.845802 | orchestrator | Sunday 22 June 2025 19:58:10 +0000 (0:00:00.435) 0:05:25.721 *********** 2025-06-22 19:59:07.845808 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.845814 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.845820 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.845827 | orchestrator | 2025-06-22 19:59:07.845840 | orchestrator | TASK [include_role : skyline] ************************************************** 2025-06-22 19:59:07.845847 | orchestrator | Sunday 22 June 2025 19:58:12 +0000 (0:00:01.497) 0:05:27.218 *********** 2025-06-22 19:59:07.845853 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 19:59:07.845860 | orchestrator | 2025-06-22 19:59:07.845866 | orchestrator | TASK [haproxy-config 
: Copying over skyline haproxy config] ******************** 2025-06-22 19:59:07.845872 | orchestrator | Sunday 22 June 2025 19:58:14 +0000 (0:00:01.768) 0:05:28.987 *********** 2025-06-22 19:59:07.845879 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-06-22 19:59:07.845887 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-06-22 19:59:07.845898 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-06-22 19:59:07.845911 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-06-22 19:59:07.845919 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-06-22 19:59:07.845926 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-06-22 19:59:07.845936 | orchestrator | 2025-06-22 19:59:07.845943 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2025-06-22 19:59:07.845949 | orchestrator | Sunday 22 June 2025 19:58:20 +0000 (0:00:06.143) 0:05:35.130 *********** 2025-06-22 19:59:07.845956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 
'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-06-22 19:59:07.845963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-06-22 19:59:07.845969 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.845983 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-06-22 19:59:07.845990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-06-22 19:59:07.846001 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.846007 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250530', 
'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-06-22 19:59:07.846037 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250530', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-06-22 19:59:07.846046 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.846053 | orchestrator | 2025-06-22 19:59:07.846059 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2025-06-22 19:59:07.846065 | orchestrator | Sunday 22 June 2025 19:58:21 +0000 (0:00:00.636) 0:05:35.766 *********** 2025-06-22 19:59:07.846079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-06-22 19:59:07.846086 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-06-22 19:59:07.846092 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-06-22 19:59:07.846099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-06-22 19:59:07.846105 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.846112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-06-22 19:59:07.846122 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-06-22 19:59:07.846129 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-06-22 19:59:07.846135 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-06-22 19:59:07.846142 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.846148 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-06-22 19:59:07.846155 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-06-22 19:59:07.846161 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-06-22 19:59:07.846168 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-06-22 19:59:07.846174 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.846181 | orchestrator | 2025-06-22 19:59:07.846187 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2025-06-22 19:59:07.846194 | orchestrator | Sunday 22 June 2025 19:58:22 +0000 (0:00:01.665) 0:05:37.431 *********** 2025-06-22 19:59:07.846200 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:07.846206 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:07.846213 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:07.846232 | orchestrator | 2025-06-22 19:59:07.846239 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2025-06-22 19:59:07.846245 | orchestrator | Sunday 22 June 2025 19:58:24 +0000 (0:00:01.392) 0:05:38.824 *********** 2025-06-22 19:59:07.846252 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:07.846258 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:07.846264 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:07.846270 | orchestrator | 2025-06-22 19:59:07.846276 | orchestrator | TASK [include_role : swift] **************************************************** 2025-06-22 19:59:07.846283 | orchestrator | Sunday 22 June 2025 19:58:26 +0000 (0:00:02.338) 0:05:41.163 *********** 2025-06-22 19:59:07.846289 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.846295 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.846302 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.846308 | orchestrator | 2025-06-22 19:59:07.846314 | orchestrator | TASK [include_role : tacker] *************************************************** 2025-06-22 19:59:07.846320 | orchestrator | Sunday 22 June 2025 19:58:26 +0000 (0:00:00.307) 
0:05:41.470 *********** 2025-06-22 19:59:07.846327 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.846333 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.846339 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.846345 | orchestrator | 2025-06-22 19:59:07.846351 | orchestrator | TASK [include_role : trove] **************************************************** 2025-06-22 19:59:07.846358 | orchestrator | Sunday 22 June 2025 19:58:27 +0000 (0:00:00.600) 0:05:42.071 *********** 2025-06-22 19:59:07.846364 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.846377 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.846383 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.846389 | orchestrator | 2025-06-22 19:59:07.846403 | orchestrator | TASK [include_role : venus] **************************************************** 2025-06-22 19:59:07.846409 | orchestrator | Sunday 22 June 2025 19:58:27 +0000 (0:00:00.323) 0:05:42.394 *********** 2025-06-22 19:59:07.846416 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.846422 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.846429 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.846435 | orchestrator | 2025-06-22 19:59:07.846441 | orchestrator | TASK [include_role : watcher] ************************************************** 2025-06-22 19:59:07.846448 | orchestrator | Sunday 22 June 2025 19:58:27 +0000 (0:00:00.318) 0:05:42.712 *********** 2025-06-22 19:59:07.846454 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.846460 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.846466 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.846473 | orchestrator | 2025-06-22 19:59:07.846479 | orchestrator | TASK [include_role : zun] ****************************************************** 2025-06-22 19:59:07.846485 | orchestrator | Sunday 22 June 2025 19:58:28 +0000 (0:00:00.341) 0:05:43.054 *********** 2025-06-22 19:59:07.846492 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.846498 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.846504 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.846510 | orchestrator | 2025-06-22 19:59:07.846516 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2025-06-22 19:59:07.846523 | orchestrator | Sunday 22 June 2025 19:58:29 +0000 (0:00:00.832) 0:05:43.886 *********** 2025-06-22 19:59:07.846529 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:07.846535 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:07.846542 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:07.846548 | orchestrator | 2025-06-22 19:59:07.846554 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2025-06-22 19:59:07.846561 | orchestrator | Sunday 22 June 2025 19:58:29 +0000 (0:00:00.688) 0:05:44.575 *********** 2025-06-22 19:59:07.846567 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:07.846573 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:07.846579 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:07.846586 | orchestrator | 2025-06-22 19:59:07.846592 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2025-06-22 19:59:07.846598 | orchestrator | Sunday 22 June 2025 19:58:30 +0000 (0:00:00.337) 0:05:44.912 *********** 2025-06-22 19:59:07.846605 | orchestrator | ok: [testbed-node-0] 
2025-06-22 19:59:07.846611 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:07.846617 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:07.846623 | orchestrator | 2025-06-22 19:59:07.846630 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2025-06-22 19:59:07.846636 | orchestrator | Sunday 22 June 2025 19:58:31 +0000 (0:00:01.155) 0:05:46.067 *********** 2025-06-22 19:59:07.846642 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:07.846649 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:07.846655 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:07.846661 | orchestrator | 2025-06-22 19:59:07.846668 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2025-06-22 19:59:07.846674 | orchestrator | Sunday 22 June 2025 19:58:32 +0000 (0:00:00.960) 0:05:47.028 *********** 2025-06-22 19:59:07.846680 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:07.846687 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:07.846693 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:07.846699 | orchestrator | 2025-06-22 19:59:07.846705 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2025-06-22 19:59:07.846711 | orchestrator | Sunday 22 June 2025 19:58:33 +0000 (0:00:00.959) 0:05:47.987 *********** 2025-06-22 19:59:07.846718 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:07.846724 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:07.846731 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:07.846741 | orchestrator | 2025-06-22 19:59:07.846748 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2025-06-22 19:59:07.846754 | orchestrator | Sunday 22 June 2025 19:58:37 +0000 (0:00:04.549) 0:05:52.537 *********** 2025-06-22 19:59:07.846760 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:07.846767 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:07.846773 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:07.846779 | orchestrator | 2025-06-22 19:59:07.846785 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2025-06-22 19:59:07.846792 | orchestrator | Sunday 22 June 2025 19:58:40 +0000 (0:00:02.743) 0:05:55.281 *********** 2025-06-22 19:59:07.846798 | orchestrator | changed: [testbed-node-0] 2025-06-22 19:59:07.846804 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:07.846811 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:07.846817 | orchestrator | 2025-06-22 19:59:07.846824 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2025-06-22 19:59:07.846830 | orchestrator | Sunday 22 June 2025 19:58:48 +0000 (0:00:07.991) 0:06:03.272 *********** 2025-06-22 19:59:07.846836 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:07.846842 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:07.846848 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:07.846855 | orchestrator | 2025-06-22 19:59:07.846861 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2025-06-22 19:59:07.846868 | orchestrator | Sunday 22 June 2025 19:58:52 +0000 (0:00:03.802) 0:06:07.074 *********** 2025-06-22 19:59:07.846874 | orchestrator | changed: [testbed-node-1] 2025-06-22 19:59:07.846880 | orchestrator | changed: [testbed-node-2] 2025-06-22 19:59:07.846887 | orchestrator | changed: [testbed-node-0] 2025-06-22 
19:59:07.846893 | orchestrator | 2025-06-22 19:59:07.846899 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2025-06-22 19:59:07.846906 | orchestrator | Sunday 22 June 2025 19:59:00 +0000 (0:00:08.341) 0:06:15.416 *********** 2025-06-22 19:59:07.846912 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.846918 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.846924 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.846931 | orchestrator | 2025-06-22 19:59:07.846937 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2025-06-22 19:59:07.846943 | orchestrator | Sunday 22 June 2025 19:59:01 +0000 (0:00:00.337) 0:06:15.754 *********** 2025-06-22 19:59:07.846949 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.846956 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.846962 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.846968 | orchestrator | 2025-06-22 19:59:07.846975 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2025-06-22 19:59:07.846987 | orchestrator | Sunday 22 June 2025 19:59:01 +0000 (0:00:00.690) 0:06:16.444 *********** 2025-06-22 19:59:07.846994 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.847000 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.847007 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.847013 | orchestrator | 2025-06-22 19:59:07.847019 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2025-06-22 19:59:07.847025 | orchestrator | Sunday 22 June 2025 19:59:02 +0000 (0:00:00.345) 0:06:16.789 *********** 2025-06-22 19:59:07.847032 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.847038 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.847044 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.847050 | orchestrator | 2025-06-22 19:59:07.847057 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2025-06-22 19:59:07.847064 | orchestrator | Sunday 22 June 2025 19:59:02 +0000 (0:00:00.351) 0:06:17.141 *********** 2025-06-22 19:59:07.847070 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.847076 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.847083 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.847089 | orchestrator | 2025-06-22 19:59:07.847099 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2025-06-22 19:59:07.847106 | orchestrator | Sunday 22 June 2025 19:59:02 +0000 (0:00:00.362) 0:06:17.503 *********** 2025-06-22 19:59:07.847113 | orchestrator | skipping: [testbed-node-0] 2025-06-22 19:59:07.847119 | orchestrator | skipping: [testbed-node-1] 2025-06-22 19:59:07.847125 | orchestrator | skipping: [testbed-node-2] 2025-06-22 19:59:07.847131 | orchestrator | 2025-06-22 19:59:07.847138 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2025-06-22 19:59:07.847144 | orchestrator | Sunday 22 June 2025 19:59:03 +0000 (0:00:00.700) 0:06:18.203 *********** 2025-06-22 19:59:07.847150 | orchestrator | ok: [testbed-node-0] 2025-06-22 19:59:07.847156 | orchestrator | ok: [testbed-node-1] 2025-06-22 19:59:07.847163 | orchestrator | ok: [testbed-node-2] 2025-06-22 19:59:07.847169 | orchestrator | 2025-06-22 19:59:07.847176 | orchestrator | 
RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************
2025-06-22 19:59:07.847182 | orchestrator | Sunday 22 June 2025 19:59:04 +0000 (0:00:00.962) 0:06:19.166 ***********
2025-06-22 19:59:07.847188 | orchestrator | ok: [testbed-node-0]
2025-06-22 19:59:07.847195 | orchestrator | ok: [testbed-node-1]
2025-06-22 19:59:07.847201 | orchestrator | ok: [testbed-node-2]
2025-06-22 19:59:07.847207 | orchestrator |
2025-06-22 19:59:07.847213 | orchestrator | PLAY RECAP *********************************************************************
2025-06-22 19:59:07.847257 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2025-06-22 19:59:07.847264 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2025-06-22 19:59:07.847270 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2025-06-22 19:59:07.847276 | orchestrator |
2025-06-22 19:59:07.847283 | orchestrator |
2025-06-22 19:59:07.847289 | orchestrator | TASKS RECAP ********************************************************************
2025-06-22 19:59:07.847295 | orchestrator | Sunday 22 June 2025 19:59:05 +0000 (0:00:00.816) 0:06:19.983 ***********
2025-06-22 19:59:07.847302 | orchestrator | ===============================================================================
2025-06-22 19:59:07.847308 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 8.34s
2025-06-22 19:59:07.847314 | orchestrator | loadbalancer : Start backup proxysql container -------------------------- 7.99s
2025-06-22 19:59:07.847320 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.14s
2025-06-22 19:59:07.847327 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 5.93s
2025-06-22 19:59:07.847332 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 5.90s
2025-06-22 19:59:07.847338 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 5.26s
2025-06-22 19:59:07.847343 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.24s
2025-06-22 19:59:07.847349 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.84s
2025-06-22 19:59:07.847354 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.80s
2025-06-22 19:59:07.847360 | orchestrator | sysctl : Setting sysctl values ------------------------------------------ 4.79s
2025-06-22 19:59:07.847365 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 4.55s
2025-06-22 19:59:07.847371 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 4.31s
2025-06-22 19:59:07.847376 | orchestrator | haproxy-config : Copying over placement haproxy config ------------------ 4.21s
2025-06-22 19:59:07.847382 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.08s
2025-06-22 19:59:07.847387 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 4.05s
2025-06-22 19:59:07.847393 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 4.03s
2025-06-22 19:59:07.847402 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 4.01s 2025-06-22
19:59:07.847408 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 3.97s 2025-06-22 19:59:07.847414 | orchestrator | loadbalancer : Copying checks for services which are enabled ------------ 3.97s 2025-06-22 19:59:07.847419 | orchestrator | loadbalancer : Wait for backup proxysql to start ------------------------ 3.80s 2025-06-22 19:59:10.867434 | orchestrator | 2025-06-22 19:59:10 | INFO  | Task c203ef30-6952-492e-b47a-51e32726175b is in state STARTED 2025-06-22 19:59:10.869850 | orchestrator | 2025-06-22 19:59:10 | INFO  | Task 38d54694-95db-4844-bb72-8967e27ceda1 is in state STARTED 2025-06-22 19:59:10.873661 | orchestrator | 2025-06-22 19:59:10 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:59:10.873699 | orchestrator | 2025-06-22 19:59:10 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:59:13.923272 | orchestrator | 2025-06-22 19:59:13 | INFO  | Task c203ef30-6952-492e-b47a-51e32726175b is in state STARTED 2025-06-22 19:59:13.923373 | orchestrator | 2025-06-22 19:59:13 | INFO  | Task 38d54694-95db-4844-bb72-8967e27ceda1 is in state STARTED 2025-06-22 19:59:13.926127 | orchestrator | 2025-06-22 19:59:13 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:59:13.926158 | orchestrator | 2025-06-22 19:59:13 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:59:16.966717 | orchestrator | 2025-06-22 19:59:16 | INFO  | Task c203ef30-6952-492e-b47a-51e32726175b is in state STARTED 2025-06-22 19:59:16.966817 | orchestrator | 2025-06-22 19:59:16 | INFO  | Task 38d54694-95db-4844-bb72-8967e27ceda1 is in state STARTED 2025-06-22 19:59:16.967065 | orchestrator | 2025-06-22 19:59:16 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:59:16.967087 | orchestrator | 2025-06-22 19:59:16 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:59:19.993517 | orchestrator | 2025-06-22 19:59:19 | INFO  | Task c203ef30-6952-492e-b47a-51e32726175b is in state STARTED 2025-06-22 19:59:19.994196 | orchestrator | 2025-06-22 19:59:19 | INFO  | Task 38d54694-95db-4844-bb72-8967e27ceda1 is in state STARTED 2025-06-22 19:59:19.994903 | orchestrator | 2025-06-22 19:59:19 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:59:19.994925 | orchestrator | 2025-06-22 19:59:19 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:59:23.030328 | orchestrator | 2025-06-22 19:59:23 | INFO  | Task c203ef30-6952-492e-b47a-51e32726175b is in state STARTED 2025-06-22 19:59:23.030440 | orchestrator | 2025-06-22 19:59:23 | INFO  | Task 38d54694-95db-4844-bb72-8967e27ceda1 is in state STARTED 2025-06-22 19:59:23.030652 | orchestrator | 2025-06-22 19:59:23 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:59:23.030751 | orchestrator | 2025-06-22 19:59:23 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:59:26.071690 | orchestrator | 2025-06-22 19:59:26 | INFO  | Task c203ef30-6952-492e-b47a-51e32726175b is in state STARTED 2025-06-22 19:59:26.071897 | orchestrator | 2025-06-22 19:59:26 | INFO  | Task 38d54694-95db-4844-bb72-8967e27ceda1 is in state STARTED 2025-06-22 19:59:26.073673 | orchestrator | 2025-06-22 19:59:26 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:59:26.073790 | orchestrator | 2025-06-22 19:59:26 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:59:29.127168 | orchestrator | 2025-06-22 19:59:29 | INFO  | 
Task c203ef30-6952-492e-b47a-51e32726175b is in state STARTED 2025-06-22 19:59:29.131223 | orchestrator | 2025-06-22 19:59:29 | INFO  | Task 38d54694-95db-4844-bb72-8967e27ceda1 is in state STARTED 2025-06-22 19:59:29.132079 | orchestrator | 2025-06-22 19:59:29 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:59:29.132106 | orchestrator | 2025-06-22 19:59:29 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:59:32.169106 | orchestrator | 2025-06-22 19:59:32 | INFO  | Task c203ef30-6952-492e-b47a-51e32726175b is in state STARTED 2025-06-22 19:59:32.169674 | orchestrator | 2025-06-22 19:59:32 | INFO  | Task 38d54694-95db-4844-bb72-8967e27ceda1 is in state STARTED 2025-06-22 19:59:32.171256 | orchestrator | 2025-06-22 19:59:32 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:59:32.171289 | orchestrator | 2025-06-22 19:59:32 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:59:35.215016 | orchestrator | 2025-06-22 19:59:35 | INFO  | Task c203ef30-6952-492e-b47a-51e32726175b is in state STARTED 2025-06-22 19:59:35.215899 | orchestrator | 2025-06-22 19:59:35 | INFO  | Task 38d54694-95db-4844-bb72-8967e27ceda1 is in state STARTED 2025-06-22 19:59:35.217182 | orchestrator | 2025-06-22 19:59:35 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:59:35.217296 | orchestrator | 2025-06-22 19:59:35 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:59:38.267509 | orchestrator | 2025-06-22 19:59:38 | INFO  | Task c203ef30-6952-492e-b47a-51e32726175b is in state STARTED 2025-06-22 19:59:38.267681 | orchestrator | 2025-06-22 19:59:38 | INFO  | Task 38d54694-95db-4844-bb72-8967e27ceda1 is in state STARTED 2025-06-22 19:59:38.268552 | orchestrator | 2025-06-22 19:59:38 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:59:38.268576 | orchestrator | 2025-06-22 19:59:38 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:59:41.336314 | orchestrator | 2025-06-22 19:59:41 | INFO  | Task c203ef30-6952-492e-b47a-51e32726175b is in state STARTED 2025-06-22 19:59:41.337420 | orchestrator | 2025-06-22 19:59:41 | INFO  | Task 38d54694-95db-4844-bb72-8967e27ceda1 is in state STARTED 2025-06-22 19:59:41.339254 | orchestrator | 2025-06-22 19:59:41 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:59:41.339510 | orchestrator | 2025-06-22 19:59:41 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:59:44.406461 | orchestrator | 2025-06-22 19:59:44 | INFO  | Task c203ef30-6952-492e-b47a-51e32726175b is in state STARTED 2025-06-22 19:59:44.408069 | orchestrator | 2025-06-22 19:59:44 | INFO  | Task 38d54694-95db-4844-bb72-8967e27ceda1 is in state STARTED 2025-06-22 19:59:44.411474 | orchestrator | 2025-06-22 19:59:44 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:59:44.411508 | orchestrator | 2025-06-22 19:59:44 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:59:47.454466 | orchestrator | 2025-06-22 19:59:47 | INFO  | Task c203ef30-6952-492e-b47a-51e32726175b is in state STARTED 2025-06-22 19:59:47.454549 | orchestrator | 2025-06-22 19:59:47 | INFO  | Task 38d54694-95db-4844-bb72-8967e27ceda1 is in state STARTED 2025-06-22 19:59:47.456376 | orchestrator | 2025-06-22 19:59:47 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:59:47.456573 | orchestrator | 2025-06-22 19:59:47 | INFO  | Wait 1 second(s) until the 
next check 2025-06-22 19:59:50.496803 | orchestrator | 2025-06-22 19:59:50 | INFO  | Task c203ef30-6952-492e-b47a-51e32726175b is in state STARTED 2025-06-22 19:59:50.498134 | orchestrator | 2025-06-22 19:59:50 | INFO  | Task 38d54694-95db-4844-bb72-8967e27ceda1 is in state STARTED 2025-06-22 19:59:50.499892 | orchestrator | 2025-06-22 19:59:50 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:59:50.499921 | orchestrator | 2025-06-22 19:59:50 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:59:53.543716 | orchestrator | 2025-06-22 19:59:53 | INFO  | Task c203ef30-6952-492e-b47a-51e32726175b is in state STARTED 2025-06-22 19:59:53.546914 | orchestrator | 2025-06-22 19:59:53 | INFO  | Task 38d54694-95db-4844-bb72-8967e27ceda1 is in state STARTED 2025-06-22 19:59:53.548724 | orchestrator | 2025-06-22 19:59:53 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:59:53.548768 | orchestrator | 2025-06-22 19:59:53 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:59:56.586582 | orchestrator | 2025-06-22 19:59:56 | INFO  | Task c203ef30-6952-492e-b47a-51e32726175b is in state STARTED 2025-06-22 19:59:56.588091 | orchestrator | 2025-06-22 19:59:56 | INFO  | Task 38d54694-95db-4844-bb72-8967e27ceda1 is in state STARTED 2025-06-22 19:59:56.589544 | orchestrator | 2025-06-22 19:59:56 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:59:56.589818 | orchestrator | 2025-06-22 19:59:56 | INFO  | Wait 1 second(s) until the next check 2025-06-22 19:59:59.631573 | orchestrator | 2025-06-22 19:59:59 | INFO  | Task c203ef30-6952-492e-b47a-51e32726175b is in state STARTED 2025-06-22 19:59:59.632570 | orchestrator | 2025-06-22 19:59:59 | INFO  | Task 38d54694-95db-4844-bb72-8967e27ceda1 is in state STARTED 2025-06-22 19:59:59.634153 | orchestrator | 2025-06-22 19:59:59 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 19:59:59.634206 | orchestrator | 2025-06-22 19:59:59 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:00:02.679487 | orchestrator | 2025-06-22 20:00:02 | INFO  | Task c203ef30-6952-492e-b47a-51e32726175b is in state STARTED 2025-06-22 20:00:02.683702 | orchestrator | 2025-06-22 20:00:02 | INFO  | Task 38d54694-95db-4844-bb72-8967e27ceda1 is in state STARTED 2025-06-22 20:00:02.685362 | orchestrator | 2025-06-22 20:00:02 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 20:00:02.686409 | orchestrator | 2025-06-22 20:00:02 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:00:05.728798 | orchestrator | 2025-06-22 20:00:05 | INFO  | Task c203ef30-6952-492e-b47a-51e32726175b is in state STARTED 2025-06-22 20:00:05.731402 | orchestrator | 2025-06-22 20:00:05 | INFO  | Task 38d54694-95db-4844-bb72-8967e27ceda1 is in state STARTED 2025-06-22 20:00:05.733927 | orchestrator | 2025-06-22 20:00:05 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 20:00:05.733955 | orchestrator | 2025-06-22 20:00:05 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:00:08.778994 | orchestrator | 2025-06-22 20:00:08 | INFO  | Task c203ef30-6952-492e-b47a-51e32726175b is in state STARTED 2025-06-22 20:00:08.781412 | orchestrator | 2025-06-22 20:00:08 | INFO  | Task 38d54694-95db-4844-bb72-8967e27ceda1 is in state STARTED 2025-06-22 20:00:08.783398 | orchestrator | 2025-06-22 20:00:08 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 
20:00:08.783770 | orchestrator | 2025-06-22 20:00:08 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:00:11.832377 | orchestrator | 2025-06-22 20:00:11 | INFO  | Task c203ef30-6952-492e-b47a-51e32726175b is in state STARTED 2025-06-22 20:00:11.834845 | orchestrator | 2025-06-22 20:00:11 | INFO  | Task 38d54694-95db-4844-bb72-8967e27ceda1 is in state STARTED 2025-06-22 20:00:11.837049 | orchestrator | 2025-06-22 20:00:11 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 20:00:11.837419 | orchestrator | 2025-06-22 20:00:11 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:00:14.882555 | orchestrator | 2025-06-22 20:00:14 | INFO  | Task c203ef30-6952-492e-b47a-51e32726175b is in state STARTED 2025-06-22 20:00:14.884047 | orchestrator | 2025-06-22 20:00:14 | INFO  | Task 38d54694-95db-4844-bb72-8967e27ceda1 is in state STARTED 2025-06-22 20:00:14.885817 | orchestrator | 2025-06-22 20:00:14 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 20:00:14.885842 | orchestrator | 2025-06-22 20:00:14 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:00:17.946593 | orchestrator | 2025-06-22 20:00:17 | INFO  | Task c203ef30-6952-492e-b47a-51e32726175b is in state STARTED 2025-06-22 20:00:17.948293 | orchestrator | 2025-06-22 20:00:17 | INFO  | Task 38d54694-95db-4844-bb72-8967e27ceda1 is in state STARTED 2025-06-22 20:00:17.950628 | orchestrator | 2025-06-22 20:00:17 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 20:00:17.950991 | orchestrator | 2025-06-22 20:00:17 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:00:20.997498 | orchestrator | 2025-06-22 20:00:20 | INFO  | Task c203ef30-6952-492e-b47a-51e32726175b is in state STARTED 2025-06-22 20:00:20.998951 | orchestrator | 2025-06-22 20:00:20 | INFO  | Task 38d54694-95db-4844-bb72-8967e27ceda1 is in state STARTED 2025-06-22 20:00:21.000680 | orchestrator | 2025-06-22 20:00:20 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 20:00:21.000722 | orchestrator | 2025-06-22 20:00:20 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:00:24.048406 | orchestrator | 2025-06-22 20:00:24 | INFO  | Task c203ef30-6952-492e-b47a-51e32726175b is in state STARTED 2025-06-22 20:00:24.048765 | orchestrator | 2025-06-22 20:00:24 | INFO  | Task 38d54694-95db-4844-bb72-8967e27ceda1 is in state STARTED 2025-06-22 20:00:24.050269 | orchestrator | 2025-06-22 20:00:24 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 20:00:24.050300 | orchestrator | 2025-06-22 20:00:24 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:00:27.096129 | orchestrator | 2025-06-22 20:00:27 | INFO  | Task c203ef30-6952-492e-b47a-51e32726175b is in state STARTED 2025-06-22 20:00:27.096428 | orchestrator | 2025-06-22 20:00:27 | INFO  | Task 38d54694-95db-4844-bb72-8967e27ceda1 is in state STARTED 2025-06-22 20:00:27.100595 | orchestrator | 2025-06-22 20:00:27 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 20:00:27.100631 | orchestrator | 2025-06-22 20:00:27 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:00:30.152059 | orchestrator | 2025-06-22 20:00:30 | INFO  | Task c203ef30-6952-492e-b47a-51e32726175b is in state STARTED 2025-06-22 20:00:30.152565 | orchestrator | 2025-06-22 20:00:30 | INFO  | Task 38d54694-95db-4844-bb72-8967e27ceda1 is in state STARTED 2025-06-22 20:00:30.154954 | orchestrator | 2025-06-22 
20:00:30 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 20:00:30.154995 | orchestrator | 2025-06-22 20:00:30 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:00:33.211553 | orchestrator | 2025-06-22 20:00:33 | INFO  | Task c203ef30-6952-492e-b47a-51e32726175b is in state STARTED 2025-06-22 20:00:33.213140 | orchestrator | 2025-06-22 20:00:33 | INFO  | Task 38d54694-95db-4844-bb72-8967e27ceda1 is in state STARTED 2025-06-22 20:00:33.213991 | orchestrator | 2025-06-22 20:00:33 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 20:00:33.214292 | orchestrator | 2025-06-22 20:00:33 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:00:36.277152 | orchestrator | 2025-06-22 20:00:36 | INFO  | Task c203ef30-6952-492e-b47a-51e32726175b is in state STARTED 2025-06-22 20:00:36.280612 | orchestrator | 2025-06-22 20:00:36 | INFO  | Task 38d54694-95db-4844-bb72-8967e27ceda1 is in state STARTED 2025-06-22 20:00:36.282867 | orchestrator | 2025-06-22 20:00:36 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 20:00:36.282917 | orchestrator | 2025-06-22 20:00:36 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:00:39.332117 | orchestrator | 2025-06-22 20:00:39 | INFO  | Task c203ef30-6952-492e-b47a-51e32726175b is in state STARTED 2025-06-22 20:00:39.333897 | orchestrator | 2025-06-22 20:00:39 | INFO  | Task 38d54694-95db-4844-bb72-8967e27ceda1 is in state STARTED 2025-06-22 20:00:39.336654 | orchestrator | 2025-06-22 20:00:39 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 20:00:39.336680 | orchestrator | 2025-06-22 20:00:39 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:00:42.384760 | orchestrator | 2025-06-22 20:00:42 | INFO  | Task c203ef30-6952-492e-b47a-51e32726175b is in state STARTED 2025-06-22 20:00:42.386833 | orchestrator | 2025-06-22 20:00:42 | INFO  | Task 38d54694-95db-4844-bb72-8967e27ceda1 is in state STARTED 2025-06-22 20:00:42.387672 | orchestrator | 2025-06-22 20:00:42 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 20:00:42.387761 | orchestrator | 2025-06-22 20:00:42 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:00:45.438679 | orchestrator | 2025-06-22 20:00:45 | INFO  | Task c203ef30-6952-492e-b47a-51e32726175b is in state STARTED 2025-06-22 20:00:45.440600 | orchestrator | 2025-06-22 20:00:45 | INFO  | Task 38d54694-95db-4844-bb72-8967e27ceda1 is in state STARTED 2025-06-22 20:00:45.442795 | orchestrator | 2025-06-22 20:00:45 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 20:00:45.443087 | orchestrator | 2025-06-22 20:00:45 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:00:48.483491 | orchestrator | 2025-06-22 20:00:48 | INFO  | Task c203ef30-6952-492e-b47a-51e32726175b is in state STARTED 2025-06-22 20:00:48.485515 | orchestrator | 2025-06-22 20:00:48 | INFO  | Task 38d54694-95db-4844-bb72-8967e27ceda1 is in state STARTED 2025-06-22 20:00:48.487668 | orchestrator | 2025-06-22 20:00:48 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 20:00:48.487695 | orchestrator | 2025-06-22 20:00:48 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:00:51.534334 | orchestrator | 2025-06-22 20:00:51 | INFO  | Task c203ef30-6952-492e-b47a-51e32726175b is in state STARTED 2025-06-22 20:00:51.536355 | orchestrator | 2025-06-22 20:00:51 | INFO  | Task 
38d54694-95db-4844-bb72-8967e27ceda1 is in state STARTED 2025-06-22 20:00:51.539127 | orchestrator | 2025-06-22 20:00:51 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 20:00:51.539247 | orchestrator | 2025-06-22 20:00:51 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:00:54.589333 | orchestrator | 2025-06-22 20:00:54 | INFO  | Task c203ef30-6952-492e-b47a-51e32726175b is in state STARTED 2025-06-22 20:00:54.590362 | orchestrator | 2025-06-22 20:00:54 | INFO  | Task 38d54694-95db-4844-bb72-8967e27ceda1 is in state STARTED 2025-06-22 20:00:54.592248 | orchestrator | 2025-06-22 20:00:54 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 20:00:54.592288 | orchestrator | 2025-06-22 20:00:54 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:00:57.637716 | orchestrator | 2025-06-22 20:00:57 | INFO  | Task c203ef30-6952-492e-b47a-51e32726175b is in state STARTED 2025-06-22 20:00:57.639802 | orchestrator | 2025-06-22 20:00:57 | INFO  | Task 38d54694-95db-4844-bb72-8967e27ceda1 is in state STARTED 2025-06-22 20:00:57.641717 | orchestrator | 2025-06-22 20:00:57 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 20:00:57.641845 | orchestrator | 2025-06-22 20:00:57 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:01:00.679972 | orchestrator | 2025-06-22 20:01:00 | INFO  | Task c203ef30-6952-492e-b47a-51e32726175b is in state STARTED 2025-06-22 20:01:00.682422 | orchestrator | 2025-06-22 20:01:00 | INFO  | Task 38d54694-95db-4844-bb72-8967e27ceda1 is in state STARTED 2025-06-22 20:01:00.685616 | orchestrator | 2025-06-22 20:01:00 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 20:01:00.685799 | orchestrator | 2025-06-22 20:01:00 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:01:03.735399 | orchestrator | 2025-06-22 20:01:03 | INFO  | Task c203ef30-6952-492e-b47a-51e32726175b is in state STARTED 2025-06-22 20:01:03.737299 | orchestrator | 2025-06-22 20:01:03 | INFO  | Task 38d54694-95db-4844-bb72-8967e27ceda1 is in state STARTED 2025-06-22 20:01:03.738558 | orchestrator | 2025-06-22 20:01:03 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 20:01:03.738741 | orchestrator | 2025-06-22 20:01:03 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:01:06.785609 | orchestrator | 2025-06-22 20:01:06 | INFO  | Task c203ef30-6952-492e-b47a-51e32726175b is in state STARTED 2025-06-22 20:01:06.787448 | orchestrator | 2025-06-22 20:01:06 | INFO  | Task 38d54694-95db-4844-bb72-8967e27ceda1 is in state STARTED 2025-06-22 20:01:06.789441 | orchestrator | 2025-06-22 20:01:06 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 20:01:06.789517 | orchestrator | 2025-06-22 20:01:06 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:01:09.845891 | orchestrator | 2025-06-22 20:01:09 | INFO  | Task c203ef30-6952-492e-b47a-51e32726175b is in state STARTED 2025-06-22 20:01:09.846306 | orchestrator | 2025-06-22 20:01:09 | INFO  | Task 38d54694-95db-4844-bb72-8967e27ceda1 is in state STARTED 2025-06-22 20:01:09.848415 | orchestrator | 2025-06-22 20:01:09 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 20:01:09.848631 | orchestrator | 2025-06-22 20:01:09 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:01:12.903426 | orchestrator | 2025-06-22 20:01:12 | INFO  | Task c203ef30-6952-492e-b47a-51e32726175b is in state 
STARTED 2025-06-22 20:01:12.905652 | orchestrator | 2025-06-22 20:01:12 | INFO  | Task 38d54694-95db-4844-bb72-8967e27ceda1 is in state STARTED 2025-06-22 20:01:12.909010 | orchestrator | 2025-06-22 20:01:12 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 20:01:12.909040 | orchestrator | 2025-06-22 20:01:12 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:01:15.958328 | orchestrator | 2025-06-22 20:01:15 | INFO  | Task c203ef30-6952-492e-b47a-51e32726175b is in state STARTED 2025-06-22 20:01:15.960851 | orchestrator | 2025-06-22 20:01:15 | INFO  | Task 38d54694-95db-4844-bb72-8967e27ceda1 is in state STARTED 2025-06-22 20:01:15.963234 | orchestrator | 2025-06-22 20:01:15 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 20:01:15.963276 | orchestrator | 2025-06-22 20:01:15 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:01:19.022678 | orchestrator | 2025-06-22 20:01:19 | INFO  | Task c203ef30-6952-492e-b47a-51e32726175b is in state STARTED 2025-06-22 20:01:19.029061 | orchestrator | 2025-06-22 20:01:19 | INFO  | Task 38d54694-95db-4844-bb72-8967e27ceda1 is in state STARTED 2025-06-22 20:01:19.029104 | orchestrator | 2025-06-22 20:01:19 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 20:01:19.029118 | orchestrator | 2025-06-22 20:01:19 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:01:22.064124 | orchestrator | 2025-06-22 20:01:22 | INFO  | Task c203ef30-6952-492e-b47a-51e32726175b is in state STARTED 2025-06-22 20:01:22.065871 | orchestrator | 2025-06-22 20:01:22 | INFO  | Task 38d54694-95db-4844-bb72-8967e27ceda1 is in state STARTED 2025-06-22 20:01:22.067912 | orchestrator | 2025-06-22 20:01:22 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 20:01:22.068135 | orchestrator | 2025-06-22 20:01:22 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:01:25.116057 | orchestrator | 2025-06-22 20:01:25 | INFO  | Task c203ef30-6952-492e-b47a-51e32726175b is in state STARTED 2025-06-22 20:01:25.116280 | orchestrator | 2025-06-22 20:01:25 | INFO  | Task 38d54694-95db-4844-bb72-8967e27ceda1 is in state STARTED 2025-06-22 20:01:25.117270 | orchestrator | 2025-06-22 20:01:25 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 20:01:25.117285 | orchestrator | 2025-06-22 20:01:25 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:01:28.175521 | orchestrator | 2025-06-22 20:01:28 | INFO  | Task c203ef30-6952-492e-b47a-51e32726175b is in state STARTED 2025-06-22 20:01:28.176832 | orchestrator | 2025-06-22 20:01:28 | INFO  | Task 38d54694-95db-4844-bb72-8967e27ceda1 is in state STARTED 2025-06-22 20:01:28.180504 | orchestrator | 2025-06-22 20:01:28 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state STARTED 2025-06-22 20:01:28.180562 | orchestrator | 2025-06-22 20:01:28 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:01:31.234208 | orchestrator | 2025-06-22 20:01:31 | INFO  | Task c203ef30-6952-492e-b47a-51e32726175b is in state STARTED 2025-06-22 20:01:31.235244 | orchestrator | 2025-06-22 20:01:31 | INFO  | Task 38d54694-95db-4844-bb72-8967e27ceda1 is in state STARTED 2025-06-22 20:01:31.240992 | orchestrator | 2025-06-22 20:01:31 | INFO  | Task 25109fe4-dc3c-41f8-a985-abe16a4da884 is in state SUCCESS 2025-06-22 20:01:31.242072 | orchestrator | 2025-06-22 20:01:31 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:01:31.244036 | orchestrator 
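[editor's note] The repeated status lines above show the deployment tooling polling three task IDs and reporting "Wait 1 second(s) until the next check" between rounds until each task leaves the STARTED state; at 20:01:31 the first of them (25109fe4-dc3c-41f8-a985-abe16a4da884) reports SUCCESS while the other two are still running. The following is a minimal Python sketch of such a wait loop, assuming a hypothetical get_task_state() helper; the actual OSISM wait logic is not shown in this log and may differ.

    # Sketch only: get_task_state() is a hypothetical callable returning a
    # Celery-style state string ("STARTED", "SUCCESS", ...) for a task ID.
    import time
    from typing import Callable, Iterable

    TERMINAL_STATES = {"SUCCESS", "FAILURE", "REVOKED"}

    def wait_for_tasks(task_ids: Iterable[str],
                       get_task_state: Callable[[str], str],
                       interval: float = 1.0) -> dict:
        """Poll each task until every one has reached a terminal state."""
        pending = set(task_ids)
        results = {}
        while pending:
            for task_id in sorted(pending):
                state = get_task_state(task_id)
                print(f"Task {task_id} is in state {state}")
                if state in TERMINAL_STATES:
                    results[task_id] = state
            pending -= set(results)
            if pending:
                print(f"Wait {int(interval)} second(s) until the next check")
                time.sleep(interval)
        return results

The fixed interval and the per-task status line per round match the pattern seen in the log; a fuller implementation would typically also handle FAILURE states and an overall timeout. [end editor's note]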
| 2025-06-22 20:01:31.244071 | orchestrator | 2025-06-22 20:01:31.244084 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2025-06-22 20:01:31.244096 | orchestrator | 2025-06-22 20:01:31.244108 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-06-22 20:01:31.244120 | orchestrator | Sunday 22 June 2025 19:49:50 +0000 (0:00:01.007) 0:00:01.007 *********** 2025-06-22 20:01:31.244133 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:01:31.244176 | orchestrator | 2025-06-22 20:01:31.244194 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2025-06-22 20:01:31.244210 | orchestrator | Sunday 22 June 2025 19:49:52 +0000 (0:00:01.705) 0:00:02.713 *********** 2025-06-22 20:01:31.244228 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:31.244246 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:31.244258 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:31.244268 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:01:31.244323 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:01:31.244336 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:01:31.244348 | orchestrator | 2025-06-22 20:01:31.244359 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2025-06-22 20:01:31.244398 | orchestrator | Sunday 22 June 2025 19:49:54 +0000 (0:00:02.490) 0:00:05.203 *********** 2025-06-22 20:01:31.244409 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:31.244420 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:31.244431 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:31.244441 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:01:31.244453 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:01:31.244467 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:01:31.244707 | orchestrator | 2025-06-22 20:01:31.244727 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2025-06-22 20:01:31.244740 | orchestrator | Sunday 22 June 2025 19:49:55 +0000 (0:00:01.148) 0:00:06.352 *********** 2025-06-22 20:01:31.244752 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:31.244765 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:31.244777 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:31.244789 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:01:31.244801 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:01:31.244812 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:01:31.244825 | orchestrator | 2025-06-22 20:01:31.244838 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2025-06-22 20:01:31.244850 | orchestrator | Sunday 22 June 2025 19:49:57 +0000 (0:00:01.551) 0:00:07.903 *********** 2025-06-22 20:01:31.244863 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:31.244899 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:31.244913 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:31.244925 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:01:31.244937 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:01:31.244949 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:01:31.244961 | orchestrator | 2025-06-22 20:01:31.244973 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2025-06-22 20:01:31.244986 | 
orchestrator | Sunday 22 June 2025 19:49:58 +0000 (0:00:01.228) 0:00:09.131 *********** 2025-06-22 20:01:31.244999 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:31.245010 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:31.245021 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:31.245031 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:01:31.245042 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:01:31.245053 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:01:31.245063 | orchestrator | 2025-06-22 20:01:31.245074 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2025-06-22 20:01:31.245085 | orchestrator | Sunday 22 June 2025 19:49:59 +0000 (0:00:00.826) 0:00:09.958 *********** 2025-06-22 20:01:31.245097 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:31.245107 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:31.245118 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:31.245129 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:01:31.245170 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:01:31.245184 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:01:31.245195 | orchestrator | 2025-06-22 20:01:31.245206 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2025-06-22 20:01:31.245217 | orchestrator | Sunday 22 June 2025 19:50:00 +0000 (0:00:01.349) 0:00:11.308 *********** 2025-06-22 20:01:31.245228 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.245240 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.245251 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.245262 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.245272 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.245283 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.245294 | orchestrator | 2025-06-22 20:01:31.245305 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2025-06-22 20:01:31.245316 | orchestrator | Sunday 22 June 2025 19:50:01 +0000 (0:00:01.123) 0:00:12.431 *********** 2025-06-22 20:01:31.245327 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:31.245337 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:31.245348 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:31.245359 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:01:31.245380 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:01:31.245391 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:01:31.245401 | orchestrator | 2025-06-22 20:01:31.245428 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2025-06-22 20:01:31.245440 | orchestrator | Sunday 22 June 2025 19:50:03 +0000 (0:00:01.549) 0:00:13.981 *********** 2025-06-22 20:01:31.245450 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-06-22 20:01:31.245462 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-22 20:01:31.245481 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-22 20:01:31.245760 | orchestrator | 2025-06-22 20:01:31.245775 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2025-06-22 20:01:31.245786 | orchestrator | Sunday 22 June 2025 19:50:04 +0000 (0:00:01.118) 0:00:15.100 *********** 2025-06-22 20:01:31.245797 | orchestrator | ok: [testbed-node-3] 
2025-06-22 20:01:31.245808 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:31.245818 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:31.245829 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:01:31.245840 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:01:31.245850 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:01:31.245861 | orchestrator | 2025-06-22 20:01:31.245886 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2025-06-22 20:01:31.245898 | orchestrator | Sunday 22 June 2025 19:50:05 +0000 (0:00:01.226) 0:00:16.326 *********** 2025-06-22 20:01:31.245909 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-06-22 20:01:31.245920 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-22 20:01:31.245931 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-22 20:01:31.245941 | orchestrator | 2025-06-22 20:01:31.245952 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2025-06-22 20:01:31.245963 | orchestrator | Sunday 22 June 2025 19:50:09 +0000 (0:00:03.965) 0:00:20.291 *********** 2025-06-22 20:01:31.245974 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-06-22 20:01:31.245985 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-06-22 20:01:31.245996 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-06-22 20:01:31.246104 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.246177 | orchestrator | 2025-06-22 20:01:31.246193 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2025-06-22 20:01:31.246204 | orchestrator | Sunday 22 June 2025 19:50:10 +0000 (0:00:01.273) 0:00:21.565 *********** 2025-06-22 20:01:31.246218 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-06-22 20:01:31.246233 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-06-22 20:01:31.246292 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-06-22 20:01:31.246306 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.246317 | orchestrator | 2025-06-22 20:01:31.246328 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2025-06-22 20:01:31.246339 | orchestrator | Sunday 22 June 2025 19:50:12 +0000 (0:00:01.577) 0:00:23.143 *********** 2025-06-22 20:01:31.246352 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 
'item'})  2025-06-22 20:01:31.246378 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:31.246390 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:31.246401 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.246413 | orchestrator | 2025-06-22 20:01:31.246423 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2025-06-22 20:01:31.246442 | orchestrator | Sunday 22 June 2025 19:50:13 +0000 (0:00:00.716) 0:00:23.860 *********** 2025-06-22 20:01:31.246470 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-06-22 19:50:06.385499', 'end': '2025-06-22 19:50:06.668902', 'delta': '0:00:00.283403', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-06-22 20:01:31.246494 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-06-22 19:50:07.691519', 'end': '2025-06-22 19:50:08.085320', 'delta': '0:00:00.393801', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-06-22 20:01:31.246513 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-06-22 19:50:08.945429', 'end': '2025-06-22 19:50:09.341031', 'delta': '0:00:00.395602', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 
'testbed-node-2', 'ansible_loop_var': 'item'})  2025-06-22 20:01:31.246532 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.246550 | orchestrator | 2025-06-22 20:01:31.246568 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2025-06-22 20:01:31.246590 | orchestrator | Sunday 22 June 2025 19:50:13 +0000 (0:00:00.302) 0:00:24.162 *********** 2025-06-22 20:01:31.246618 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:31.246630 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:31.246787 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:31.246798 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:01:31.246809 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:01:31.246820 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:01:31.246830 | orchestrator | 2025-06-22 20:01:31.246841 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2025-06-22 20:01:31.246852 | orchestrator | Sunday 22 June 2025 19:50:15 +0000 (0:00:01.916) 0:00:26.078 *********** 2025-06-22 20:01:31.246863 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-06-22 20:01:31.246874 | orchestrator | 2025-06-22 20:01:31.246885 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2025-06-22 20:01:31.246920 | orchestrator | Sunday 22 June 2025 19:50:16 +0000 (0:00:00.970) 0:00:27.049 *********** 2025-06-22 20:01:31.246932 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.246942 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.246953 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.246964 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.246974 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.246985 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.246996 | orchestrator | 2025-06-22 20:01:31.247007 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2025-06-22 20:01:31.247042 | orchestrator | Sunday 22 June 2025 19:50:17 +0000 (0:00:01.546) 0:00:28.595 *********** 2025-06-22 20:01:31.247053 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.247064 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.247075 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.247086 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.247120 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.247132 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.247204 | orchestrator | 2025-06-22 20:01:31.247271 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-06-22 20:01:31.247284 | orchestrator | Sunday 22 June 2025 19:50:19 +0000 (0:00:01.239) 0:00:29.834 *********** 2025-06-22 20:01:31.247295 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.247306 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.247317 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.247328 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.247339 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.247350 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.247360 | orchestrator | 2025-06-22 20:01:31.247379 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2025-06-22 20:01:31.247390 | orchestrator | Sunday 22 June 2025 
19:50:20 +0000 (0:00:00.796) 0:00:30.631 *********** 2025-06-22 20:01:31.247401 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.247412 | orchestrator | 2025-06-22 20:01:31.247423 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2025-06-22 20:01:31.247434 | orchestrator | Sunday 22 June 2025 19:50:20 +0000 (0:00:00.218) 0:00:30.850 *********** 2025-06-22 20:01:31.247445 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.247456 | orchestrator | 2025-06-22 20:01:31.247472 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-06-22 20:01:31.247490 | orchestrator | Sunday 22 June 2025 19:50:20 +0000 (0:00:00.355) 0:00:31.206 *********** 2025-06-22 20:01:31.247509 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.247527 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.247635 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.247645 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.247655 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.247664 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.247674 | orchestrator | 2025-06-22 20:01:31.247692 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2025-06-22 20:01:31.247711 | orchestrator | Sunday 22 June 2025 19:50:21 +0000 (0:00:00.671) 0:00:31.877 *********** 2025-06-22 20:01:31.247721 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.247731 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.247740 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.247750 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.247759 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.247769 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.247779 | orchestrator | 2025-06-22 20:01:31.247789 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2025-06-22 20:01:31.247798 | orchestrator | Sunday 22 June 2025 19:50:22 +0000 (0:00:01.169) 0:00:33.047 *********** 2025-06-22 20:01:31.247808 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.247818 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.247827 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.247837 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.247846 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.247856 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.247865 | orchestrator | 2025-06-22 20:01:31.247875 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2025-06-22 20:01:31.247885 | orchestrator | Sunday 22 June 2025 19:50:23 +0000 (0:00:00.850) 0:00:33.897 *********** 2025-06-22 20:01:31.247894 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.247904 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.247913 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.247923 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.247932 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.247941 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.247951 | orchestrator | 2025-06-22 20:01:31.247961 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2025-06-22 20:01:31.247970 | orchestrator | Sunday 22 
June 2025 19:50:24 +0000 (0:00:01.129) 0:00:35.027 *********** 2025-06-22 20:01:31.247980 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.247990 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.247999 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.248009 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.248018 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.248028 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.248037 | orchestrator | 2025-06-22 20:01:31.248047 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2025-06-22 20:01:31.248057 | orchestrator | Sunday 22 June 2025 19:50:25 +0000 (0:00:00.889) 0:00:35.916 *********** 2025-06-22 20:01:31.248067 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.248076 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.248086 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.248095 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.248105 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.248114 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.248124 | orchestrator | 2025-06-22 20:01:31.248134 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-06-22 20:01:31.248172 | orchestrator | Sunday 22 June 2025 19:50:26 +0000 (0:00:01.082) 0:00:36.999 *********** 2025-06-22 20:01:31.248190 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.248207 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.248242 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.248252 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.248262 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.248272 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.248281 | orchestrator | 2025-06-22 20:01:31.248291 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-06-22 20:01:31.248301 | orchestrator | Sunday 22 June 2025 19:50:27 +0000 (0:00:00.930) 0:00:37.930 *********** 2025-06-22 20:01:31.248320 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ffee4eed--4396--59ea--b922--2a73e3bf4ca0-osd--block--ffee4eed--4396--59ea--b922--2a73e3bf4ca0', 'dm-uuid-LVM-q2pqJiTJaKBtTVURKf2CkZXsa09xcwZHqRoverYWgifQ0qT3WozkxbpGh0BMI5p0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-22 20:01:31.248340 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a67f9737--0c9f--5177--b2d5--f4c811291d8a-osd--block--a67f9737--0c9f--5177--b2d5--f4c811291d8a', 'dm-uuid-LVM-x3TzDx78V1LNDIMuCE2wpqhclwbaOD1boL3DxJZRH4sANkbSfFs0yaFLMzXYB0Md'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-22 20:01:31.248369 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:01:31.248381 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:01:31.248392 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:01:31.248402 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:01:31.248412 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:01:31.248422 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:01:31.248432 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:01:31.248448 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:01:31.248481 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2156dda8-7e6f-4624-a0c0-e6117c9c49b9', 'scsi-SQEMU_QEMU_HARDDISK_2156dda8-7e6f-4624-a0c0-e6117c9c49b9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2156dda8-7e6f-4624-a0c0-e6117c9c49b9-part1', 'scsi-SQEMU_QEMU_HARDDISK_2156dda8-7e6f-4624-a0c0-e6117c9c49b9-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2156dda8-7e6f-4624-a0c0-e6117c9c49b9-part14', 'scsi-SQEMU_QEMU_HARDDISK_2156dda8-7e6f-4624-a0c0-e6117c9c49b9-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2156dda8-7e6f-4624-a0c0-e6117c9c49b9-part15', 'scsi-SQEMU_QEMU_HARDDISK_2156dda8-7e6f-4624-a0c0-e6117c9c49b9-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2156dda8-7e6f-4624-a0c0-e6117c9c49b9-part16', 'scsi-SQEMU_QEMU_HARDDISK_2156dda8-7e6f-4624-a0c0-e6117c9c49b9-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 20:01:31.248502 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--ffee4eed--4396--59ea--b922--2a73e3bf4ca0-osd--block--ffee4eed--4396--59ea--b922--2a73e3bf4ca0'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-IuxseD-Xfw3-r21F-YeXt-Y3UB-qXCE-FN37f3', 'scsi-0QEMU_QEMU_HARDDISK_78e15a4e-0b6b-4de0-bd2a-417fc55af8a3', 'scsi-SQEMU_QEMU_HARDDISK_78e15a4e-0b6b-4de0-bd2a-417fc55af8a3'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 20:01:31.248519 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--420ac1c2--ff56--5c56--8dd6--abe068aa03ad-osd--block--420ac1c2--ff56--5c56--8dd6--abe068aa03ad', 'dm-uuid-LVM-Y0kAmfU0NiERO7ebir8pzkuzt2pW1JrMxsV0ytwr7zHo2RM0YYQz16v8e9RQDgtI'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-22 20:01:31.248546 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--a67f9737--0c9f--5177--b2d5--f4c811291d8a-osd--block--a67f9737--0c9f--5177--b2d5--f4c811291d8a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-GsgXLL-fkdW-UDso-mQAJ-mxEf-DjtQ-wplTn3', 'scsi-0QEMU_QEMU_HARDDISK_0d04e2ba-3abe-44e6-a0ea-4a597e46ae81', 'scsi-SQEMU_QEMU_HARDDISK_0d04e2ba-3abe-44e6-a0ea-4a597e46ae81'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 20:01:31.248658 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--21b37dc5--48e7--5a6c--9835--121dab35d047-osd--block--21b37dc5--48e7--5a6c--9835--121dab35d047', 'dm-uuid-LVM-KGbfhTUxEBsRwgzYnyRGH4H2jMRLdkjPg0mCD0S4SqmIqU3H31pqPvAoyu6KWzoa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-22 20:01:31.248680 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_67ec265c-9b93-46b0-85f4-348a71cc884e', 'scsi-SQEMU_QEMU_HARDDISK_67ec265c-9b93-46b0-85f4-348a71cc884e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 20:01:31.248692 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:01:31.248703 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-22-19-10-40-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 20:01:31.248713 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:01:31.248723 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:01:31.248747 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:01:31.248757 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:01:31.248772 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 
'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:01:31.248782 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:01:31.248792 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.248809 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:01:31.248820 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_81f2e499-4268-4bd5-a5ff-46d49ba2fab9', 'scsi-SQEMU_QEMU_HARDDISK_81f2e499-4268-4bd5-a5ff-46d49ba2fab9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_81f2e499-4268-4bd5-a5ff-46d49ba2fab9-part1', 'scsi-SQEMU_QEMU_HARDDISK_81f2e499-4268-4bd5-a5ff-46d49ba2fab9-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_81f2e499-4268-4bd5-a5ff-46d49ba2fab9-part14', 'scsi-SQEMU_QEMU_HARDDISK_81f2e499-4268-4bd5-a5ff-46d49ba2fab9-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_81f2e499-4268-4bd5-a5ff-46d49ba2fab9-part15', 'scsi-SQEMU_QEMU_HARDDISK_81f2e499-4268-4bd5-a5ff-46d49ba2fab9-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_81f2e499-4268-4bd5-a5ff-46d49ba2fab9-part16', 'scsi-SQEMU_QEMU_HARDDISK_81f2e499-4268-4bd5-a5ff-46d49ba2fab9-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 20:01:31.248838 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': 
['ceph--420ac1c2--ff56--5c56--8dd6--abe068aa03ad-osd--block--420ac1c2--ff56--5c56--8dd6--abe068aa03ad'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-sbJf2j-kvL6-D9Rj-g0g6-1ewV-GQ1i-ToEso7', 'scsi-0QEMU_QEMU_HARDDISK_1702d6d9-f6d5-467e-9c44-3c93c3ac891d', 'scsi-SQEMU_QEMU_HARDDISK_1702d6d9-f6d5-467e-9c44-3c93c3ac891d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 20:01:31.248853 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--21b37dc5--48e7--5a6c--9835--121dab35d047-osd--block--21b37dc5--48e7--5a6c--9835--121dab35d047'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-npdezT-nudS-n4QM-0RVM-No3f-37Al-1TAEZN', 'scsi-0QEMU_QEMU_HARDDISK_a49b6e77-acd0-4f36-887b-4e4ec75cdfa4', 'scsi-SQEMU_QEMU_HARDDISK_a49b6e77-acd0-4f36-887b-4e4ec75cdfa4'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 20:01:31.248871 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bbdef6ad-891d-4656-ac9b-bc24d19b561e', 'scsi-SQEMU_QEMU_HARDDISK_bbdef6ad-891d-4656-ac9b-bc24d19b561e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 20:01:31.248881 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-22-19-10-42-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 20:01:31.248892 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3108d6cc--64da--58c4--8e22--262ec3caa421-osd--block--3108d6cc--64da--58c4--8e22--262ec3caa421', 'dm-uuid-LVM-e5DdA2Z5zV4nVqLZXYU1m9FdEuPJHovfpODsitfcXz282rKkjlJ6PtJholW3GnT0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-22 20:01:31.248908 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 
'links': {'ids': ['dm-name-ceph--39fb6ae0--c3e6--59b9--8b54--9251bb7c5136-osd--block--39fb6ae0--c3e6--59b9--8b54--9251bb7c5136', 'dm-uuid-LVM-QvjYzXAhLLqss15EXzQLnxByVXB2B3Avm21BSOWD1Pj2v7DyYWPJ4bc0YTU2RwoR'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-22 20:01:31.248918 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:01:31.248929 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:01:31.248939 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.248953 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:01:31.248963 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:01:31.248980 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:01:31.248990 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:01:31.249000 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 
'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:01:31.249010 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:01:31.249026 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:01:31.249036 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:01:31.249058 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_abc8cf8e-645f-44ba-8ef9-2fedd7dd22d1', 'scsi-SQEMU_QEMU_HARDDISK_abc8cf8e-645f-44ba-8ef9-2fedd7dd22d1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_abc8cf8e-645f-44ba-8ef9-2fedd7dd22d1-part1', 'scsi-SQEMU_QEMU_HARDDISK_abc8cf8e-645f-44ba-8ef9-2fedd7dd22d1-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_abc8cf8e-645f-44ba-8ef9-2fedd7dd22d1-part14', 'scsi-SQEMU_QEMU_HARDDISK_abc8cf8e-645f-44ba-8ef9-2fedd7dd22d1-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_abc8cf8e-645f-44ba-8ef9-2fedd7dd22d1-part15', 'scsi-SQEMU_QEMU_HARDDISK_abc8cf8e-645f-44ba-8ef9-2fedd7dd22d1-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_abc8cf8e-645f-44ba-8ef9-2fedd7dd22d1-part16', 'scsi-SQEMU_QEMU_HARDDISK_abc8cf8e-645f-44ba-8ef9-2fedd7dd22d1-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 20:01:31.249070 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:01:31.249080 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:01:31.249096 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--3108d6cc--64da--58c4--8e22--262ec3caa421-osd--block--3108d6cc--64da--58c4--8e22--262ec3caa421'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-znpq1l-mumb-iL0o-0L13-f9Ix-i3q0-hh06ar', 'scsi-0QEMU_QEMU_HARDDISK_b25991b3-37fd-407a-b13b-d136271ca727', 'scsi-SQEMU_QEMU_HARDDISK_b25991b3-37fd-407a-b13b-d136271ca727'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 20:01:31.249107 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:01:31.249117 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--39fb6ae0--c3e6--59b9--8b54--9251bb7c5136-osd--block--39fb6ae0--c3e6--59b9--8b54--9251bb7c5136'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-hjCmUu-UDSS-ItuD-E86h-K8ZU-GPJi-vSzINW', 'scsi-0QEMU_QEMU_HARDDISK_71e43d47-057b-4609-853f-9ccf72c5a295', 'scsi-SQEMU_QEMU_HARDDISK_71e43d47-057b-4609-853f-9ccf72c5a295'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 20:01:31.249132 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:01:31.249172 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_61868cbd-84da-463e-9017-284301fda41a', 'scsi-SQEMU_QEMU_HARDDISK_61868cbd-84da-463e-9017-284301fda41a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 20:01:31.249184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:01:31.249194 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:01:31.249213 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-22-19-10-47-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 20:01:31.249229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b905425-1e63-4e6b-b5e5-61336c9bc17c', 'scsi-SQEMU_QEMU_HARDDISK_7b905425-1e63-4e6b-b5e5-61336c9bc17c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b905425-1e63-4e6b-b5e5-61336c9bc17c-part1', 'scsi-SQEMU_QEMU_HARDDISK_7b905425-1e63-4e6b-b5e5-61336c9bc17c-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b905425-1e63-4e6b-b5e5-61336c9bc17c-part14', 'scsi-SQEMU_QEMU_HARDDISK_7b905425-1e63-4e6b-b5e5-61336c9bc17c-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b905425-1e63-4e6b-b5e5-61336c9bc17c-part15', 'scsi-SQEMU_QEMU_HARDDISK_7b905425-1e63-4e6b-b5e5-61336c9bc17c-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b905425-1e63-4e6b-b5e5-61336c9bc17c-part16', 'scsi-SQEMU_QEMU_HARDDISK_7b905425-1e63-4e6b-b5e5-61336c9bc17c-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 20:01:31.249240 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.249255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-22-19-10-44-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 20:01:31.249266 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:01:31.249276 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  
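The per-item "skipping" records in the Collect existed devices task iterate over each host's ansible_facts['devices'] dictionary, which is why every dm-*, loop*, sd* and sr0 entry is listed individually per node. A minimal ad-hoc task along the following lines can summarise the same device facts when debugging a run; it is an illustrative sketch that only uses fields visible in the items above, not part of the ceph-facts role itself.

    - name: Show a per-host block device summary (illustrative sketch)
      ansible.builtin.debug:
        msg: "{{ item.key }}: {{ item.value.size }}, holders={{ item.value.holders | length }}, partitions={{ item.value.partitions | length }}"
      loop: "{{ ansible_facts['devices'] | dict2items }}"
      loop_control:
        label: "{{ item.key }}"   # keep the loop output to one short line per device

Run against the testbed nodes this would print one line per block device (sda, sdb, sdc, sdd, sr0, dm-*, loop*), matching the items dumped in the skip messages above.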
2025-06-22 20:01:31.249292 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:01:31.249302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:01:31.249312 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:01:31.249322 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:01:31.249332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:01:31.249347 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:01:31.249357 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.249377 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a9b05ce-765c-474d-953e-4ab57c149179', 'scsi-SQEMU_QEMU_HARDDISK_1a9b05ce-765c-474d-953e-4ab57c149179'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a9b05ce-765c-474d-953e-4ab57c149179-part1', 'scsi-SQEMU_QEMU_HARDDISK_1a9b05ce-765c-474d-953e-4ab57c149179-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a9b05ce-765c-474d-953e-4ab57c149179-part14', 'scsi-SQEMU_QEMU_HARDDISK_1a9b05ce-765c-474d-953e-4ab57c149179-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a9b05ce-765c-474d-953e-4ab57c149179-part15', 'scsi-SQEMU_QEMU_HARDDISK_1a9b05ce-765c-474d-953e-4ab57c149179-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a9b05ce-765c-474d-953e-4ab57c149179-part16', 'scsi-SQEMU_QEMU_HARDDISK_1a9b05ce-765c-474d-953e-4ab57c149179-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 20:01:31.249394 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-22-19-10-43-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 20:01:31.249405 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.249415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:01:31.249425 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  
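All hosts skip these items because the testbed defines its OSD devices explicitly; the condition reported as false for the next task in this log is osd_auto_discovery | default(False) | bool. Assuming that flag were enabled instead, a device list could be derived from the same facts roughly as sketched below; this is an illustration under that assumption, not the actual ceph-ansible task, and the filter criteria are inferred from the fields shown in the items above.

    - name: Build devices list from discovered block devices (illustrative sketch)
      ansible.builtin.set_fact:
        devices: "{{ devices | default([]) + ['/dev/' + item.key] }}"
      with_dict: "{{ ansible_facts['devices'] }}"
      when:
        - osd_auto_discovery | default(False) | bool   # the condition quoted in this log
        - item.value.removable == '0'                  # exclude removable media such as sr0
        - item.value.partitions | length == 0          # exclude the partitioned root disk (sda)
        - item.value.holders | length == 0             # exclude devices already claimed by LVM/Ceph

With osd_auto_discovery left at its default of false, as here, the whole loop is skipped and the explicitly configured device list is used instead.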
2025-06-22 20:01:31.249442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:01:31.249453 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:01:31.249472 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:01:31.249490 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:01:31.249514 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:01:31.249532 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:01:31.249553 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_807aa02f-11b0-4381-a55e-c1f77ace1900', 'scsi-SQEMU_QEMU_HARDDISK_807aa02f-11b0-4381-a55e-c1f77ace1900'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_807aa02f-11b0-4381-a55e-c1f77ace1900-part1', 'scsi-SQEMU_QEMU_HARDDISK_807aa02f-11b0-4381-a55e-c1f77ace1900-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_807aa02f-11b0-4381-a55e-c1f77ace1900-part14', 'scsi-SQEMU_QEMU_HARDDISK_807aa02f-11b0-4381-a55e-c1f77ace1900-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_807aa02f-11b0-4381-a55e-c1f77ace1900-part15', 'scsi-SQEMU_QEMU_HARDDISK_807aa02f-11b0-4381-a55e-c1f77ace1900-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_807aa02f-11b0-4381-a55e-c1f77ace1900-part16', 'scsi-SQEMU_QEMU_HARDDISK_807aa02f-11b0-4381-a55e-c1f77ace1900-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 20:01:31.249571 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-22-19-10-41-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 20:01:31.249581 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.249597 | orchestrator | 2025-06-22 20:01:31.249607 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-06-22 20:01:31.249617 | orchestrator | Sunday 22 June 2025 19:50:29 +0000 (0:00:01.917) 0:00:39.848 *********** 2025-06-22 20:01:31.249628 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ffee4eed--4396--59ea--b922--2a73e3bf4ca0-osd--block--ffee4eed--4396--59ea--b922--2a73e3bf4ca0', 'dm-uuid-LVM-q2pqJiTJaKBtTVURKf2CkZXsa09xcwZHqRoverYWgifQ0qT3WozkxbpGh0BMI5p0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:31.249639 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a67f9737--0c9f--5177--b2d5--f4c811291d8a-osd--block--a67f9737--0c9f--5177--b2d5--f4c811291d8a', 'dm-uuid-LVM-x3TzDx78V1LNDIMuCE2wpqhclwbaOD1boL3DxJZRH4sANkbSfFs0yaFLMzXYB0Md'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:31.249649 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:31.249660 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:31.249674 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:31.249690 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': 
'0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:31.249706 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:31.249717 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:31.249727 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:31.249737 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--420ac1c2--ff56--5c56--8dd6--abe068aa03ad-osd--block--420ac1c2--ff56--5c56--8dd6--abe068aa03ad', 'dm-uuid-LVM-Y0kAmfU0NiERO7ebir8pzkuzt2pW1JrMxsV0ytwr7zHo2RM0YYQz16v8e9RQDgtI'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:31.249752 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:31.249767 | orchestrator | skipping: [testbed-node-4] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--21b37dc5--48e7--5a6c--9835--121dab35d047-osd--block--21b37dc5--48e7--5a6c--9835--121dab35d047', 'dm-uuid-LVM-KGbfhTUxEBsRwgzYnyRGH4H2jMRLdkjPg0mCD0S4SqmIqU3H31pqPvAoyu6KWzoa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:31.249784 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2156dda8-7e6f-4624-a0c0-e6117c9c49b9', 'scsi-SQEMU_QEMU_HARDDISK_2156dda8-7e6f-4624-a0c0-e6117c9c49b9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2156dda8-7e6f-4624-a0c0-e6117c9c49b9-part1', 'scsi-SQEMU_QEMU_HARDDISK_2156dda8-7e6f-4624-a0c0-e6117c9c49b9-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2156dda8-7e6f-4624-a0c0-e6117c9c49b9-part14', 'scsi-SQEMU_QEMU_HARDDISK_2156dda8-7e6f-4624-a0c0-e6117c9c49b9-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2156dda8-7e6f-4624-a0c0-e6117c9c49b9-part15', 'scsi-SQEMU_QEMU_HARDDISK_2156dda8-7e6f-4624-a0c0-e6117c9c49b9-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2156dda8-7e6f-4624-a0c0-e6117c9c49b9-part16', 'scsi-SQEMU_QEMU_HARDDISK_2156dda8-7e6f-4624-a0c0-e6117c9c49b9-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:31.249795 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:31.249811 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--ffee4eed--4396--59ea--b922--2a73e3bf4ca0-osd--block--ffee4eed--4396--59ea--b922--2a73e3bf4ca0'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-IuxseD-Xfw3-r21F-YeXt-Y3UB-qXCE-FN37f3', 'scsi-0QEMU_QEMU_HARDDISK_78e15a4e-0b6b-4de0-bd2a-417fc55af8a3', 'scsi-SQEMU_QEMU_HARDDISK_78e15a4e-0b6b-4de0-bd2a-417fc55af8a3'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:31.249832 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:31.249843 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--a67f9737--0c9f--5177--b2d5--f4c811291d8a-osd--block--a67f9737--0c9f--5177--b2d5--f4c811291d8a'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-GsgXLL-fkdW-UDso-mQAJ-mxEf-DjtQ-wplTn3', 'scsi-0QEMU_QEMU_HARDDISK_0d04e2ba-3abe-44e6-a0ea-4a597e46ae81', 'scsi-SQEMU_QEMU_HARDDISK_0d04e2ba-3abe-44e6-a0ea-4a597e46ae81'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:31.249854 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3108d6cc--64da--58c4--8e22--262ec3caa421-osd--block--3108d6cc--64da--58c4--8e22--262ec3caa421', 'dm-uuid-LVM-e5DdA2Z5zV4nVqLZXYU1m9FdEuPJHovfpODsitfcXz282rKkjlJ6PtJholW3GnT0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:31.249864 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:31.249874 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--39fb6ae0--c3e6--59b9--8b54--9251bb7c5136-osd--block--39fb6ae0--c3e6--59b9--8b54--9251bb7c5136', 'dm-uuid-LVM-QvjYzXAhLLqss15EXzQLnxByVXB2B3Avm21BSOWD1Pj2v7DyYWPJ4bc0YTU2RwoR'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:31.249896 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_67ec265c-9b93-46b0-85f4-348a71cc884e', 'scsi-SQEMU_QEMU_HARDDISK_67ec265c-9b93-46b0-85f4-348a71cc884e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:31.249907 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:31.249917 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:31.249962 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-22-19-10-40-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:31.249974 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:31.249988 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:31.250274 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:31.250299 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:31.250309 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:31.250319 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:31.250327 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 
20:01:31.250336 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:31.250350 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:31.250377 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:31.250386 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:31.250395 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:31.250404 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': 
[], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:31.250413 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:31.250425 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:31.250439 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:31.250455 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b905425-1e63-4e6b-b5e5-61336c9bc17c', 'scsi-SQEMU_QEMU_HARDDISK_7b905425-1e63-4e6b-b5e5-61336c9bc17c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b905425-1e63-4e6b-b5e5-61336c9bc17c-part1', 'scsi-SQEMU_QEMU_HARDDISK_7b905425-1e63-4e6b-b5e5-61336c9bc17c-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b905425-1e63-4e6b-b5e5-61336c9bc17c-part14', 'scsi-SQEMU_QEMU_HARDDISK_7b905425-1e63-4e6b-b5e5-61336c9bc17c-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b905425-1e63-4e6b-b5e5-61336c9bc17c-part15', 'scsi-SQEMU_QEMU_HARDDISK_7b905425-1e63-4e6b-b5e5-61336c9bc17c-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_7b905425-1e63-4e6b-b5e5-61336c9bc17c-part16', 'scsi-SQEMU_QEMU_HARDDISK_7b905425-1e63-4e6b-b5e5-61336c9bc17c-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:31.250467 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-22-19-10-44-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:31.250494 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:31.250514 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 
'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:31.250527 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:31.250541 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_81f2e499-4268-4bd5-a5ff-46d49ba2fab9', 'scsi-SQEMU_QEMU_HARDDISK_81f2e499-4268-4bd5-a5ff-46d49ba2fab9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_81f2e499-4268-4bd5-a5ff-46d49ba2fab9-part1', 'scsi-SQEMU_QEMU_HARDDISK_81f2e499-4268-4bd5-a5ff-46d49ba2fab9-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_81f2e499-4268-4bd5-a5ff-46d49ba2fab9-part14', 'scsi-SQEMU_QEMU_HARDDISK_81f2e499-4268-4bd5-a5ff-46d49ba2fab9-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_81f2e499-4268-4bd5-a5ff-46d49ba2fab9-part15', 'scsi-SQEMU_QEMU_HARDDISK_81f2e499-4268-4bd5-a5ff-46d49ba2fab9-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_81f2e499-4268-4bd5-a5ff-46d49ba2fab9-part16', 'scsi-SQEMU_QEMU_HARDDISK_81f2e499-4268-4bd5-a5ff-46d49ba2fab9-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:31.250580 | orchestrator | skipping: [testbed-node-5] 
=> (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_abc8cf8e-645f-44ba-8ef9-2fedd7dd22d1', 'scsi-SQEMU_QEMU_HARDDISK_abc8cf8e-645f-44ba-8ef9-2fedd7dd22d1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_abc8cf8e-645f-44ba-8ef9-2fedd7dd22d1-part1', 'scsi-SQEMU_QEMU_HARDDISK_abc8cf8e-645f-44ba-8ef9-2fedd7dd22d1-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_abc8cf8e-645f-44ba-8ef9-2fedd7dd22d1-part14', 'scsi-SQEMU_QEMU_HARDDISK_abc8cf8e-645f-44ba-8ef9-2fedd7dd22d1-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_abc8cf8e-645f-44ba-8ef9-2fedd7dd22d1-part15', 'scsi-SQEMU_QEMU_HARDDISK_abc8cf8e-645f-44ba-8ef9-2fedd7dd22d1-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_abc8cf8e-645f-44ba-8ef9-2fedd7dd22d1-part16', 'scsi-SQEMU_QEMU_HARDDISK_abc8cf8e-645f-44ba-8ef9-2fedd7dd22d1-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:31.250596 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--420ac1c2--ff56--5c56--8dd6--abe068aa03ad-osd--block--420ac1c2--ff56--5c56--8dd6--abe068aa03ad'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-sbJf2j-kvL6-D9Rj-g0g6-1ewV-GQ1i-ToEso7', 'scsi-0QEMU_QEMU_HARDDISK_1702d6d9-f6d5-467e-9c44-3c93c3ac891d', 'scsi-SQEMU_QEMU_HARDDISK_1702d6d9-f6d5-467e-9c44-3c93c3ac891d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:31.250606 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--3108d6cc--64da--58c4--8e22--262ec3caa421-osd--block--3108d6cc--64da--58c4--8e22--262ec3caa421'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-znpq1l-mumb-iL0o-0L13-f9Ix-i3q0-hh06ar', 'scsi-0QEMU_QEMU_HARDDISK_b25991b3-37fd-407a-b13b-d136271ca727', 'scsi-SQEMU_QEMU_HARDDISK_b25991b3-37fd-407a-b13b-d136271ca727'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:31.250624 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--39fb6ae0--c3e6--59b9--8b54--9251bb7c5136-osd--block--39fb6ae0--c3e6--59b9--8b54--9251bb7c5136'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-hjCmUu-UDSS-ItuD-E86h-K8ZU-GPJi-vSzINW', 'scsi-0QEMU_QEMU_HARDDISK_71e43d47-057b-4609-853f-9ccf72c5a295', 'scsi-SQEMU_QEMU_HARDDISK_71e43d47-057b-4609-853f-9ccf72c5a295'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:31.250637 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--21b37dc5--48e7--5a6c--9835--121dab35d047-osd--block--21b37dc5--48e7--5a6c--9835--121dab35d047'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-npdezT-nudS-n4QM-0RVM-No3f-37Al-1TAEZN', 'scsi-0QEMU_QEMU_HARDDISK_a49b6e77-acd0-4f36-887b-4e4ec75cdfa4', 'scsi-SQEMU_QEMU_HARDDISK_a49b6e77-acd0-4f36-887b-4e4ec75cdfa4'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:31.250646 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_61868cbd-84da-463e-9017-284301fda41a', 'scsi-SQEMU_QEMU_HARDDISK_61868cbd-84da-463e-9017-284301fda41a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:31.250655 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bbdef6ad-891d-4656-ac9b-bc24d19b561e', 'scsi-SQEMU_QEMU_HARDDISK_bbdef6ad-891d-4656-ac9b-bc24d19b561e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:31.250663 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-22-19-10-47-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:31.250676 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.250692 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 
'uuids': ['2025-06-22-19-10-42-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:31.250701 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:31.250709 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:31.250718 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:31.250726 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:31.250739 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.250748 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:31.250756 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.250764 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.250775 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:31.250790 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:31.250801 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:31.250811 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a9b05ce-765c-474d-953e-4ab57c149179', 'scsi-SQEMU_QEMU_HARDDISK_1a9b05ce-765c-474d-953e-4ab57c149179'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a9b05ce-765c-474d-953e-4ab57c149179-part1', 'scsi-SQEMU_QEMU_HARDDISK_1a9b05ce-765c-474d-953e-4ab57c149179-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a9b05ce-765c-474d-953e-4ab57c149179-part14', 'scsi-SQEMU_QEMU_HARDDISK_1a9b05ce-765c-474d-953e-4ab57c149179-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a9b05ce-765c-474d-953e-4ab57c149179-part15', 'scsi-SQEMU_QEMU_HARDDISK_1a9b05ce-765c-474d-953e-4ab57c149179-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_1a9b05ce-765c-474d-953e-4ab57c149179-part16', 'scsi-SQEMU_QEMU_HARDDISK_1a9b05ce-765c-474d-953e-4ab57c149179-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:31.250830 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-22-19-10-43-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:31.250839 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.250854 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:31.250863 | 
orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:31.250873 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:31.250882 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:31.250900 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:31.250914 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:31.250928 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:31.250937 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:31.250947 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_807aa02f-11b0-4381-a55e-c1f77ace1900', 'scsi-SQEMU_QEMU_HARDDISK_807aa02f-11b0-4381-a55e-c1f77ace1900'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_807aa02f-11b0-4381-a55e-c1f77ace1900-part1', 'scsi-SQEMU_QEMU_HARDDISK_807aa02f-11b0-4381-a55e-c1f77ace1900-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_807aa02f-11b0-4381-a55e-c1f77ace1900-part14', 'scsi-SQEMU_QEMU_HARDDISK_807aa02f-11b0-4381-a55e-c1f77ace1900-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_807aa02f-11b0-4381-a55e-c1f77ace1900-part15', 'scsi-SQEMU_QEMU_HARDDISK_807aa02f-11b0-4381-a55e-c1f77ace1900-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_807aa02f-11b0-4381-a55e-c1f77ace1900-part16', 'scsi-SQEMU_QEMU_HARDDISK_807aa02f-11b0-4381-a55e-c1f77ace1900-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:31.250966 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': 
{'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-22-19-10-41-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:01:31.250975 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.250984 | orchestrator | 2025-06-22 20:01:31.250993 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2025-06-22 20:01:31.251002 | orchestrator | Sunday 22 June 2025 19:50:31 +0000 (0:00:01.907) 0:00:41.755 *********** 2025-06-22 20:01:31.251015 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:31.251025 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:31.251034 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:31.251042 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:01:31.251051 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:01:31.251059 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:01:31.251068 | orchestrator | 2025-06-22 20:01:31.251077 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2025-06-22 20:01:31.251086 | orchestrator | Sunday 22 June 2025 19:50:32 +0000 (0:00:01.410) 0:00:43.166 *********** 2025-06-22 20:01:31.251094 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:31.251103 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:31.251111 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:31.251120 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:01:31.251129 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:01:31.251156 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:01:31.251166 | orchestrator | 2025-06-22 20:01:31.251174 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-06-22 20:01:31.251182 | orchestrator | Sunday 22 June 2025 19:50:33 +0000 (0:00:00.814) 0:00:43.981 *********** 2025-06-22 20:01:31.251190 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.251198 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.251206 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.251214 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.251221 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.251229 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.251237 | orchestrator | 2025-06-22 20:01:31.251245 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-06-22 20:01:31.251258 | orchestrator | Sunday 22 June 2025 19:50:34 +0000 (0:00:00.833) 0:00:44.814 *********** 2025-06-22 20:01:31.251266 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.251274 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.251282 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.251289 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.251297 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.251305 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.251312 | orchestrator | 2025-06-22 20:01:31.251320 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] 
*************************** 2025-06-22 20:01:31.251328 | orchestrator | Sunday 22 June 2025 19:50:34 +0000 (0:00:00.676) 0:00:45.491 *********** 2025-06-22 20:01:31.251336 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.251344 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.251352 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.251359 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.251371 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.251387 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.251406 | orchestrator | 2025-06-22 20:01:31.251418 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-06-22 20:01:31.251430 | orchestrator | Sunday 22 June 2025 19:50:35 +0000 (0:00:00.894) 0:00:46.385 *********** 2025-06-22 20:01:31.251442 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.251454 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.251466 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.251478 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.251490 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.251502 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.251514 | orchestrator | 2025-06-22 20:01:31.251527 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2025-06-22 20:01:31.251541 | orchestrator | Sunday 22 June 2025 19:50:36 +0000 (0:00:00.730) 0:00:47.116 *********** 2025-06-22 20:01:31.251554 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-06-22 20:01:31.251569 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-06-22 20:01:31.251578 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-06-22 20:01:31.251586 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-06-22 20:01:31.251594 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-06-22 20:01:31.251602 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-06-22 20:01:31.251610 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-06-22 20:01:31.251618 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-06-22 20:01:31.251626 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-06-22 20:01:31.251634 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-06-22 20:01:31.251641 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2025-06-22 20:01:31.251649 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2025-06-22 20:01:31.251657 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-06-22 20:01:31.251665 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2025-06-22 20:01:31.251673 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2025-06-22 20:01:31.251681 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-06-22 20:01:31.251688 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2025-06-22 20:01:31.251696 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2025-06-22 20:01:31.251704 | orchestrator | 2025-06-22 20:01:31.251717 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2025-06-22 20:01:31.251726 | orchestrator | Sunday 22 June 2025 19:50:38 +0000 (0:00:02.385) 0:00:49.502 *********** 2025-06-22 20:01:31.251733 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-0)  2025-06-22 20:01:31.251742 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-06-22 20:01:31.251758 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-06-22 20:01:31.251766 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.251774 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-06-22 20:01:31.251782 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-06-22 20:01:31.251790 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-06-22 20:01:31.251798 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.251805 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-06-22 20:01:31.251813 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-06-22 20:01:31.251828 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-06-22 20:01:31.251836 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.251844 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-06-22 20:01:31.251852 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-06-22 20:01:31.251860 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-06-22 20:01:31.251868 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.251875 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-06-22 20:01:31.251883 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-06-22 20:01:31.251891 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-06-22 20:01:31.251899 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.251907 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-06-22 20:01:31.251914 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-06-22 20:01:31.251922 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-06-22 20:01:31.251930 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.251938 | orchestrator | 2025-06-22 20:01:31.251946 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2025-06-22 20:01:31.251954 | orchestrator | Sunday 22 June 2025 19:50:39 +0000 (0:00:00.577) 0:00:50.079 *********** 2025-06-22 20:01:31.251961 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.251969 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.251977 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.251985 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 20:01:31.251993 | orchestrator | 2025-06-22 20:01:31.252001 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-06-22 20:01:31.252009 | orchestrator | Sunday 22 June 2025 19:50:40 +0000 (0:00:00.878) 0:00:50.957 *********** 2025-06-22 20:01:31.252017 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.252025 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.252033 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.252041 | orchestrator | 2025-06-22 20:01:31.252049 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-06-22 20:01:31.252056 | orchestrator | Sunday 22 June 2025 19:50:40 
+0000 (0:00:00.307) 0:00:51.265 *********** 2025-06-22 20:01:31.252064 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.252072 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.252080 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.252087 | orchestrator | 2025-06-22 20:01:31.252095 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-06-22 20:01:31.252103 | orchestrator | Sunday 22 June 2025 19:50:41 +0000 (0:00:00.414) 0:00:51.679 *********** 2025-06-22 20:01:31.252111 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.252119 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.252126 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.252134 | orchestrator | 2025-06-22 20:01:31.252157 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-06-22 20:01:31.252165 | orchestrator | Sunday 22 June 2025 19:50:41 +0000 (0:00:00.294) 0:00:51.974 *********** 2025-06-22 20:01:31.252178 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:31.252186 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:31.252194 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:31.252202 | orchestrator | 2025-06-22 20:01:31.252210 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-06-22 20:01:31.252217 | orchestrator | Sunday 22 June 2025 19:50:41 +0000 (0:00:00.373) 0:00:52.347 *********** 2025-06-22 20:01:31.252225 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-22 20:01:31.252233 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-22 20:01:31.252241 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-22 20:01:31.252249 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.252256 | orchestrator | 2025-06-22 20:01:31.252264 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-06-22 20:01:31.252272 | orchestrator | Sunday 22 June 2025 19:50:42 +0000 (0:00:00.379) 0:00:52.726 *********** 2025-06-22 20:01:31.252280 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-22 20:01:31.252288 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-22 20:01:31.252296 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-22 20:01:31.252303 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.252311 | orchestrator | 2025-06-22 20:01:31.252319 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-06-22 20:01:31.252327 | orchestrator | Sunday 22 June 2025 19:50:42 +0000 (0:00:00.603) 0:00:53.329 *********** 2025-06-22 20:01:31.252335 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-22 20:01:31.252346 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-22 20:01:31.252354 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-22 20:01:31.252362 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.252370 | orchestrator | 2025-06-22 20:01:31.252378 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-06-22 20:01:31.252386 | orchestrator | Sunday 22 June 2025 19:50:43 +0000 (0:00:00.517) 0:00:53.846 *********** 2025-06-22 20:01:31.252393 | orchestrator | ok: [testbed-node-3] 2025-06-22 
20:01:31.252401 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:31.252409 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:31.252417 | orchestrator | 2025-06-22 20:01:31.252425 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-06-22 20:01:31.252433 | orchestrator | Sunday 22 June 2025 19:50:43 +0000 (0:00:00.446) 0:00:54.293 *********** 2025-06-22 20:01:31.252440 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-06-22 20:01:31.252449 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-06-22 20:01:31.252457 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-06-22 20:01:31.252464 | orchestrator | 2025-06-22 20:01:31.252476 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2025-06-22 20:01:31.252484 | orchestrator | Sunday 22 June 2025 19:50:44 +0000 (0:00:00.634) 0:00:54.927 *********** 2025-06-22 20:01:31.252492 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-06-22 20:01:31.252500 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-22 20:01:31.252508 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-22 20:01:31.252516 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-06-22 20:01:31.252524 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-06-22 20:01:31.252532 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-06-22 20:01:31.252540 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-06-22 20:01:31.252548 | orchestrator | 2025-06-22 20:01:31.252556 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2025-06-22 20:01:31.252569 | orchestrator | Sunday 22 June 2025 19:50:45 +0000 (0:00:00.855) 0:00:55.782 *********** 2025-06-22 20:01:31.252577 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-06-22 20:01:31.252584 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-22 20:01:31.252592 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-22 20:01:31.252600 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-06-22 20:01:31.252608 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-06-22 20:01:31.252616 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-06-22 20:01:31.252624 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-06-22 20:01:31.252632 | orchestrator | 2025-06-22 20:01:31.252640 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-06-22 20:01:31.252648 | orchestrator | Sunday 22 June 2025 19:50:47 +0000 (0:00:02.161) 0:00:57.944 *********** 2025-06-22 20:01:31.252656 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:01:31.252664 | orchestrator | 2025-06-22 20:01:31.252672 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-06-22 
20:01:31.252679 | orchestrator | Sunday 22 June 2025 19:50:48 +0000 (0:00:01.323) 0:00:59.268 *********** 2025-06-22 20:01:31.252687 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:01:31.252695 | orchestrator | 2025-06-22 20:01:31.252703 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-06-22 20:01:31.252711 | orchestrator | Sunday 22 June 2025 19:50:49 +0000 (0:00:01.354) 0:01:00.622 *********** 2025-06-22 20:01:31.252719 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.252727 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.252735 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.252742 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:01:31.252750 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:01:31.252758 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:01:31.252766 | orchestrator | 2025-06-22 20:01:31.252774 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-06-22 20:01:31.252782 | orchestrator | Sunday 22 June 2025 19:50:51 +0000 (0:00:01.627) 0:01:02.249 *********** 2025-06-22 20:01:31.252789 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.252797 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.252805 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:31.252813 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.252820 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:31.252828 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:31.252836 | orchestrator | 2025-06-22 20:01:31.252844 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-06-22 20:01:31.252852 | orchestrator | Sunday 22 June 2025 19:50:52 +0000 (0:00:01.159) 0:01:03.409 *********** 2025-06-22 20:01:31.252859 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:31.252867 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:31.252875 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.252883 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.252891 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:31.252898 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.252906 | orchestrator | 2025-06-22 20:01:31.252914 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-06-22 20:01:31.252925 | orchestrator | Sunday 22 June 2025 19:50:54 +0000 (0:00:01.317) 0:01:04.727 *********** 2025-06-22 20:01:31.252933 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:31.252941 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.252954 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:31.252962 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.252969 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.252977 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:31.252985 | orchestrator | 2025-06-22 20:01:31.252993 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-06-22 20:01:31.253001 | orchestrator | Sunday 22 June 2025 19:50:55 +0000 (0:00:01.009) 0:01:05.736 *********** 2025-06-22 20:01:31.253009 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.253017 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.253024 | 
orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.253032 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:01:31.253040 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:01:31.253048 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:01:31.253056 | orchestrator | 2025-06-22 20:01:31.253064 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-06-22 20:01:31.253076 | orchestrator | Sunday 22 June 2025 19:50:57 +0000 (0:00:02.153) 0:01:07.889 *********** 2025-06-22 20:01:31.253084 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.253092 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.253100 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.253108 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.253116 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.253123 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.253131 | orchestrator | 2025-06-22 20:01:31.253179 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-06-22 20:01:31.253188 | orchestrator | Sunday 22 June 2025 19:50:57 +0000 (0:00:00.678) 0:01:08.568 *********** 2025-06-22 20:01:31.253196 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.253204 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.253212 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.253220 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.253228 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.253235 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.253243 | orchestrator | 2025-06-22 20:01:31.253251 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-06-22 20:01:31.253259 | orchestrator | Sunday 22 June 2025 19:50:58 +0000 (0:00:00.742) 0:01:09.310 *********** 2025-06-22 20:01:31.253267 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:31.253275 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:31.253283 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:31.253291 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:01:31.253298 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:01:31.253306 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:01:31.253314 | orchestrator | 2025-06-22 20:01:31.253322 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-06-22 20:01:31.253330 | orchestrator | Sunday 22 June 2025 19:50:59 +0000 (0:00:01.015) 0:01:10.326 *********** 2025-06-22 20:01:31.253338 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:31.253346 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:31.253354 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:31.253361 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:01:31.253369 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:01:31.253377 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:01:31.253385 | orchestrator | 2025-06-22 20:01:31.253393 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-06-22 20:01:31.253401 | orchestrator | Sunday 22 June 2025 19:51:01 +0000 (0:00:01.438) 0:01:11.764 *********** 2025-06-22 20:01:31.253474 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.253483 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.253490 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.253498 | orchestrator 
| skipping: [testbed-node-0] 2025-06-22 20:01:31.253506 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.253514 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.253529 | orchestrator | 2025-06-22 20:01:31.253537 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-06-22 20:01:31.253545 | orchestrator | Sunday 22 June 2025 19:51:01 +0000 (0:00:00.629) 0:01:12.393 *********** 2025-06-22 20:01:31.253553 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.253561 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.253568 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.253576 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:01:31.253584 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:01:31.253592 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:01:31.253600 | orchestrator | 2025-06-22 20:01:31.253608 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-06-22 20:01:31.253616 | orchestrator | Sunday 22 June 2025 19:51:02 +0000 (0:00:01.069) 0:01:13.463 *********** 2025-06-22 20:01:31.253624 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:31.253632 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:31.253639 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:31.253647 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.253655 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.253663 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.253671 | orchestrator | 2025-06-22 20:01:31.253679 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-06-22 20:01:31.253686 | orchestrator | Sunday 22 June 2025 19:51:03 +0000 (0:00:00.921) 0:01:14.384 *********** 2025-06-22 20:01:31.253693 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:31.253700 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:31.253707 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:31.253713 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.253720 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.253727 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.253733 | orchestrator | 2025-06-22 20:01:31.253740 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-06-22 20:01:31.253747 | orchestrator | Sunday 22 June 2025 19:51:04 +0000 (0:00:01.158) 0:01:15.542 *********** 2025-06-22 20:01:31.253753 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:31.253760 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:31.253766 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:31.253773 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.253780 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.253786 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.253793 | orchestrator | 2025-06-22 20:01:31.253804 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-06-22 20:01:31.253811 | orchestrator | Sunday 22 June 2025 19:51:05 +0000 (0:00:00.909) 0:01:16.452 *********** 2025-06-22 20:01:31.253818 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.253824 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.253831 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.253837 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.253844 | 
orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.253851 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.253857 | orchestrator | 2025-06-22 20:01:31.253864 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-06-22 20:01:31.253871 | orchestrator | Sunday 22 June 2025 19:51:06 +0000 (0:00:00.704) 0:01:17.156 *********** 2025-06-22 20:01:31.253878 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.253884 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.253891 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.253897 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.253904 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.253911 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.253917 | orchestrator | 2025-06-22 20:01:31.253929 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-06-22 20:01:31.253936 | orchestrator | Sunday 22 June 2025 19:51:07 +0000 (0:00:00.580) 0:01:17.737 *********** 2025-06-22 20:01:31.253947 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.253953 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.253960 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.253967 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:01:31.253974 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:01:31.253980 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:01:31.253987 | orchestrator | 2025-06-22 20:01:31.253994 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-06-22 20:01:31.254000 | orchestrator | Sunday 22 June 2025 19:51:07 +0000 (0:00:00.644) 0:01:18.381 *********** 2025-06-22 20:01:31.254007 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:31.254047 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:31.254054 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:31.254061 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:01:31.254067 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:01:31.254074 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:01:31.254081 | orchestrator | 2025-06-22 20:01:31.254088 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-06-22 20:01:31.254094 | orchestrator | Sunday 22 June 2025 19:51:08 +0000 (0:00:00.541) 0:01:18.923 *********** 2025-06-22 20:01:31.254101 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:31.254108 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:31.254114 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:31.254121 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:01:31.254127 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:01:31.254134 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:01:31.254153 | orchestrator | 2025-06-22 20:01:31.254160 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2025-06-22 20:01:31.254166 | orchestrator | Sunday 22 June 2025 19:51:09 +0000 (0:00:01.144) 0:01:20.067 *********** 2025-06-22 20:01:31.254173 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:01:31.254180 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:01:31.254187 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:01:31.254193 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:01:31.254200 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:01:31.254207 | orchestrator | 
changed: [testbed-node-2] 2025-06-22 20:01:31.254214 | orchestrator | 2025-06-22 20:01:31.254220 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2025-06-22 20:01:31.254227 | orchestrator | Sunday 22 June 2025 19:51:11 +0000 (0:00:01.704) 0:01:21.772 *********** 2025-06-22 20:01:31.254234 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:01:31.254240 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:01:31.254247 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:01:31.254253 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:01:31.254260 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:01:31.254267 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:01:31.254273 | orchestrator | 2025-06-22 20:01:31.254280 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] *********************** 2025-06-22 20:01:31.254287 | orchestrator | Sunday 22 June 2025 19:51:13 +0000 (0:00:02.209) 0:01:23.981 *********** 2025-06-22 20:01:31.254294 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:01:31.254301 | orchestrator | 2025-06-22 20:01:31.254307 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************ 2025-06-22 20:01:31.254314 | orchestrator | Sunday 22 June 2025 19:51:14 +0000 (0:00:01.201) 0:01:25.183 *********** 2025-06-22 20:01:31.254321 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.254327 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.254334 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.254340 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.254347 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.254354 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.254360 | orchestrator | 2025-06-22 20:01:31.254367 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] **************** 2025-06-22 20:01:31.254385 | orchestrator | Sunday 22 June 2025 19:51:15 +0000 (0:00:00.773) 0:01:25.957 *********** 2025-06-22 20:01:31.254392 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.254399 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.254405 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.254412 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.254418 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.254425 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.254432 | orchestrator | 2025-06-22 20:01:31.254439 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] ************************** 2025-06-22 20:01:31.254445 | orchestrator | Sunday 22 June 2025 19:51:15 +0000 (0:00:00.480) 0:01:26.437 *********** 2025-06-22 20:01:31.254452 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-06-22 20:01:31.254459 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-06-22 20:01:31.254469 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-06-22 20:01:31.254476 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-06-22 20:01:31.254483 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-06-22 20:01:31.254490 | 
orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-06-22 20:01:31.254496 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules) 2025-06-22 20:01:31.254503 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-06-22 20:01:31.254510 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-06-22 20:01:31.254517 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-06-22 20:01:31.254524 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-06-22 20:01:31.254545 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules) 2025-06-22 20:01:31.254552 | orchestrator | 2025-06-22 20:01:31.254559 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ******************** 2025-06-22 20:01:31.254565 | orchestrator | Sunday 22 June 2025 19:51:17 +0000 (0:00:01.465) 0:01:27.902 *********** 2025-06-22 20:01:31.254572 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:01:31.254579 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:01:31.254585 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:01:31.254592 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:01:31.254599 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:01:31.254605 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:01:31.254612 | orchestrator | 2025-06-22 20:01:31.254619 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************ 2025-06-22 20:01:31.254625 | orchestrator | Sunday 22 June 2025 19:51:18 +0000 (0:00:00.907) 0:01:28.809 *********** 2025-06-22 20:01:31.254632 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.254638 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.254645 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.254652 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.254658 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.254888 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.254895 | orchestrator | 2025-06-22 20:01:31.254902 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ******************** 2025-06-22 20:01:31.254909 | orchestrator | Sunday 22 June 2025 19:51:18 +0000 (0:00:00.693) 0:01:29.503 *********** 2025-06-22 20:01:31.254916 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.254922 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.254929 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.254935 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.254942 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.254957 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.254963 | orchestrator | 2025-06-22 20:01:31.254970 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2025-06-22 20:01:31.254977 | orchestrator | Sunday 22 June 2025 19:51:19 +0000 (0:00:00.513) 0:01:30.017 *********** 2025-06-22 20:01:31.254983 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.254990 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.254996 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.255003 | orchestrator | skipping: [testbed-node-0] 
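The "Remove ceph udev rules" loop logged just above simply ensures the two listed rule files are absent on every node; all hosts report "ok", so nothing actually had to be removed in this run. A minimal Ansible sketch of that behaviour, using only the paths visible in the log (illustrative only, not the actual ceph-container-common role source), would be:

- name: Remove ceph udev rules  # sketch of the behaviour seen in the log, not the role code
  ansible.builtin.file:
    path: "{{ item }}"
    state: absent
  loop:
    - /usr/lib/udev/rules.d/95-ceph-osd.rules
    - /usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules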
2025-06-22 20:01:31.255009 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.255016 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.255023 | orchestrator | 2025-06-22 20:01:31.255029 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2025-06-22 20:01:31.255036 | orchestrator | Sunday 22 June 2025 19:51:20 +0000 (0:00:00.747) 0:01:30.765 *********** 2025-06-22 20:01:31.255043 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:01:31.255050 | orchestrator | 2025-06-22 20:01:31.255057 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2025-06-22 20:01:31.255063 | orchestrator | Sunday 22 June 2025 19:51:21 +0000 (0:00:01.083) 0:01:31.848 *********** 2025-06-22 20:01:31.255070 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:31.255077 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:31.255083 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:01:31.255090 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:01:31.255097 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:01:31.255103 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:31.255110 | orchestrator | 2025-06-22 20:01:31.255117 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2025-06-22 20:01:31.255123 | orchestrator | Sunday 22 June 2025 19:52:34 +0000 (0:01:13.078) 0:02:44.927 *********** 2025-06-22 20:01:31.255130 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-06-22 20:01:31.255137 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2025-06-22 20:01:31.255184 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2025-06-22 20:01:31.255191 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.255198 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-06-22 20:01:31.255205 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2025-06-22 20:01:31.255212 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2025-06-22 20:01:31.255218 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.255225 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-06-22 20:01:31.255232 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2025-06-22 20:01:31.255238 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2025-06-22 20:01:31.255250 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.255257 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-06-22 20:01:31.255264 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2025-06-22 20:01:31.255270 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  2025-06-22 20:01:31.255277 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.255284 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-06-22 20:01:31.255291 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2025-06-22 20:01:31.255297 | 
orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2025-06-22 20:01:31.255304 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.255316 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2025-06-22 20:01:31.255337 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2025-06-22 20:01:31.255345 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2025-06-22 20:01:31.255352 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.255359 | orchestrator | 2025-06-22 20:01:31.255365 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2025-06-22 20:01:31.255372 | orchestrator | Sunday 22 June 2025 19:52:35 +0000 (0:00:00.938) 0:02:45.866 *********** 2025-06-22 20:01:31.255379 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.255386 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.255392 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.255399 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.255406 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.255412 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.255419 | orchestrator | 2025-06-22 20:01:31.255426 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2025-06-22 20:01:31.255433 | orchestrator | Sunday 22 June 2025 19:52:35 +0000 (0:00:00.674) 0:02:46.541 *********** 2025-06-22 20:01:31.255441 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.255448 | orchestrator | 2025-06-22 20:01:31.255456 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2025-06-22 20:01:31.255464 | orchestrator | Sunday 22 June 2025 19:52:36 +0000 (0:00:00.227) 0:02:46.768 *********** 2025-06-22 20:01:31.255472 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.255479 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.255486 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.255494 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.255501 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.255509 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.255516 | orchestrator | 2025-06-22 20:01:31.255524 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2025-06-22 20:01:31.255531 | orchestrator | Sunday 22 June 2025 19:52:37 +0000 (0:00:00.893) 0:02:47.662 *********** 2025-06-22 20:01:31.255539 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.255547 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.255554 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.255562 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.255569 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.255577 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.255583 | orchestrator | 2025-06-22 20:01:31.255590 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2025-06-22 20:01:31.255597 | orchestrator | Sunday 22 June 2025 19:52:37 +0000 (0:00:00.777) 0:02:48.439 *********** 2025-06-22 20:01:31.255604 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.255610 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.255617 | orchestrator | skipping: 
[testbed-node-5] 2025-06-22 20:01:31.255623 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.255630 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.255637 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.255643 | orchestrator | 2025-06-22 20:01:31.255650 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2025-06-22 20:01:31.255657 | orchestrator | Sunday 22 June 2025 19:52:38 +0000 (0:00:01.092) 0:02:49.532 *********** 2025-06-22 20:01:31.255663 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:31.255670 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:31.255677 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:31.255683 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:01:31.255690 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:01:31.255697 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:01:31.255704 | orchestrator | 2025-06-22 20:01:31.255710 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2025-06-22 20:01:31.255721 | orchestrator | Sunday 22 June 2025 19:52:41 +0000 (0:00:02.523) 0:02:52.055 *********** 2025-06-22 20:01:31.255727 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:31.255734 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:31.255740 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:31.255746 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:01:31.255752 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:01:31.255758 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:01:31.255764 | orchestrator | 2025-06-22 20:01:31.255770 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2025-06-22 20:01:31.255777 | orchestrator | Sunday 22 June 2025 19:52:42 +0000 (0:00:00.972) 0:02:53.028 *********** 2025-06-22 20:01:31.255783 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:01:31.255790 | orchestrator | 2025-06-22 20:01:31.255796 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2025-06-22 20:01:31.255802 | orchestrator | Sunday 22 June 2025 19:52:43 +0000 (0:00:01.590) 0:02:54.618 *********** 2025-06-22 20:01:31.255809 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.255815 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.255821 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.255827 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.255833 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.255840 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.255931 | orchestrator | 2025-06-22 20:01:31.255944 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2025-06-22 20:01:31.255950 | orchestrator | Sunday 22 June 2025 19:52:44 +0000 (0:00:00.908) 0:02:55.527 *********** 2025-06-22 20:01:31.255956 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.255963 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.255969 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.255975 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.255981 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.255988 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.255994 | orchestrator | 
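The run of "Set_fact ceph_release …" tasks logged here (jewel, kraken, then luminous through reef below) acts as a lookup from the version captured by "Get ceph version" to a release name: only the entry whose condition matches sets the fact, which is why every release except reef is skipped in this run. A minimal sketch of that idea, assuming the reported version string is already held in a ceph_version fact (illustrative only, not the actual ceph-container-common task):

- name: Set_fact ceph_release reef  # sketch: map major version 18 onto the "reef" release name
  ansible.builtin.set_fact:
    ceph_release: reef
  when: ceph_version.split('.')[0] | int == 18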
2025-06-22 20:01:31.256000 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2025-06-22 20:01:31.256007 | orchestrator | Sunday 22 June 2025 19:52:45 +0000 (0:00:01.041) 0:02:56.568 *********** 2025-06-22 20:01:31.256013 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.256019 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.256025 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.256031 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.256037 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.256055 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.256062 | orchestrator | 2025-06-22 20:01:31.256068 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2025-06-22 20:01:31.256075 | orchestrator | Sunday 22 June 2025 19:52:46 +0000 (0:00:00.736) 0:02:57.305 *********** 2025-06-22 20:01:31.256081 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.256087 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.256093 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.256099 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.256106 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.256112 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.256118 | orchestrator | 2025-06-22 20:01:31.256124 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2025-06-22 20:01:31.256130 | orchestrator | Sunday 22 June 2025 19:52:47 +0000 (0:00:01.128) 0:02:58.434 *********** 2025-06-22 20:01:31.256136 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.256155 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.256162 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.256168 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.256174 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.256186 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.256192 | orchestrator | 2025-06-22 20:01:31.256198 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2025-06-22 20:01:31.256204 | orchestrator | Sunday 22 June 2025 19:52:48 +0000 (0:00:00.732) 0:02:59.166 *********** 2025-06-22 20:01:31.256210 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.256217 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.256223 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.256229 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.256235 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.256241 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.256247 | orchestrator | 2025-06-22 20:01:31.256253 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2025-06-22 20:01:31.256260 | orchestrator | Sunday 22 June 2025 19:52:49 +0000 (0:00:01.058) 0:03:00.225 *********** 2025-06-22 20:01:31.256266 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.256272 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.256278 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.256284 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.256290 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.256297 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.256303 | 
orchestrator | 2025-06-22 20:01:31.256309 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2025-06-22 20:01:31.256315 | orchestrator | Sunday 22 June 2025 19:52:50 +0000 (0:00:00.796) 0:03:01.021 *********** 2025-06-22 20:01:31.256322 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.256328 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.256334 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.256340 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.256346 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.256352 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.256359 | orchestrator | 2025-06-22 20:01:31.256365 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2025-06-22 20:01:31.256371 | orchestrator | Sunday 22 June 2025 19:52:51 +0000 (0:00:01.073) 0:03:02.095 *********** 2025-06-22 20:01:31.256377 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:31.256384 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:31.256390 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:31.256396 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:01:31.256402 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:01:31.256408 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:01:31.256415 | orchestrator | 2025-06-22 20:01:31.256421 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2025-06-22 20:01:31.256427 | orchestrator | Sunday 22 June 2025 19:52:52 +0000 (0:00:01.397) 0:03:03.493 *********** 2025-06-22 20:01:31.256433 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:01:31.256440 | orchestrator | 2025-06-22 20:01:31.256446 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2025-06-22 20:01:31.256452 | orchestrator | Sunday 22 June 2025 19:52:54 +0000 (0:00:01.442) 0:03:04.935 *********** 2025-06-22 20:01:31.256458 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2025-06-22 20:01:31.256465 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2025-06-22 20:01:31.256471 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2025-06-22 20:01:31.256477 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2025-06-22 20:01:31.256483 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2025-06-22 20:01:31.256490 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2025-06-22 20:01:31.256496 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2025-06-22 20:01:31.256502 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2025-06-22 20:01:31.256512 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2025-06-22 20:01:31.256523 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2025-06-22 20:01:31.256529 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2025-06-22 20:01:31.256535 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2025-06-22 20:01:31.256541 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2025-06-22 20:01:31.256548 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2025-06-22 20:01:31.256554 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 
2025-06-22 20:01:31.256560 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2025-06-22 20:01:31.256566 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2025-06-22 20:01:31.256573 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2025-06-22 20:01:31.256579 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2025-06-22 20:01:31.256585 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2025-06-22 20:01:31.256602 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2025-06-22 20:01:31.256609 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2025-06-22 20:01:31.256615 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2025-06-22 20:01:31.256621 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2025-06-22 20:01:31.256627 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2025-06-22 20:01:31.256633 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2025-06-22 20:01:31.256640 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2025-06-22 20:01:31.256646 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2025-06-22 20:01:31.256652 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2025-06-22 20:01:31.256658 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2025-06-22 20:01:31.256664 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2025-06-22 20:01:31.256670 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2025-06-22 20:01:31.256676 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2025-06-22 20:01:31.256683 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2025-06-22 20:01:31.256689 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2025-06-22 20:01:31.256695 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2025-06-22 20:01:31.256701 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2025-06-22 20:01:31.256707 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2025-06-22 20:01:31.256714 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2025-06-22 20:01:31.256720 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2025-06-22 20:01:31.256726 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2025-06-22 20:01:31.256733 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2025-06-22 20:01:31.256739 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2025-06-22 20:01:31.256745 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2025-06-22 20:01:31.256751 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2025-06-22 20:01:31.256757 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2025-06-22 20:01:31.256763 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2025-06-22 20:01:31.256770 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2025-06-22 20:01:31.256776 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2025-06-22 20:01:31.256782 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2025-06-22 20:01:31.256788 | orchestrator | changed: 
[testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2025-06-22 20:01:31.256794 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2025-06-22 20:01:31.256804 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2025-06-22 20:01:31.256811 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2025-06-22 20:01:31.256817 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2025-06-22 20:01:31.256823 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2025-06-22 20:01:31.256829 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2025-06-22 20:01:31.256835 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2025-06-22 20:01:31.256842 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2025-06-22 20:01:31.256848 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2025-06-22 20:01:31.256854 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2025-06-22 20:01:31.256860 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2025-06-22 20:01:31.256866 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2025-06-22 20:01:31.256872 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2025-06-22 20:01:31.256878 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2025-06-22 20:01:31.256885 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2025-06-22 20:01:31.256891 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2025-06-22 20:01:31.256897 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2025-06-22 20:01:31.256906 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2025-06-22 20:01:31.256913 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2025-06-22 20:01:31.256919 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2025-06-22 20:01:31.256925 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2025-06-22 20:01:31.256931 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-06-22 20:01:31.256937 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2025-06-22 20:01:31.256944 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-06-22 20:01:31.256950 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2025-06-22 20:01:31.256956 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2025-06-22 20:01:31.256962 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2025-06-22 20:01:31.256979 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2025-06-22 20:01:31.256985 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2025-06-22 20:01:31.256992 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2025-06-22 20:01:31.256998 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-06-22 20:01:31.257004 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-06-22 20:01:31.257010 | orchestrator | 
changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-06-22 20:01:31.257017 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2025-06-22 20:01:31.257023 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2025-06-22 20:01:31.257029 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2025-06-22 20:01:31.257035 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2025-06-22 20:01:31.257041 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2025-06-22 20:01:31.257048 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2025-06-22 20:01:31.257054 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-06-22 20:01:31.257060 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2025-06-22 20:01:31.257070 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2025-06-22 20:01:31.257076 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2025-06-22 20:01:31.257082 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2025-06-22 20:01:31.257088 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2025-06-22 20:01:31.257094 | orchestrator | 2025-06-22 20:01:31.257101 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2025-06-22 20:01:31.257107 | orchestrator | Sunday 22 June 2025 19:53:02 +0000 (0:00:07.919) 0:03:12.854 *********** 2025-06-22 20:01:31.257113 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.257120 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.257126 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.257133 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 20:01:31.257151 | orchestrator | 2025-06-22 20:01:31.257157 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2025-06-22 20:01:31.257164 | orchestrator | Sunday 22 June 2025 19:53:03 +0000 (0:00:01.120) 0:03:13.975 *********** 2025-06-22 20:01:31.257170 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-06-22 20:01:31.257177 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-06-22 20:01:31.257183 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-06-22 20:01:31.257189 | orchestrator | 2025-06-22 20:01:31.257195 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2025-06-22 20:01:31.257202 | orchestrator | Sunday 22 June 2025 19:53:04 +0000 (0:00:00.784) 0:03:14.759 *********** 2025-06-22 20:01:31.257208 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-06-22 20:01:31.257214 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-06-22 20:01:31.257221 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-06-22 20:01:31.257227 | 
orchestrator | 2025-06-22 20:01:31.257233 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 2025-06-22 20:01:31.257239 | orchestrator | Sunday 22 June 2025 19:53:05 +0000 (0:00:01.597) 0:03:16.357 *********** 2025-06-22 20:01:31.257246 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:31.257252 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:31.257258 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:31.257264 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.257270 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.257277 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.257283 | orchestrator | 2025-06-22 20:01:31.257289 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2025-06-22 20:01:31.257295 | orchestrator | Sunday 22 June 2025 19:53:06 +0000 (0:00:00.581) 0:03:16.938 *********** 2025-06-22 20:01:31.257302 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:31.257311 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:31.257317 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:31.257323 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.257330 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.257336 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.257342 | orchestrator | 2025-06-22 20:01:31.257348 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2025-06-22 20:01:31.257354 | orchestrator | Sunday 22 June 2025 19:53:07 +0000 (0:00:00.718) 0:03:17.656 *********** 2025-06-22 20:01:31.257365 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.257371 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.257378 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.257384 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.257390 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.257396 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.257402 | orchestrator | 2025-06-22 20:01:31.257408 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2025-06-22 20:01:31.257415 | orchestrator | Sunday 22 June 2025 19:53:07 +0000 (0:00:00.588) 0:03:18.245 *********** 2025-06-22 20:01:31.257432 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.257439 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.257445 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.257533 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.257540 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.257546 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.257552 | orchestrator | 2025-06-22 20:01:31.257558 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2025-06-22 20:01:31.257565 | orchestrator | Sunday 22 June 2025 19:53:08 +0000 (0:00:00.897) 0:03:19.142 *********** 2025-06-22 20:01:31.257571 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.257577 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.257583 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.257589 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.257596 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.257602 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.257608 | orchestrator | 2025-06-22 
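The next tasks determine how many OSDs this play will end up managing: a dry-run report for devices that would still be created, plus a listing of OSDs that already exist on each storage node. A hedged sketch of the two ceph-volume calls the task names refer to (device paths are placeholders; in this containerized deployment the calls are wrapped in the ceph container):

    # dry run: how many OSDs would be created from the given devices (placeholder paths)
    ceph-volume lvm batch --report --format=json /dev/sdb /dev/sdc
    # inventory: which OSDs have already been prepared on this host
    ceph-volume lvm list --format=json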
20:01:31.257614 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-06-22 20:01:31.257621 | orchestrator | Sunday 22 June 2025 19:53:09 +0000 (0:00:00.779) 0:03:19.921 *********** 2025-06-22 20:01:31.257627 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.257633 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.257639 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.257646 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.257652 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.257658 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.257664 | orchestrator | 2025-06-22 20:01:31.257670 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-06-22 20:01:31.257677 | orchestrator | Sunday 22 June 2025 19:53:10 +0000 (0:00:00.843) 0:03:20.764 *********** 2025-06-22 20:01:31.257683 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.257689 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.257695 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.257702 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.257708 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.257714 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.257720 | orchestrator | 2025-06-22 20:01:31.257727 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-06-22 20:01:31.257733 | orchestrator | Sunday 22 June 2025 19:53:10 +0000 (0:00:00.655) 0:03:21.420 *********** 2025-06-22 20:01:31.257739 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.257746 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.257752 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.257758 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.257764 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.257770 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.257777 | orchestrator | 2025-06-22 20:01:31.257783 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-06-22 20:01:31.257789 | orchestrator | Sunday 22 June 2025 19:53:11 +0000 (0:00:00.842) 0:03:22.262 *********** 2025-06-22 20:01:31.257795 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.257802 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.257817 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.257823 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:31.257829 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:31.257835 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:31.257842 | orchestrator | 2025-06-22 20:01:31.257848 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2025-06-22 20:01:31.257854 | orchestrator | Sunday 22 June 2025 19:53:14 +0000 (0:00:02.721) 0:03:24.984 *********** 2025-06-22 20:01:31.257861 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:31.257867 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:31.257873 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:31.257879 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.257885 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.257892 | orchestrator | skipping: 
[testbed-node-2] 2025-06-22 20:01:31.257898 | orchestrator | 2025-06-22 20:01:31.257904 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2025-06-22 20:01:31.257910 | orchestrator | Sunday 22 June 2025 19:53:15 +0000 (0:00:01.033) 0:03:26.018 *********** 2025-06-22 20:01:31.257916 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:31.257923 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:31.257929 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:31.257935 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.257941 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.257948 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.257954 | orchestrator | 2025-06-22 20:01:31.257960 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2025-06-22 20:01:31.257966 | orchestrator | Sunday 22 June 2025 19:53:16 +0000 (0:00:00.775) 0:03:26.793 *********** 2025-06-22 20:01:31.257972 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.257979 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.257985 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.257991 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.257997 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.258007 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.258033 | orchestrator | 2025-06-22 20:01:31.258041 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2025-06-22 20:01:31.258047 | orchestrator | Sunday 22 June 2025 19:53:17 +0000 (0:00:01.191) 0:03:27.985 *********** 2025-06-22 20:01:31.258054 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-06-22 20:01:31.258060 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-06-22 20:01:31.258066 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-06-22 20:01:31.258073 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.258079 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.258085 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.258091 | orchestrator | 2025-06-22 20:01:31.258109 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2025-06-22 20:01:31.258116 | orchestrator | Sunday 22 June 2025 19:53:18 +0000 (0:00:00.711) 0:03:28.697 *********** 2025-06-22 20:01:31.258124 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])  2025-06-22 20:01:31.258132 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])  2025-06-22 20:01:31.258178 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': 
{'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])  2025-06-22 20:01:31.258186 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.258194 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])  2025-06-22 20:01:31.258201 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])  2025-06-22 20:01:31.258209 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])  2025-06-22 20:01:31.258216 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.258223 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.258230 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.258236 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.258242 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.258248 | orchestrator | 2025-06-22 20:01:31.258254 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2025-06-22 20:01:31.258260 | orchestrator | Sunday 22 June 2025 19:53:19 +0000 (0:00:00.986) 0:03:29.684 *********** 2025-06-22 20:01:31.258267 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.258273 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.258279 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.258285 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.258291 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.258297 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.258303 | orchestrator | 2025-06-22 20:01:31.258309 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2025-06-22 20:01:31.258315 | orchestrator | Sunday 22 June 2025 19:53:19 +0000 (0:00:00.606) 0:03:30.291 *********** 2025-06-22 20:01:31.258321 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.258327 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.258334 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.258340 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.258346 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.258353 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.258359 | orchestrator | 2025-06-22 20:01:31.258365 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-06-22 20:01:31.258372 | orchestrator | Sunday 22 June 2025 19:53:20 +0000 (0:00:00.822) 0:03:31.113 *********** 2025-06-22 20:01:31.258378 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.258384 | 
orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.258393 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.258400 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.258406 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.258411 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.258417 | orchestrator | 2025-06-22 20:01:31.258422 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-06-22 20:01:31.258428 | orchestrator | Sunday 22 June 2025 19:53:21 +0000 (0:00:00.621) 0:03:31.734 *********** 2025-06-22 20:01:31.258433 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.258442 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.258447 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.258452 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.258458 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.258463 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.258469 | orchestrator | 2025-06-22 20:01:31.258474 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-06-22 20:01:31.258480 | orchestrator | Sunday 22 June 2025 19:53:21 +0000 (0:00:00.813) 0:03:32.548 *********** 2025-06-22 20:01:31.258485 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.258500 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.258506 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.258512 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.258517 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.258523 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.258528 | orchestrator | 2025-06-22 20:01:31.258534 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-06-22 20:01:31.258539 | orchestrator | Sunday 22 June 2025 19:53:22 +0000 (0:00:00.632) 0:03:33.180 *********** 2025-06-22 20:01:31.258545 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:31.258550 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:31.258555 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:31.258561 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.258566 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.258572 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.258577 | orchestrator | 2025-06-22 20:01:31.258582 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-06-22 20:01:31.258588 | orchestrator | Sunday 22 June 2025 19:53:23 +0000 (0:00:00.863) 0:03:34.044 *********** 2025-06-22 20:01:31.258593 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-22 20:01:31.258599 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-22 20:01:31.258604 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-22 20:01:31.258610 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.258615 | orchestrator | 2025-06-22 20:01:31.258621 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-06-22 20:01:31.258626 | orchestrator | Sunday 22 June 2025 19:53:23 +0000 (0:00:00.450) 0:03:34.495 *********** 2025-06-22 20:01:31.258632 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-22 20:01:31.258637 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-4)  2025-06-22 20:01:31.258642 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-22 20:01:31.258648 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.258653 | orchestrator | 2025-06-22 20:01:31.258659 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-06-22 20:01:31.258664 | orchestrator | Sunday 22 June 2025 19:53:24 +0000 (0:00:00.440) 0:03:34.935 *********** 2025-06-22 20:01:31.258670 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-22 20:01:31.258675 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-22 20:01:31.258681 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-22 20:01:31.258686 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.258691 | orchestrator | 2025-06-22 20:01:31.258697 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-06-22 20:01:31.258702 | orchestrator | Sunday 22 June 2025 19:53:24 +0000 (0:00:00.459) 0:03:35.394 *********** 2025-06-22 20:01:31.258707 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:31.258713 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:31.258718 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:31.258724 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.258729 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.258734 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.258740 | orchestrator | 2025-06-22 20:01:31.258745 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-06-22 20:01:31.258754 | orchestrator | Sunday 22 June 2025 19:53:25 +0000 (0:00:00.799) 0:03:36.194 *********** 2025-06-22 20:01:31.258760 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-06-22 20:01:31.258765 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-06-22 20:01:31.258771 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-06-22 20:01:31.258776 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-06-22 20:01:31.258782 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.258787 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-06-22 20:01:31.258793 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.258798 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-06-22 20:01:31.258803 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.258809 | orchestrator | 2025-06-22 20:01:31.258814 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2025-06-22 20:01:31.258820 | orchestrator | Sunday 22 June 2025 19:53:28 +0000 (0:00:02.584) 0:03:38.778 *********** 2025-06-22 20:01:31.258825 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:01:31.258831 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:01:31.258836 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:01:31.258841 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:01:31.258847 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:01:31.258852 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:01:31.258858 | orchestrator | 2025-06-22 20:01:31.258863 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-06-22 20:01:31.258869 | orchestrator | Sunday 22 June 2025 19:53:31 +0000 (0:00:03.099) 0:03:41.878 *********** 2025-06-22 20:01:31.258874 | orchestrator | 
changed: [testbed-node-3] 2025-06-22 20:01:31.258879 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:01:31.258885 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:01:31.258893 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:01:31.258899 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:01:31.258904 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:01:31.258910 | orchestrator | 2025-06-22 20:01:31.258915 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2025-06-22 20:01:31.258921 | orchestrator | Sunday 22 June 2025 19:53:32 +0000 (0:00:00.955) 0:03:42.833 *********** 2025-06-22 20:01:31.258926 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.258931 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.258937 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.258942 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:01:31.258948 | orchestrator | 2025-06-22 20:01:31.258953 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2025-06-22 20:01:31.258959 | orchestrator | Sunday 22 June 2025 19:53:33 +0000 (0:00:00.853) 0:03:43.686 *********** 2025-06-22 20:01:31.258964 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:01:31.258970 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:01:31.258975 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:01:31.258981 | orchestrator | 2025-06-22 20:01:31.258996 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2025-06-22 20:01:31.259002 | orchestrator | Sunday 22 June 2025 19:53:33 +0000 (0:00:00.314) 0:03:44.000 *********** 2025-06-22 20:01:31.259007 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:01:31.259013 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:01:31.259018 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:01:31.259024 | orchestrator | 2025-06-22 20:01:31.259029 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2025-06-22 20:01:31.259035 | orchestrator | Sunday 22 June 2025 19:53:34 +0000 (0:00:01.279) 0:03:45.280 *********** 2025-06-22 20:01:31.259040 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-06-22 20:01:31.259046 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-06-22 20:01:31.259051 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-06-22 20:01:31.259060 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.259065 | orchestrator | 2025-06-22 20:01:31.259071 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2025-06-22 20:01:31.259076 | orchestrator | Sunday 22 June 2025 19:53:35 +0000 (0:00:00.497) 0:03:45.778 *********** 2025-06-22 20:01:31.259082 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:01:31.259087 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:01:31.259093 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:01:31.259098 | orchestrator | 2025-06-22 20:01:31.259103 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2025-06-22 20:01:31.259109 | orchestrator | Sunday 22 June 2025 19:53:35 +0000 (0:00:00.297) 0:03:46.075 *********** 2025-06-22 20:01:31.259114 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.259120 | orchestrator | skipping: 
[testbed-node-1] 2025-06-22 20:01:31.259125 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.259131 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 20:01:31.259136 | orchestrator | 2025-06-22 20:01:31.259153 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2025-06-22 20:01:31.259158 | orchestrator | Sunday 22 June 2025 19:53:36 +0000 (0:00:00.755) 0:03:46.831 *********** 2025-06-22 20:01:31.259164 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-22 20:01:31.259169 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-22 20:01:31.259174 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-22 20:01:31.259180 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.259185 | orchestrator | 2025-06-22 20:01:31.259190 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2025-06-22 20:01:31.259196 | orchestrator | Sunday 22 June 2025 19:53:36 +0000 (0:00:00.354) 0:03:47.186 *********** 2025-06-22 20:01:31.259201 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.259207 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.259212 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.259218 | orchestrator | 2025-06-22 20:01:31.259223 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2025-06-22 20:01:31.259228 | orchestrator | Sunday 22 June 2025 19:53:36 +0000 (0:00:00.291) 0:03:47.477 *********** 2025-06-22 20:01:31.259234 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.259239 | orchestrator | 2025-06-22 20:01:31.259244 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2025-06-22 20:01:31.259250 | orchestrator | Sunday 22 June 2025 19:53:37 +0000 (0:00:00.183) 0:03:47.661 *********** 2025-06-22 20:01:31.259255 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.259261 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.259266 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.259271 | orchestrator | 2025-06-22 20:01:31.259277 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-06-22 20:01:31.259282 | orchestrator | Sunday 22 June 2025 19:53:37 +0000 (0:00:00.258) 0:03:47.920 *********** 2025-06-22 20:01:31.259287 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.259293 | orchestrator | 2025-06-22 20:01:31.259298 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2025-06-22 20:01:31.259304 | orchestrator | Sunday 22 June 2025 19:53:37 +0000 (0:00:00.206) 0:03:48.127 *********** 2025-06-22 20:01:31.259309 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.259315 | orchestrator | 2025-06-22 20:01:31.259320 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-06-22 20:01:31.259325 | orchestrator | Sunday 22 June 2025 19:53:37 +0000 (0:00:00.214) 0:03:48.341 *********** 2025-06-22 20:01:31.259331 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.259336 | orchestrator | 2025-06-22 20:01:31.259342 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2025-06-22 20:01:31.259350 | orchestrator | Sunday 22 June 2025 
19:53:38 +0000 (0:00:00.297) 0:03:48.639 *********** 2025-06-22 20:01:31.259356 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.259361 | orchestrator | 2025-06-22 20:01:31.259367 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-06-22 20:01:31.259375 | orchestrator | Sunday 22 June 2025 19:53:38 +0000 (0:00:00.240) 0:03:48.879 *********** 2025-06-22 20:01:31.259380 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.259386 | orchestrator | 2025-06-22 20:01:31.259392 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2025-06-22 20:01:31.259397 | orchestrator | Sunday 22 June 2025 19:53:38 +0000 (0:00:00.225) 0:03:49.104 *********** 2025-06-22 20:01:31.259402 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-22 20:01:31.259408 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-22 20:01:31.259413 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-22 20:01:31.259419 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.259424 | orchestrator | 2025-06-22 20:01:31.259429 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-06-22 20:01:31.259435 | orchestrator | Sunday 22 June 2025 19:53:38 +0000 (0:00:00.365) 0:03:49.470 *********** 2025-06-22 20:01:31.259440 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.259455 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.259461 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.259466 | orchestrator | 2025-06-22 20:01:31.259472 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-06-22 20:01:31.259477 | orchestrator | Sunday 22 June 2025 19:53:39 +0000 (0:00:00.323) 0:03:49.793 *********** 2025-06-22 20:01:31.259483 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.259488 | orchestrator | 2025-06-22 20:01:31.259493 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2025-06-22 20:01:31.259499 | orchestrator | Sunday 22 June 2025 19:53:39 +0000 (0:00:00.205) 0:03:49.999 *********** 2025-06-22 20:01:31.259504 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.259510 | orchestrator | 2025-06-22 20:01:31.259515 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2025-06-22 20:01:31.259520 | orchestrator | Sunday 22 June 2025 19:53:39 +0000 (0:00:00.185) 0:03:50.185 *********** 2025-06-22 20:01:31.259526 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.259531 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.259536 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.259542 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 20:01:31.259547 | orchestrator | 2025-06-22 20:01:31.259553 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2025-06-22 20:01:31.259558 | orchestrator | Sunday 22 June 2025 19:53:40 +0000 (0:00:00.952) 0:03:51.137 *********** 2025-06-22 20:01:31.259563 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:31.259569 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:31.259574 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:31.259579 | orchestrator | 2025-06-22 20:01:31.259585 | 
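Each handler block in this play follows the same pattern: record a '<daemon>_handler_called' fact, copy a restart script into the temporary directory created above, and execute it only when that daemon type is already running and its configuration actually changed; on this initial deployment the restart steps are therefore skipped. A rough manual equivalent for one monitor node, assuming the per-hostname container units this play generates later (the unit name and the quorum check are assumptions, not taken from the generated script):

    # hypothetical manual restart of one containerized monitor
    systemctl restart ceph-mon@testbed-node-0
    ceph quorum_status --format json    # confirm the monitor re-joined the quorum before moving on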
orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2025-06-22 20:01:31.259590 | orchestrator | Sunday 22 June 2025 19:53:40 +0000 (0:00:00.291) 0:03:51.429 *********** 2025-06-22 20:01:31.259596 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:01:31.259601 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:01:31.259606 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:01:31.259612 | orchestrator | 2025-06-22 20:01:31.259617 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2025-06-22 20:01:31.259623 | orchestrator | Sunday 22 June 2025 19:53:42 +0000 (0:00:01.282) 0:03:52.711 *********** 2025-06-22 20:01:31.259628 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-22 20:01:31.259634 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-22 20:01:31.259639 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-22 20:01:31.259647 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.259652 | orchestrator | 2025-06-22 20:01:31.259658 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2025-06-22 20:01:31.259663 | orchestrator | Sunday 22 June 2025 19:53:42 +0000 (0:00:00.819) 0:03:53.531 *********** 2025-06-22 20:01:31.259669 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:31.259674 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:31.259680 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:31.259685 | orchestrator | 2025-06-22 20:01:31.259690 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2025-06-22 20:01:31.259696 | orchestrator | Sunday 22 June 2025 19:53:43 +0000 (0:00:00.278) 0:03:53.810 *********** 2025-06-22 20:01:31.259701 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.259706 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.259712 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.259717 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 20:01:31.259723 | orchestrator | 2025-06-22 20:01:31.259728 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-06-22 20:01:31.259733 | orchestrator | Sunday 22 June 2025 19:53:44 +0000 (0:00:00.896) 0:03:54.706 *********** 2025-06-22 20:01:31.259739 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:31.259744 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:31.259749 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:31.259755 | orchestrator | 2025-06-22 20:01:31.259760 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-06-22 20:01:31.259766 | orchestrator | Sunday 22 June 2025 19:53:44 +0000 (0:00:00.285) 0:03:54.991 *********** 2025-06-22 20:01:31.259771 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:01:31.259776 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:01:31.259782 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:01:31.259787 | orchestrator | 2025-06-22 20:01:31.259793 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-06-22 20:01:31.259798 | orchestrator | Sunday 22 June 2025 19:53:45 +0000 (0:00:01.369) 0:03:56.361 *********** 2025-06-22 20:01:31.259803 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-3)  2025-06-22 20:01:31.259809 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-22 20:01:31.259814 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-22 20:01:31.259823 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.259828 | orchestrator | 2025-06-22 20:01:31.259833 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-06-22 20:01:31.259839 | orchestrator | Sunday 22 June 2025 19:53:46 +0000 (0:00:00.811) 0:03:57.172 *********** 2025-06-22 20:01:31.259844 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:31.259849 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:31.259855 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:31.259860 | orchestrator | 2025-06-22 20:01:31.259865 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2025-06-22 20:01:31.259871 | orchestrator | Sunday 22 June 2025 19:53:46 +0000 (0:00:00.321) 0:03:57.494 *********** 2025-06-22 20:01:31.259876 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.259881 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.259887 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.259892 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.259898 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.259903 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.259908 | orchestrator | 2025-06-22 20:01:31.259914 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2025-06-22 20:01:31.259929 | orchestrator | Sunday 22 June 2025 19:53:47 +0000 (0:00:00.828) 0:03:58.323 *********** 2025-06-22 20:01:31.259935 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.259941 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.259950 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.259955 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:01:31.259961 | orchestrator | 2025-06-22 20:01:31.259966 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2025-06-22 20:01:31.259972 | orchestrator | Sunday 22 June 2025 19:53:48 +0000 (0:00:01.072) 0:03:59.395 *********** 2025-06-22 20:01:31.259977 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:01:31.259982 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:01:31.259988 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:01:31.259993 | orchestrator | 2025-06-22 20:01:31.259998 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2025-06-22 20:01:31.260004 | orchestrator | Sunday 22 June 2025 19:53:49 +0000 (0:00:00.289) 0:03:59.684 *********** 2025-06-22 20:01:31.260009 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:01:31.260015 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:01:31.260020 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:01:31.260025 | orchestrator | 2025-06-22 20:01:31.260031 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2025-06-22 20:01:31.260036 | orchestrator | Sunday 22 June 2025 19:53:50 +0000 (0:00:01.442) 0:04:01.127 *********** 2025-06-22 20:01:31.260042 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-06-22 20:01:31.260047 | orchestrator | skipping: 
[testbed-node-0] => (item=testbed-node-1)  2025-06-22 20:01:31.260052 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-06-22 20:01:31.260058 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.260063 | orchestrator | 2025-06-22 20:01:31.260068 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2025-06-22 20:01:31.260074 | orchestrator | Sunday 22 June 2025 19:53:51 +0000 (0:00:00.687) 0:04:01.814 *********** 2025-06-22 20:01:31.260079 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:01:31.260084 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:01:31.260090 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:01:31.260095 | orchestrator | 2025-06-22 20:01:31.260101 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2025-06-22 20:01:31.260106 | orchestrator | 2025-06-22 20:01:31.260111 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-06-22 20:01:31.260117 | orchestrator | Sunday 22 June 2025 19:53:51 +0000 (0:00:00.803) 0:04:02.617 *********** 2025-06-22 20:01:31.260123 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:01:31.260128 | orchestrator | 2025-06-22 20:01:31.260133 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-06-22 20:01:31.260151 | orchestrator | Sunday 22 June 2025 19:53:52 +0000 (0:00:00.459) 0:04:03.076 *********** 2025-06-22 20:01:31.260156 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:01:31.260162 | orchestrator | 2025-06-22 20:01:31.260167 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-06-22 20:01:31.260173 | orchestrator | Sunday 22 June 2025 19:53:53 +0000 (0:00:00.691) 0:04:03.768 *********** 2025-06-22 20:01:31.260178 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:01:31.260183 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:01:31.260189 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:01:31.260194 | orchestrator | 2025-06-22 20:01:31.260200 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-06-22 20:01:31.260205 | orchestrator | Sunday 22 June 2025 19:53:53 +0000 (0:00:00.812) 0:04:04.580 *********** 2025-06-22 20:01:31.260211 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.260216 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.260221 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.260227 | orchestrator | 2025-06-22 20:01:31.260232 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-06-22 20:01:31.260241 | orchestrator | Sunday 22 June 2025 19:53:54 +0000 (0:00:00.371) 0:04:04.952 *********** 2025-06-22 20:01:31.260247 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.260252 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.260257 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.260263 | orchestrator | 2025-06-22 20:01:31.260268 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-06-22 20:01:31.260274 | orchestrator | Sunday 22 June 2025 19:53:54 +0000 (0:00:00.378) 0:04:05.330 *********** 2025-06-22 
20:01:31.260279 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.260284 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.260290 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.260295 | orchestrator | 2025-06-22 20:01:31.260301 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-06-22 20:01:31.260309 | orchestrator | Sunday 22 June 2025 19:53:55 +0000 (0:00:00.495) 0:04:05.826 *********** 2025-06-22 20:01:31.260315 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:01:31.260320 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:01:31.260326 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:01:31.260331 | orchestrator | 2025-06-22 20:01:31.260336 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-06-22 20:01:31.260342 | orchestrator | Sunday 22 June 2025 19:53:55 +0000 (0:00:00.764) 0:04:06.590 *********** 2025-06-22 20:01:31.260347 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.260352 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.260358 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.260363 | orchestrator | 2025-06-22 20:01:31.260369 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-06-22 20:01:31.260374 | orchestrator | Sunday 22 June 2025 19:53:56 +0000 (0:00:00.282) 0:04:06.872 *********** 2025-06-22 20:01:31.260380 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.260385 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.260390 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.260396 | orchestrator | 2025-06-22 20:01:31.260411 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-06-22 20:01:31.260417 | orchestrator | Sunday 22 June 2025 19:53:56 +0000 (0:00:00.279) 0:04:07.151 *********** 2025-06-22 20:01:31.260422 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:01:31.260428 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:01:31.260433 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:01:31.260438 | orchestrator | 2025-06-22 20:01:31.260444 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-06-22 20:01:31.260449 | orchestrator | Sunday 22 June 2025 19:53:57 +0000 (0:00:00.862) 0:04:08.014 *********** 2025-06-22 20:01:31.260455 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:01:31.260460 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:01:31.260466 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:01:31.260471 | orchestrator | 2025-06-22 20:01:31.260476 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-06-22 20:01:31.260482 | orchestrator | Sunday 22 June 2025 19:53:58 +0000 (0:00:00.722) 0:04:08.737 *********** 2025-06-22 20:01:31.260487 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.260493 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.260498 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.260504 | orchestrator | 2025-06-22 20:01:31.260509 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-06-22 20:01:31.260514 | orchestrator | Sunday 22 June 2025 19:53:58 +0000 (0:00:00.271) 0:04:09.008 *********** 2025-06-22 20:01:31.260520 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:01:31.260525 | orchestrator | ok: 
[testbed-node-1] 2025-06-22 20:01:31.260531 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:01:31.260536 | orchestrator | 2025-06-22 20:01:31.260541 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-06-22 20:01:31.260547 | orchestrator | Sunday 22 June 2025 19:53:58 +0000 (0:00:00.293) 0:04:09.301 *********** 2025-06-22 20:01:31.260558 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.260563 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.260569 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.260574 | orchestrator | 2025-06-22 20:01:31.260580 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-06-22 20:01:31.260585 | orchestrator | Sunday 22 June 2025 19:53:59 +0000 (0:00:00.435) 0:04:09.737 *********** 2025-06-22 20:01:31.260590 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.260596 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.260601 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.260606 | orchestrator | 2025-06-22 20:01:31.260612 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-06-22 20:01:31.260617 | orchestrator | Sunday 22 June 2025 19:53:59 +0000 (0:00:00.326) 0:04:10.063 *********** 2025-06-22 20:01:31.260623 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.260628 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.260633 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.260639 | orchestrator | 2025-06-22 20:01:31.260644 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-06-22 20:01:31.260650 | orchestrator | Sunday 22 June 2025 19:53:59 +0000 (0:00:00.279) 0:04:10.343 *********** 2025-06-22 20:01:31.260655 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.260661 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.260666 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.260671 | orchestrator | 2025-06-22 20:01:31.260677 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-06-22 20:01:31.260682 | orchestrator | Sunday 22 June 2025 19:53:59 +0000 (0:00:00.270) 0:04:10.614 *********** 2025-06-22 20:01:31.260688 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.260693 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.260698 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.260704 | orchestrator | 2025-06-22 20:01:31.260709 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-06-22 20:01:31.260715 | orchestrator | Sunday 22 June 2025 19:54:00 +0000 (0:00:00.475) 0:04:11.089 *********** 2025-06-22 20:01:31.260720 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:01:31.260725 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:01:31.260731 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:01:31.260736 | orchestrator | 2025-06-22 20:01:31.260742 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-06-22 20:01:31.260747 | orchestrator | Sunday 22 June 2025 19:54:00 +0000 (0:00:00.327) 0:04:11.416 *********** 2025-06-22 20:01:31.260753 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:01:31.260758 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:01:31.260763 | orchestrator | ok: [testbed-node-2] 2025-06-22 
20:01:31.260769 | orchestrator | 2025-06-22 20:01:31.260774 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-06-22 20:01:31.260779 | orchestrator | Sunday 22 June 2025 19:54:01 +0000 (0:00:00.353) 0:04:11.770 *********** 2025-06-22 20:01:31.260785 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:01:31.260790 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:01:31.260796 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:01:31.260801 | orchestrator | 2025-06-22 20:01:31.260807 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2025-06-22 20:01:31.260815 | orchestrator | Sunday 22 June 2025 19:54:01 +0000 (0:00:00.856) 0:04:12.626 *********** 2025-06-22 20:01:31.260820 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:01:31.260826 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:01:31.260831 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:01:31.260837 | orchestrator | 2025-06-22 20:01:31.260842 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2025-06-22 20:01:31.260847 | orchestrator | Sunday 22 June 2025 19:54:02 +0000 (0:00:00.298) 0:04:12.924 *********** 2025-06-22 20:01:31.260853 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:01:31.260862 | orchestrator | 2025-06-22 20:01:31.260868 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2025-06-22 20:01:31.260873 | orchestrator | Sunday 22 June 2025 19:54:02 +0000 (0:00:00.622) 0:04:13.547 *********** 2025-06-22 20:01:31.260879 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.260884 | orchestrator | 2025-06-22 20:01:31.260889 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2025-06-22 20:01:31.260904 | orchestrator | Sunday 22 June 2025 19:54:03 +0000 (0:00:00.158) 0:04:13.706 *********** 2025-06-22 20:01:31.260910 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-06-22 20:01:31.260916 | orchestrator | 2025-06-22 20:01:31.260921 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2025-06-22 20:01:31.260927 | orchestrator | Sunday 22 June 2025 19:54:04 +0000 (0:00:01.356) 0:04:15.062 *********** 2025-06-22 20:01:31.260932 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:01:31.260937 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:01:31.260942 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:01:31.260948 | orchestrator | 2025-06-22 20:01:31.260953 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2025-06-22 20:01:31.260959 | orchestrator | Sunday 22 June 2025 19:54:04 +0000 (0:00:00.387) 0:04:15.450 *********** 2025-06-22 20:01:31.260964 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:01:31.260969 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:01:31.260975 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:01:31.260980 | orchestrator | 2025-06-22 20:01:31.260985 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2025-06-22 20:01:31.260991 | orchestrator | Sunday 22 June 2025 19:54:05 +0000 (0:00:00.463) 0:04:15.913 *********** 2025-06-22 20:01:31.260996 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:01:31.261002 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:01:31.261007 | orchestrator | 
changed: [testbed-node-1] 2025-06-22 20:01:31.261012 | orchestrator | 2025-06-22 20:01:31.261018 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2025-06-22 20:01:31.261023 | orchestrator | Sunday 22 June 2025 19:54:06 +0000 (0:00:01.460) 0:04:17.374 *********** 2025-06-22 20:01:31.261028 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:01:31.261034 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:01:31.261039 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:01:31.261044 | orchestrator | 2025-06-22 20:01:31.261050 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2025-06-22 20:01:31.261055 | orchestrator | Sunday 22 June 2025 19:54:07 +0000 (0:00:01.114) 0:04:18.488 *********** 2025-06-22 20:01:31.261061 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:01:31.261066 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:01:31.261071 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:01:31.261077 | orchestrator | 2025-06-22 20:01:31.261082 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2025-06-22 20:01:31.261088 | orchestrator | Sunday 22 June 2025 19:54:08 +0000 (0:00:00.690) 0:04:19.179 *********** 2025-06-22 20:01:31.261093 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:01:31.261098 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:01:31.261104 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:01:31.261109 | orchestrator | 2025-06-22 20:01:31.261115 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2025-06-22 20:01:31.261120 | orchestrator | Sunday 22 June 2025 19:54:09 +0000 (0:00:00.674) 0:04:19.853 *********** 2025-06-22 20:01:31.261125 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:01:31.261131 | orchestrator | 2025-06-22 20:01:31.261136 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2025-06-22 20:01:31.261153 | orchestrator | Sunday 22 June 2025 19:54:10 +0000 (0:00:01.371) 0:04:21.225 *********** 2025-06-22 20:01:31.261159 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:01:31.261164 | orchestrator | 2025-06-22 20:01:31.261170 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2025-06-22 20:01:31.261179 | orchestrator | Sunday 22 June 2025 19:54:11 +0000 (0:00:00.702) 0:04:21.928 *********** 2025-06-22 20:01:31.261184 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-22 20:01:31.261190 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 20:01:31.261195 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 20:01:31.261200 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-06-22 20:01:31.261206 | orchestrator | ok: [testbed-node-1] => (item=None) 2025-06-22 20:01:31.261211 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-06-22 20:01:31.261217 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-06-22 20:01:31.261222 | orchestrator | changed: [testbed-node-0 -> {{ item }}] 2025-06-22 20:01:31.261227 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-06-22 20:01:31.261233 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2025-06-22 20:01:31.261238 | orchestrator | 
ok: [testbed-node-2] => (item=None) 2025-06-22 20:01:31.261244 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2025-06-22 20:01:31.261249 | orchestrator | 2025-06-22 20:01:31.261255 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2025-06-22 20:01:31.261260 | orchestrator | Sunday 22 June 2025 19:54:15 +0000 (0:00:03.850) 0:04:25.778 *********** 2025-06-22 20:01:31.261265 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:01:31.261271 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:01:31.261276 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:01:31.261281 | orchestrator | 2025-06-22 20:01:31.261287 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2025-06-22 20:01:31.261295 | orchestrator | Sunday 22 June 2025 19:54:16 +0000 (0:00:01.447) 0:04:27.225 *********** 2025-06-22 20:01:31.261300 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:01:31.261306 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:01:31.261311 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:01:31.261317 | orchestrator | 2025-06-22 20:01:31.261322 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2025-06-22 20:01:31.261328 | orchestrator | Sunday 22 June 2025 19:54:16 +0000 (0:00:00.348) 0:04:27.574 *********** 2025-06-22 20:01:31.261333 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:01:31.261338 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:01:31.261344 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:01:31.261349 | orchestrator | 2025-06-22 20:01:31.261355 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2025-06-22 20:01:31.261360 | orchestrator | Sunday 22 June 2025 19:54:17 +0000 (0:00:00.353) 0:04:27.927 *********** 2025-06-22 20:01:31.261365 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:01:31.261371 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:01:31.261376 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:01:31.261382 | orchestrator | 2025-06-22 20:01:31.261397 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2025-06-22 20:01:31.261403 | orchestrator | Sunday 22 June 2025 19:54:19 +0000 (0:00:01.814) 0:04:29.742 *********** 2025-06-22 20:01:31.261409 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:01:31.261414 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:01:31.261419 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:01:31.261425 | orchestrator | 2025-06-22 20:01:31.261430 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2025-06-22 20:01:31.261436 | orchestrator | Sunday 22 June 2025 19:54:20 +0000 (0:00:01.664) 0:04:31.407 *********** 2025-06-22 20:01:31.261441 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.261446 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.261452 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.261457 | orchestrator | 2025-06-22 20:01:31.261463 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2025-06-22 20:01:31.261472 | orchestrator | Sunday 22 June 2025 19:54:21 +0000 (0:00:00.320) 0:04:31.727 *********** 2025-06-22 20:01:31.261477 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:01:31.261482 | orchestrator | 
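The monitor bootstrap above mirrors the standard manual Ceph sequence: build an initial monmap containing all three monitors, then run mkfs for each monitor with the initial keyring. A minimal sketch using the monitor addresses from this play (fsid and file paths are placeholders; in this deployment both commands run inside the ceph container):

    # build a monmap listing the three monitors (placeholder fsid)
    monmaptool --create --fsid <fsid> \
        --add testbed-node-0 192.168.16.10 \
        --add testbed-node-1 192.168.16.11 \
        --add testbed-node-2 192.168.16.12 \
        /tmp/monmap
    # initialize one monitor's data directory from that map and the initial keyring
    ceph-mon --cluster ceph --mkfs -i testbed-node-0 \
        --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring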
2025-06-22 20:01:31.261488 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2025-06-22 20:01:31.261493 | orchestrator | Sunday 22 June 2025 19:54:21 +0000 (0:00:00.703) 0:04:32.431 *********** 2025-06-22 20:01:31.261499 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.261504 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.261510 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.261515 | orchestrator | 2025-06-22 20:01:31.261521 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2025-06-22 20:01:31.261526 | orchestrator | Sunday 22 June 2025 19:54:22 +0000 (0:00:00.606) 0:04:33.038 *********** 2025-06-22 20:01:31.261531 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.261537 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.261542 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.261548 | orchestrator | 2025-06-22 20:01:31.261553 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2025-06-22 20:01:31.261559 | orchestrator | Sunday 22 June 2025 19:54:22 +0000 (0:00:00.323) 0:04:33.361 *********** 2025-06-22 20:01:31.261564 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:01:31.261570 | orchestrator | 2025-06-22 20:01:31.261575 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2025-06-22 20:01:31.261581 | orchestrator | Sunday 22 June 2025 19:54:23 +0000 (0:00:00.473) 0:04:33.835 *********** 2025-06-22 20:01:31.261586 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:01:31.261591 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:01:31.261597 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:01:31.261602 | orchestrator | 2025-06-22 20:01:31.261608 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2025-06-22 20:01:31.261613 | orchestrator | Sunday 22 June 2025 19:54:25 +0000 (0:00:01.979) 0:04:35.814 *********** 2025-06-22 20:01:31.261619 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:01:31.261624 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:01:31.261629 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:01:31.261635 | orchestrator | 2025-06-22 20:01:31.261640 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2025-06-22 20:01:31.261646 | orchestrator | Sunday 22 June 2025 19:54:26 +0000 (0:00:00.998) 0:04:36.813 *********** 2025-06-22 20:01:31.261651 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:01:31.261656 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:01:31.261662 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:01:31.261667 | orchestrator | 2025-06-22 20:01:31.261673 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2025-06-22 20:01:31.261678 | orchestrator | Sunday 22 June 2025 19:54:27 +0000 (0:00:01.620) 0:04:38.433 *********** 2025-06-22 20:01:31.261683 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:01:31.261689 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:01:31.261694 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:01:31.261700 | orchestrator | 2025-06-22 20:01:31.261705 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2025-06-22 
20:01:31.261711 | orchestrator | Sunday 22 June 2025 19:54:29 +0000 (0:00:01.964) 0:04:40.398 *********** 2025-06-22 20:01:31.261716 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:01:31.261721 | orchestrator | 2025-06-22 20:01:31.261727 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2025-06-22 20:01:31.261732 | orchestrator | Sunday 22 June 2025 19:54:30 +0000 (0:00:00.681) 0:04:41.080 *********** 2025-06-22 20:01:31.261738 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left). 2025-06-22 20:01:31.261746 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:01:31.261752 | orchestrator | 2025-06-22 20:01:31.261760 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2025-06-22 20:01:31.261766 | orchestrator | Sunday 22 June 2025 19:54:52 +0000 (0:00:21.924) 0:05:03.005 *********** 2025-06-22 20:01:31.261771 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:01:31.261776 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:01:31.261782 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:01:31.261787 | orchestrator | 2025-06-22 20:01:31.261793 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2025-06-22 20:01:31.261798 | orchestrator | Sunday 22 June 2025 19:55:03 +0000 (0:00:10.729) 0:05:13.734 *********** 2025-06-22 20:01:31.261804 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.261809 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.261814 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.261820 | orchestrator | 2025-06-22 20:01:31.261825 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2025-06-22 20:01:31.261831 | orchestrator | Sunday 22 June 2025 19:55:03 +0000 (0:00:00.253) 0:05:13.988 *********** 2025-06-22 20:01:31.261847 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__a1997d5ac6a197f898cd4263d803ebe0452d8fe3'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2025-06-22 20:01:31.261854 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__a1997d5ac6a197f898cd4263d803ebe0452d8fe3'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2025-06-22 20:01:31.261860 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__a1997d5ac6a197f898cd4263d803ebe0452d8fe3'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2025-06-22 20:01:31.261867 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 
'osd_crush_chooseleaf_type': '__omit_place_holder__a1997d5ac6a197f898cd4263d803ebe0452d8fe3'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2025-06-22 20:01:31.261872 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__a1997d5ac6a197f898cd4263d803ebe0452d8fe3'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2025-06-22 20:01:31.261879 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__a1997d5ac6a197f898cd4263d803ebe0452d8fe3'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__a1997d5ac6a197f898cd4263d803ebe0452d8fe3'}])  2025-06-22 20:01:31.261886 | orchestrator | 2025-06-22 20:01:31.261891 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-06-22 20:01:31.261897 | orchestrator | Sunday 22 June 2025 19:55:18 +0000 (0:00:15.377) 0:05:29.365 *********** 2025-06-22 20:01:31.261902 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.261908 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.261917 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.261922 | orchestrator | 2025-06-22 20:01:31.261928 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2025-06-22 20:01:31.261933 | orchestrator | Sunday 22 June 2025 19:55:19 +0000 (0:00:00.313) 0:05:29.679 *********** 2025-06-22 20:01:31.261939 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:01:31.261944 | orchestrator | 2025-06-22 20:01:31.261950 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2025-06-22 20:01:31.261955 | orchestrator | Sunday 22 June 2025 19:55:19 +0000 (0:00:00.690) 0:05:30.370 *********** 2025-06-22 20:01:31.261961 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:01:31.261966 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:01:31.261971 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:01:31.261977 | orchestrator | 2025-06-22 20:01:31.261982 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2025-06-22 20:01:31.261988 | orchestrator | Sunday 22 June 2025 19:55:20 +0000 (0:00:00.313) 0:05:30.683 *********** 2025-06-22 20:01:31.261993 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.261998 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.262007 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.262030 | orchestrator | 2025-06-22 20:01:31.262037 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2025-06-22 20:01:31.262043 | orchestrator | Sunday 22 June 2025 19:55:20 +0000 (0:00:00.326) 0:05:31.009 *********** 2025-06-22 20:01:31.262048 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-06-22 20:01:31.262054 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-06-22 20:01:31.262059 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-06-22 20:01:31.262065 | 
orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.262070 | orchestrator | 2025-06-22 20:01:31.262076 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2025-06-22 20:01:31.262081 | orchestrator | Sunday 22 June 2025 19:55:21 +0000 (0:00:00.708) 0:05:31.718 *********** 2025-06-22 20:01:31.262086 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:01:31.262092 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:01:31.262097 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:01:31.262103 | orchestrator | 2025-06-22 20:01:31.262119 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2025-06-22 20:01:31.262125 | orchestrator | 2025-06-22 20:01:31.262130 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-06-22 20:01:31.262136 | orchestrator | Sunday 22 June 2025 19:55:21 +0000 (0:00:00.699) 0:05:32.417 *********** 2025-06-22 20:01:31.262171 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:01:31.262177 | orchestrator | 2025-06-22 20:01:31.262182 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-06-22 20:01:31.262188 | orchestrator | Sunday 22 June 2025 19:55:22 +0000 (0:00:00.470) 0:05:32.888 *********** 2025-06-22 20:01:31.262193 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:01:31.262199 | orchestrator | 2025-06-22 20:01:31.262204 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-06-22 20:01:31.262210 | orchestrator | Sunday 22 June 2025 19:55:22 +0000 (0:00:00.602) 0:05:33.491 *********** 2025-06-22 20:01:31.262215 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:01:31.262224 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:01:31.262233 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:01:31.262243 | orchestrator | 2025-06-22 20:01:31.262252 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-06-22 20:01:31.262260 | orchestrator | Sunday 22 June 2025 19:55:23 +0000 (0:00:00.646) 0:05:34.138 *********** 2025-06-22 20:01:31.262269 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.262285 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.262295 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.262304 | orchestrator | 2025-06-22 20:01:31.262314 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-06-22 20:01:31.262320 | orchestrator | Sunday 22 June 2025 19:55:23 +0000 (0:00:00.285) 0:05:34.423 *********** 2025-06-22 20:01:31.262326 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.262331 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.262336 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.262342 | orchestrator | 2025-06-22 20:01:31.262347 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-06-22 20:01:31.262353 | orchestrator | Sunday 22 June 2025 19:55:24 +0000 (0:00:00.559) 0:05:34.983 *********** 2025-06-22 20:01:31.262358 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.262363 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.262368 | orchestrator | 
skipping: [testbed-node-2] 2025-06-22 20:01:31.262372 | orchestrator | 2025-06-22 20:01:31.262377 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-06-22 20:01:31.262382 | orchestrator | Sunday 22 June 2025 19:55:24 +0000 (0:00:00.350) 0:05:35.333 *********** 2025-06-22 20:01:31.262387 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:01:31.262391 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:01:31.262396 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:01:31.262401 | orchestrator | 2025-06-22 20:01:31.262406 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-06-22 20:01:31.262411 | orchestrator | Sunday 22 June 2025 19:55:25 +0000 (0:00:00.732) 0:05:36.065 *********** 2025-06-22 20:01:31.262415 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.262420 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.262425 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.262430 | orchestrator | 2025-06-22 20:01:31.262434 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-06-22 20:01:31.262439 | orchestrator | Sunday 22 June 2025 19:55:25 +0000 (0:00:00.351) 0:05:36.417 *********** 2025-06-22 20:01:31.262444 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.262449 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.262454 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.262458 | orchestrator | 2025-06-22 20:01:31.262463 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-06-22 20:01:31.262468 | orchestrator | Sunday 22 June 2025 19:55:26 +0000 (0:00:00.750) 0:05:37.168 *********** 2025-06-22 20:01:31.262473 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:01:31.262478 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:01:31.262482 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:01:31.262487 | orchestrator | 2025-06-22 20:01:31.262492 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-06-22 20:01:31.262497 | orchestrator | Sunday 22 June 2025 19:55:27 +0000 (0:00:00.804) 0:05:37.972 *********** 2025-06-22 20:01:31.262502 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:01:31.262506 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:01:31.262511 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:01:31.262516 | orchestrator | 2025-06-22 20:01:31.262521 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-06-22 20:01:31.262526 | orchestrator | Sunday 22 June 2025 19:55:28 +0000 (0:00:00.735) 0:05:38.707 *********** 2025-06-22 20:01:31.262530 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.262535 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.262540 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.262545 | orchestrator | 2025-06-22 20:01:31.262553 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-06-22 20:01:31.262558 | orchestrator | Sunday 22 June 2025 19:55:28 +0000 (0:00:00.305) 0:05:39.013 *********** 2025-06-22 20:01:31.262563 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:01:31.262568 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:01:31.262576 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:01:31.262581 | orchestrator | 2025-06-22 20:01:31.262586 | orchestrator | TASK [ceph-handler 
: Set_fact handler_osd_status] ****************************** 2025-06-22 20:01:31.262591 | orchestrator | Sunday 22 June 2025 19:55:29 +0000 (0:00:00.666) 0:05:39.679 *********** 2025-06-22 20:01:31.262595 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.262600 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.262605 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.262610 | orchestrator | 2025-06-22 20:01:31.262615 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-06-22 20:01:31.262619 | orchestrator | Sunday 22 June 2025 19:55:29 +0000 (0:00:00.387) 0:05:40.067 *********** 2025-06-22 20:01:31.262624 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.262629 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.262644 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.262649 | orchestrator | 2025-06-22 20:01:31.262654 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-06-22 20:01:31.262659 | orchestrator | Sunday 22 June 2025 19:55:29 +0000 (0:00:00.341) 0:05:40.409 *********** 2025-06-22 20:01:31.262663 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.262668 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.262673 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.262678 | orchestrator | 2025-06-22 20:01:31.262683 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-06-22 20:01:31.262688 | orchestrator | Sunday 22 June 2025 19:55:30 +0000 (0:00:00.301) 0:05:40.710 *********** 2025-06-22 20:01:31.262692 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.262697 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.262702 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.262707 | orchestrator | 2025-06-22 20:01:31.262711 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-06-22 20:01:31.262716 | orchestrator | Sunday 22 June 2025 19:55:30 +0000 (0:00:00.636) 0:05:41.346 *********** 2025-06-22 20:01:31.262721 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.262726 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.262731 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.262736 | orchestrator | 2025-06-22 20:01:31.262740 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-06-22 20:01:31.262745 | orchestrator | Sunday 22 June 2025 19:55:31 +0000 (0:00:00.324) 0:05:41.671 *********** 2025-06-22 20:01:31.262750 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:01:31.262755 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:01:31.262760 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:01:31.262765 | orchestrator | 2025-06-22 20:01:31.262769 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-06-22 20:01:31.262774 | orchestrator | Sunday 22 June 2025 19:55:31 +0000 (0:00:00.346) 0:05:42.017 *********** 2025-06-22 20:01:31.262779 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:01:31.262784 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:01:31.262789 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:01:31.262793 | orchestrator | 2025-06-22 20:01:31.262798 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-06-22 20:01:31.262803 | 
orchestrator | Sunday 22 June 2025 19:55:31 +0000 (0:00:00.301) 0:05:42.319 *********** 2025-06-22 20:01:31.262808 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:01:31.262813 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:01:31.262817 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:01:31.262822 | orchestrator | 2025-06-22 20:01:31.262827 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2025-06-22 20:01:31.262832 | orchestrator | Sunday 22 June 2025 19:55:32 +0000 (0:00:00.700) 0:05:43.020 *********** 2025-06-22 20:01:31.262837 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-06-22 20:01:31.262842 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-22 20:01:31.262846 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-22 20:01:31.262854 | orchestrator | 2025-06-22 20:01:31.262859 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2025-06-22 20:01:31.262864 | orchestrator | Sunday 22 June 2025 19:55:32 +0000 (0:00:00.573) 0:05:43.594 *********** 2025-06-22 20:01:31.262869 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:01:31.262876 | orchestrator | 2025-06-22 20:01:31.262884 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2025-06-22 20:01:31.262893 | orchestrator | Sunday 22 June 2025 19:55:33 +0000 (0:00:00.494) 0:05:44.088 *********** 2025-06-22 20:01:31.262898 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:01:31.262903 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:01:31.262907 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:01:31.262912 | orchestrator | 2025-06-22 20:01:31.262917 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2025-06-22 20:01:31.262922 | orchestrator | Sunday 22 June 2025 19:55:34 +0000 (0:00:00.866) 0:05:44.954 *********** 2025-06-22 20:01:31.262927 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.262931 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.262936 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.262941 | orchestrator | 2025-06-22 20:01:31.262945 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2025-06-22 20:01:31.262950 | orchestrator | Sunday 22 June 2025 19:55:34 +0000 (0:00:00.347) 0:05:45.302 *********** 2025-06-22 20:01:31.262955 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-22 20:01:31.262960 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-22 20:01:31.262965 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-22 20:01:31.262970 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2025-06-22 20:01:31.262974 | orchestrator | 2025-06-22 20:01:31.262979 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2025-06-22 20:01:31.262987 | orchestrator | Sunday 22 June 2025 19:55:45 +0000 (0:00:10.520) 0:05:55.822 *********** 2025-06-22 20:01:31.262992 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:01:31.262997 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:01:31.263002 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:01:31.263006 | orchestrator | 2025-06-22 20:01:31.263011 | orchestrator | TASK 
[ceph-mgr : Get keys from monitors] *************************************** 2025-06-22 20:01:31.263016 | orchestrator | Sunday 22 June 2025 19:55:45 +0000 (0:00:00.406) 0:05:56.229 *********** 2025-06-22 20:01:31.263021 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-06-22 20:01:31.263026 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-06-22 20:01:31.263030 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-06-22 20:01:31.263035 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-06-22 20:01:31.263040 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 20:01:31.263045 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 20:01:31.263050 | orchestrator | 2025-06-22 20:01:31.263064 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2025-06-22 20:01:31.263069 | orchestrator | Sunday 22 June 2025 19:55:48 +0000 (0:00:02.459) 0:05:58.689 *********** 2025-06-22 20:01:31.263074 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-06-22 20:01:31.263079 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-06-22 20:01:31.263083 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-06-22 20:01:31.263088 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-22 20:01:31.263093 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-06-22 20:01:31.263098 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-06-22 20:01:31.263103 | orchestrator | 2025-06-22 20:01:31.263108 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2025-06-22 20:01:31.263118 | orchestrator | Sunday 22 June 2025 19:55:49 +0000 (0:00:01.584) 0:06:00.273 *********** 2025-06-22 20:01:31.263123 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:01:31.263127 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:01:31.263132 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:01:31.263137 | orchestrator | 2025-06-22 20:01:31.263154 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2025-06-22 20:01:31.263159 | orchestrator | Sunday 22 June 2025 19:55:50 +0000 (0:00:00.761) 0:06:01.035 *********** 2025-06-22 20:01:31.263164 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.263169 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.263174 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.263178 | orchestrator | 2025-06-22 20:01:31.263183 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2025-06-22 20:01:31.263188 | orchestrator | Sunday 22 June 2025 19:55:50 +0000 (0:00:00.288) 0:06:01.323 *********** 2025-06-22 20:01:31.263193 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.263198 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.263202 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.263207 | orchestrator | 2025-06-22 20:01:31.263212 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2025-06-22 20:01:31.263217 | orchestrator | Sunday 22 June 2025 19:55:50 +0000 (0:00:00.293) 0:06:01.616 *********** 2025-06-22 20:01:31.263222 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:01:31.263226 | orchestrator | 2025-06-22 20:01:31.263231 | 
orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2025-06-22 20:01:31.263236 | orchestrator | Sunday 22 June 2025 19:55:51 +0000 (0:00:00.753) 0:06:02.370 *********** 2025-06-22 20:01:31.263241 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.263245 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.263250 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.263255 | orchestrator | 2025-06-22 20:01:31.263260 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2025-06-22 20:01:31.263265 | orchestrator | Sunday 22 June 2025 19:55:52 +0000 (0:00:00.315) 0:06:02.686 *********** 2025-06-22 20:01:31.263270 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.263274 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.263279 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.263284 | orchestrator | 2025-06-22 20:01:31.263289 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2025-06-22 20:01:31.263293 | orchestrator | Sunday 22 June 2025 19:55:52 +0000 (0:00:00.302) 0:06:02.988 *********** 2025-06-22 20:01:31.263298 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:01:31.263303 | orchestrator | 2025-06-22 20:01:31.263308 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2025-06-22 20:01:31.263313 | orchestrator | Sunday 22 June 2025 19:55:53 +0000 (0:00:00.749) 0:06:03.738 *********** 2025-06-22 20:01:31.263317 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:01:31.263322 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:01:31.263327 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:01:31.263332 | orchestrator | 2025-06-22 20:01:31.263336 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2025-06-22 20:01:31.263341 | orchestrator | Sunday 22 June 2025 19:55:54 +0000 (0:00:01.244) 0:06:04.983 *********** 2025-06-22 20:01:31.263346 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:01:31.263351 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:01:31.263356 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:01:31.263360 | orchestrator | 2025-06-22 20:01:31.263365 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2025-06-22 20:01:31.263370 | orchestrator | Sunday 22 June 2025 19:55:55 +0000 (0:00:01.236) 0:06:06.219 *********** 2025-06-22 20:01:31.263375 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:01:31.263383 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:01:31.263388 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:01:31.263393 | orchestrator | 2025-06-22 20:01:31.263398 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2025-06-22 20:01:31.263402 | orchestrator | Sunday 22 June 2025 19:55:57 +0000 (0:00:02.158) 0:06:08.378 *********** 2025-06-22 20:01:31.263410 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:01:31.263415 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:01:31.263420 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:01:31.263424 | orchestrator | 2025-06-22 20:01:31.263429 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2025-06-22 20:01:31.263434 | orchestrator | 
Sunday 22 June 2025 19:55:59 +0000 (0:00:02.145) 0:06:10.524 *********** 2025-06-22 20:01:31.263439 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.263444 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.263448 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2025-06-22 20:01:31.263453 | orchestrator | 2025-06-22 20:01:31.263458 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2025-06-22 20:01:31.263463 | orchestrator | Sunday 22 June 2025 19:56:00 +0000 (0:00:00.465) 0:06:10.989 *********** 2025-06-22 20:01:31.263468 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2025-06-22 20:01:31.263483 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2025-06-22 20:01:31.263488 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2025-06-22 20:01:31.263493 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 2025-06-22 20:01:31.263498 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left). 2025-06-22 20:01:31.263502 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (25 retries left). 2025-06-22 20:01:31.263507 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-06-22 20:01:31.263512 | orchestrator | 2025-06-22 20:01:31.263517 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2025-06-22 20:01:31.263522 | orchestrator | Sunday 22 June 2025 19:56:36 +0000 (0:00:36.313) 0:06:47.303 *********** 2025-06-22 20:01:31.263527 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-06-22 20:01:31.263531 | orchestrator | 2025-06-22 20:01:31.263536 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2025-06-22 20:01:31.263541 | orchestrator | Sunday 22 June 2025 19:56:38 +0000 (0:00:01.539) 0:06:48.843 *********** 2025-06-22 20:01:31.263546 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:01:31.263551 | orchestrator | 2025-06-22 20:01:31.263555 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2025-06-22 20:01:31.263560 | orchestrator | Sunday 22 June 2025 19:56:39 +0000 (0:00:00.923) 0:06:49.766 *********** 2025-06-22 20:01:31.263565 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:01:31.263570 | orchestrator | 2025-06-22 20:01:31.263574 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2025-06-22 20:01:31.263579 | orchestrator | Sunday 22 June 2025 19:56:39 +0000 (0:00:00.146) 0:06:49.912 *********** 2025-06-22 20:01:31.263584 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2025-06-22 20:01:31.263589 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2025-06-22 20:01:31.263594 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2025-06-22 20:01:31.263598 | orchestrator | 2025-06-22 20:01:31.263603 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] ************************************** 2025-06-22 20:01:31.263608 | orchestrator | Sunday 22 June 
2025 19:56:45 +0000 (0:00:06.317) 0:06:56.230 *********** 2025-06-22 20:01:31.263617 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2025-06-22 20:01:31.263622 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2025-06-22 20:01:31.263626 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2025-06-22 20:01:31.263631 | orchestrator | skipping: [testbed-node-2] => (item=status)  2025-06-22 20:01:31.263636 | orchestrator | 2025-06-22 20:01:31.263641 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-06-22 20:01:31.263646 | orchestrator | Sunday 22 June 2025 19:56:50 +0000 (0:00:05.035) 0:07:01.266 *********** 2025-06-22 20:01:31.263650 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:01:31.263655 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:01:31.263660 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:01:31.263665 | orchestrator | 2025-06-22 20:01:31.263669 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2025-06-22 20:01:31.263674 | orchestrator | Sunday 22 June 2025 19:56:51 +0000 (0:00:00.983) 0:07:02.250 *********** 2025-06-22 20:01:31.263679 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:01:31.263684 | orchestrator | 2025-06-22 20:01:31.263689 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2025-06-22 20:01:31.263694 | orchestrator | Sunday 22 June 2025 19:56:52 +0000 (0:00:00.559) 0:07:02.809 *********** 2025-06-22 20:01:31.263698 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:01:31.263703 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:01:31.263708 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:01:31.263713 | orchestrator | 2025-06-22 20:01:31.263717 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2025-06-22 20:01:31.263722 | orchestrator | Sunday 22 June 2025 19:56:52 +0000 (0:00:00.325) 0:07:03.135 *********** 2025-06-22 20:01:31.263727 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:01:31.263732 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:01:31.263736 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:01:31.263741 | orchestrator | 2025-06-22 20:01:31.263746 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2025-06-22 20:01:31.263751 | orchestrator | Sunday 22 June 2025 19:56:54 +0000 (0:00:01.801) 0:07:04.937 *********** 2025-06-22 20:01:31.263758 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-06-22 20:01:31.263763 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-06-22 20:01:31.263768 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-06-22 20:01:31.263773 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.263777 | orchestrator | 2025-06-22 20:01:31.263782 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2025-06-22 20:01:31.263787 | orchestrator | Sunday 22 June 2025 19:56:54 +0000 (0:00:00.640) 0:07:05.577 *********** 2025-06-22 20:01:31.263792 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:01:31.263796 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:01:31.263801 | orchestrator | ok: [testbed-node-2] 2025-06-22 
20:01:31.263806 | orchestrator | 2025-06-22 20:01:31.263811 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2025-06-22 20:01:31.263816 | orchestrator | 2025-06-22 20:01:31.263821 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-06-22 20:01:31.263825 | orchestrator | Sunday 22 June 2025 19:56:55 +0000 (0:00:00.551) 0:07:06.129 *********** 2025-06-22 20:01:31.263840 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 20:01:31.263845 | orchestrator | 2025-06-22 20:01:31.263850 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-06-22 20:01:31.263855 | orchestrator | Sunday 22 June 2025 19:56:56 +0000 (0:00:00.626) 0:07:06.756 *********** 2025-06-22 20:01:31.263860 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 20:01:31.263867 | orchestrator | 2025-06-22 20:01:31.263872 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-06-22 20:01:31.263877 | orchestrator | Sunday 22 June 2025 19:56:56 +0000 (0:00:00.470) 0:07:07.227 *********** 2025-06-22 20:01:31.263882 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.263887 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.263892 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.263896 | orchestrator | 2025-06-22 20:01:31.263901 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-06-22 20:01:31.263906 | orchestrator | Sunday 22 June 2025 19:56:56 +0000 (0:00:00.262) 0:07:07.489 *********** 2025-06-22 20:01:31.263911 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:31.263915 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:31.263920 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:31.263925 | orchestrator | 2025-06-22 20:01:31.263930 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-06-22 20:01:31.263935 | orchestrator | Sunday 22 June 2025 19:56:57 +0000 (0:00:00.967) 0:07:08.457 *********** 2025-06-22 20:01:31.263940 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:31.263944 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:31.263949 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:31.263954 | orchestrator | 2025-06-22 20:01:31.263959 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-06-22 20:01:31.263964 | orchestrator | Sunday 22 June 2025 19:56:58 +0000 (0:00:00.777) 0:07:09.235 *********** 2025-06-22 20:01:31.263968 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:31.263973 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:31.263978 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:31.263983 | orchestrator | 2025-06-22 20:01:31.263987 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-06-22 20:01:31.263992 | orchestrator | Sunday 22 June 2025 19:56:59 +0000 (0:00:00.675) 0:07:09.910 *********** 2025-06-22 20:01:31.263997 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.264002 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.264007 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.264011 | orchestrator | 
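The per-daemon "Check for a ... container" tasks in this play probe each node for already-running Ceph daemons so the handlers later know whether a restart is meaningful; they are skipped on hosts where that daemon type is not deployed and report ok elsewhere. A minimal sketch of such a probe plus the corresponding fact, assuming podman as the container engine and a placeholder name filter (not the ceph-ansible source):

- name: Check for an osd container (illustrative sketch)
  # Assumption: the engine binary and the "ceph-osd" name filter are
  # placeholders for this example.
  ansible.builtin.command: podman ps -q --filter name=ceph-osd
  register: osd_container_check
  changed_when: false
  failed_when: false

- name: Set_fact handler_osd_status (illustrative sketch)
  ansible.builtin.set_fact:
    handler_osd_status: "{{ osd_container_check.stdout | length > 0 }}"

changed_when: false keeps the probe from inflating the change count, and failed_when: false lets the play continue on hosts where no such container exists yet.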
2025-06-22 20:01:31.264016 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-06-22 20:01:31.264021 | orchestrator | Sunday 22 June 2025 19:56:59 +0000 (0:00:00.246) 0:07:10.156 *********** 2025-06-22 20:01:31.264026 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.264031 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.264036 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.264040 | orchestrator | 2025-06-22 20:01:31.264045 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-06-22 20:01:31.264050 | orchestrator | Sunday 22 June 2025 19:57:00 +0000 (0:00:00.491) 0:07:10.647 *********** 2025-06-22 20:01:31.264055 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.264060 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.264064 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.264069 | orchestrator | 2025-06-22 20:01:31.264074 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-06-22 20:01:31.264079 | orchestrator | Sunday 22 June 2025 19:57:00 +0000 (0:00:00.332) 0:07:10.980 *********** 2025-06-22 20:01:31.264084 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:31.264089 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:31.264093 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:31.264098 | orchestrator | 2025-06-22 20:01:31.264103 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-06-22 20:01:31.264108 | orchestrator | Sunday 22 June 2025 19:57:01 +0000 (0:00:00.759) 0:07:11.740 *********** 2025-06-22 20:01:31.264112 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:31.264117 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:31.264122 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:31.264127 | orchestrator | 2025-06-22 20:01:31.264132 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-06-22 20:01:31.264150 | orchestrator | Sunday 22 June 2025 19:57:01 +0000 (0:00:00.738) 0:07:12.479 *********** 2025-06-22 20:01:31.264155 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.264160 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.264164 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.264169 | orchestrator | 2025-06-22 20:01:31.264174 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-06-22 20:01:31.264179 | orchestrator | Sunday 22 June 2025 19:57:02 +0000 (0:00:00.629) 0:07:13.108 *********** 2025-06-22 20:01:31.264184 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.264188 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.264193 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.264198 | orchestrator | 2025-06-22 20:01:31.264205 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-06-22 20:01:31.264210 | orchestrator | Sunday 22 June 2025 19:57:02 +0000 (0:00:00.318) 0:07:13.426 *********** 2025-06-22 20:01:31.264215 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:31.264220 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:31.264225 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:31.264230 | orchestrator | 2025-06-22 20:01:31.264234 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] 
****************************** 2025-06-22 20:01:31.264239 | orchestrator | Sunday 22 June 2025 19:57:03 +0000 (0:00:00.336) 0:07:13.763 *********** 2025-06-22 20:01:31.264244 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:31.264249 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:31.264254 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:31.264258 | orchestrator | 2025-06-22 20:01:31.264263 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-06-22 20:01:31.264268 | orchestrator | Sunday 22 June 2025 19:57:03 +0000 (0:00:00.317) 0:07:14.081 *********** 2025-06-22 20:01:31.264273 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:31.264278 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:31.264285 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:31.264290 | orchestrator | 2025-06-22 20:01:31.264295 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-06-22 20:01:31.264300 | orchestrator | Sunday 22 June 2025 19:57:04 +0000 (0:00:00.651) 0:07:14.733 *********** 2025-06-22 20:01:31.264304 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.264309 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.264314 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.264319 | orchestrator | 2025-06-22 20:01:31.264323 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-06-22 20:01:31.264328 | orchestrator | Sunday 22 June 2025 19:57:04 +0000 (0:00:00.322) 0:07:15.055 *********** 2025-06-22 20:01:31.264333 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.264338 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.264343 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.264347 | orchestrator | 2025-06-22 20:01:31.264352 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-06-22 20:01:31.264357 | orchestrator | Sunday 22 June 2025 19:57:04 +0000 (0:00:00.359) 0:07:15.414 *********** 2025-06-22 20:01:31.264362 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.264367 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.264371 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.264376 | orchestrator | 2025-06-22 20:01:31.264381 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-06-22 20:01:31.264386 | orchestrator | Sunday 22 June 2025 19:57:05 +0000 (0:00:00.325) 0:07:15.740 *********** 2025-06-22 20:01:31.264391 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:31.264395 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:31.264400 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:31.264405 | orchestrator | 2025-06-22 20:01:31.264410 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-06-22 20:01:31.264415 | orchestrator | Sunday 22 June 2025 19:57:05 +0000 (0:00:00.613) 0:07:16.354 *********** 2025-06-22 20:01:31.264423 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:31.264427 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:31.264432 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:31.264437 | orchestrator | 2025-06-22 20:01:31.264442 | orchestrator | TASK [ceph-osd : Set_fact add_osd] ********************************************* 2025-06-22 20:01:31.264447 | orchestrator | Sunday 22 June 2025 19:57:06 +0000 (0:00:00.546) 
0:07:16.901 *********** 2025-06-22 20:01:31.264451 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:31.264456 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:31.264461 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:31.264466 | orchestrator | 2025-06-22 20:01:31.264471 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] ********************************** 2025-06-22 20:01:31.264475 | orchestrator | Sunday 22 June 2025 19:57:06 +0000 (0:00:00.317) 0:07:17.218 *********** 2025-06-22 20:01:31.264480 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-06-22 20:01:31.264485 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-22 20:01:31.264490 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-22 20:01:31.264495 | orchestrator | 2025-06-22 20:01:31.264500 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ****************************** 2025-06-22 20:01:31.264504 | orchestrator | Sunday 22 June 2025 19:57:07 +0000 (0:00:00.892) 0:07:18.110 *********** 2025-06-22 20:01:31.264509 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 20:01:31.264514 | orchestrator | 2025-06-22 20:01:31.264519 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] ********************************** 2025-06-22 20:01:31.264523 | orchestrator | Sunday 22 June 2025 19:57:08 +0000 (0:00:00.804) 0:07:18.915 *********** 2025-06-22 20:01:31.264528 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.264533 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.264538 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.264543 | orchestrator | 2025-06-22 20:01:31.264547 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] ********************************* 2025-06-22 20:01:31.264552 | orchestrator | Sunday 22 June 2025 19:57:08 +0000 (0:00:00.315) 0:07:19.231 *********** 2025-06-22 20:01:31.264557 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.264562 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.264567 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.264571 | orchestrator | 2025-06-22 20:01:31.264576 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] ******************************* 2025-06-22 20:01:31.264581 | orchestrator | Sunday 22 June 2025 19:57:08 +0000 (0:00:00.302) 0:07:19.533 *********** 2025-06-22 20:01:31.264586 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:31.264590 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:31.264595 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:31.264600 | orchestrator | 2025-06-22 20:01:31.264605 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] ********************************** 2025-06-22 20:01:31.264610 | orchestrator | Sunday 22 June 2025 19:57:09 +0000 (0:00:00.965) 0:07:20.499 *********** 2025-06-22 20:01:31.264617 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:31.264622 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:31.264627 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:31.264631 | orchestrator | 2025-06-22 20:01:31.264636 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ******************************** 2025-06-22 20:01:31.264641 | orchestrator | Sunday 22 June 2025 19:57:10 +0000 (0:00:00.352) 0:07:20.851 
*********** 2025-06-22 20:01:31.264646 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-06-22 20:01:31.264651 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-06-22 20:01:31.264656 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True}) 2025-06-22 20:01:31.264664 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-06-22 20:01:31.264673 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-06-22 20:01:31.264678 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-06-22 20:01:31.264683 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-06-22 20:01:31.264688 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-06-22 20:01:31.264692 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-06-22 20:01:31.264697 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859}) 2025-06-22 20:01:31.264702 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-06-22 20:01:31.264707 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-06-22 20:01:31.264711 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0}) 2025-06-22 20:01:31.264716 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10}) 2025-06-22 20:01:31.264721 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'}) 2025-06-22 20:01:31.264726 | orchestrator | 2025-06-22 20:01:31.264731 | orchestrator | TASK [ceph-osd : Install dependencies] ***************************************** 2025-06-22 20:01:31.264736 | orchestrator | Sunday 22 June 2025 19:57:14 +0000 (0:00:04.163) 0:07:25.015 *********** 2025-06-22 20:01:31.264740 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.264745 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.264750 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.264755 | orchestrator | 2025-06-22 20:01:31.264759 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] ************************************* 2025-06-22 20:01:31.264764 | orchestrator | Sunday 22 June 2025 19:57:14 +0000 (0:00:00.279) 0:07:25.294 *********** 2025-06-22 20:01:31.264769 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 20:01:31.264774 | orchestrator | 2025-06-22 20:01:31.264779 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] ********************* 2025-06-22 20:01:31.264783 | orchestrator | Sunday 22 June 2025 19:57:15 +0000 (0:00:00.641) 0:07:25.936 *********** 2025-06-22 20:01:31.264788 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/) 2025-06-22 20:01:31.264793 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/) 2025-06-22 20:01:31.264798 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/) 2025-06-22 20:01:31.264802 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/) 
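The "Apply operating system tuning" task above writes the kernel parameters Ceph OSD hosts typically need before the OSDs are created. The keys and values below are the ones reported in this run; a minimal sketch using ansible.posix.sysctl, with a placeholder drop-in file name (ceph-ansible feeds the loop from its own defaults):

- name: Apply operating system tuning (values from this run)
  # Assumption: the sysctl.d file name is a placeholder for this example.
  ansible.posix.sysctl:
    name: "{{ item.name }}"
    value: "{{ item.value }}"
    state: present
    sysctl_file: /etc/sysctl.d/99-ceph-osd.conf
  loop:
    - { name: fs.aio-max-nr, value: "1048576" }
    - { name: fs.file-max, value: "26234859" }
    - { name: vm.zone_reclaim_mode, value: "0" }
    - { name: vm.swappiness, value: "10" }
    - { name: vm.min_free_kbytes, value: "67584" }

The vm.min_free_kbytes value matches what the play derived on these nodes (67584); raising it gives the kernel more headroom for atomic allocations under the memory pressure OSDs can generate.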
2025-06-22 20:01:31.264807 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/) 2025-06-22 20:01:31.264812 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/) 2025-06-22 20:01:31.264817 | orchestrator | 2025-06-22 20:01:31.264822 | orchestrator | TASK [ceph-osd : Get keys from monitors] *************************************** 2025-06-22 20:01:31.264827 | orchestrator | Sunday 22 June 2025 19:57:16 +0000 (0:00:01.038) 0:07:26.974 *********** 2025-06-22 20:01:31.264831 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 20:01:31.264836 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-06-22 20:01:31.264841 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-06-22 20:01:31.264846 | orchestrator | 2025-06-22 20:01:31.264851 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] *********************************** 2025-06-22 20:01:31.264856 | orchestrator | Sunday 22 June 2025 19:57:18 +0000 (0:00:02.179) 0:07:29.154 *********** 2025-06-22 20:01:31.264860 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-06-22 20:01:31.264865 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-06-22 20:01:31.264870 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:01:31.264878 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-06-22 20:01:31.264883 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-06-22 20:01:31.264888 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:01:31.264892 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-06-22 20:01:31.264897 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-06-22 20:01:31.264902 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:01:31.264907 | orchestrator | 2025-06-22 20:01:31.264911 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************ 2025-06-22 20:01:31.264916 | orchestrator | Sunday 22 June 2025 19:57:20 +0000 (0:00:01.509) 0:07:30.663 *********** 2025-06-22 20:01:31.264921 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-06-22 20:01:31.264926 | orchestrator | 2025-06-22 20:01:31.264931 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ****************************** 2025-06-22 20:01:31.264936 | orchestrator | Sunday 22 June 2025 19:57:22 +0000 (0:00:02.143) 0:07:32.807 *********** 2025-06-22 20:01:31.264940 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 20:01:31.264945 | orchestrator | 2025-06-22 20:01:31.264950 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] ******************************* 2025-06-22 20:01:31.264955 | orchestrator | Sunday 22 June 2025 19:57:22 +0000 (0:00:00.561) 0:07:33.368 *********** 2025-06-22 20:01:31.264960 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-420ac1c2-ff56-5c56-8dd6-abe068aa03ad', 'data_vg': 'ceph-420ac1c2-ff56-5c56-8dd6-abe068aa03ad'}) 2025-06-22 20:01:31.264965 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-3108d6cc-64da-58c4-8e22-262ec3caa421', 'data_vg': 'ceph-3108d6cc-64da-58c4-8e22-262ec3caa421'}) 2025-06-22 20:01:31.264973 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-ffee4eed-4396-59ea-b922-2a73e3bf4ca0', 'data_vg': 'ceph-ffee4eed-4396-59ea-b922-2a73e3bf4ca0'}) 2025-06-22 20:01:31.264978 | orchestrator | changed: 
[testbed-node-5] => (item={'data': 'osd-block-39fb6ae0-c3e6-59b9-8b54-9251bb7c5136', 'data_vg': 'ceph-39fb6ae0-c3e6-59b9-8b54-9251bb7c5136'}) 2025-06-22 20:01:31.264983 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-a67f9737-0c9f-5177-b2d5-f4c811291d8a', 'data_vg': 'ceph-a67f9737-0c9f-5177-b2d5-f4c811291d8a'}) 2025-06-22 20:01:31.264988 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-21b37dc5-48e7-5a6c-9835-121dab35d047', 'data_vg': 'ceph-21b37dc5-48e7-5a6c-9835-121dab35d047'}) 2025-06-22 20:01:31.264993 | orchestrator | 2025-06-22 20:01:31.264998 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************ 2025-06-22 20:01:31.265002 | orchestrator | Sunday 22 June 2025 19:58:06 +0000 (0:00:44.215) 0:08:17.583 *********** 2025-06-22 20:01:31.265007 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.265012 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.265017 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.265022 | orchestrator | 2025-06-22 20:01:31.265026 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] ********************************* 2025-06-22 20:01:31.265031 | orchestrator | Sunday 22 June 2025 19:58:07 +0000 (0:00:00.425) 0:08:18.009 *********** 2025-06-22 20:01:31.265036 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 20:01:31.265041 | orchestrator | 2025-06-22 20:01:31.265046 | orchestrator | TASK [ceph-osd : Get osd ids] ************************************************** 2025-06-22 20:01:31.265069 | orchestrator | Sunday 22 June 2025 19:58:07 +0000 (0:00:00.460) 0:08:18.469 *********** 2025-06-22 20:01:31.265074 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:31.265079 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:31.265084 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:31.265089 | orchestrator | 2025-06-22 20:01:31.265094 | orchestrator | TASK [ceph-osd : Collect osd ids] ********************************************** 2025-06-22 20:01:31.265098 | orchestrator | Sunday 22 June 2025 19:58:08 +0000 (0:00:00.617) 0:08:19.087 *********** 2025-06-22 20:01:31.265109 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:31.265114 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:31.265118 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:31.265123 | orchestrator | 2025-06-22 20:01:31.265128 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************ 2025-06-22 20:01:31.265133 | orchestrator | Sunday 22 June 2025 19:58:11 +0000 (0:00:02.913) 0:08:22.000 *********** 2025-06-22 20:01:31.265161 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 20:01:31.265166 | orchestrator | 2025-06-22 20:01:31.265171 | orchestrator | TASK [ceph-osd : Generate systemd unit file] *********************************** 2025-06-22 20:01:31.265176 | orchestrator | Sunday 22 June 2025 19:58:11 +0000 (0:00:00.543) 0:08:22.544 *********** 2025-06-22 20:01:31.265181 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:01:31.265186 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:01:31.265191 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:01:31.265195 | orchestrator | 2025-06-22 20:01:31.265200 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************ 2025-06-22 20:01:31.265205 
| orchestrator | Sunday 22 June 2025 19:58:13 +0000 (0:00:01.286) 0:08:23.830 *********** 2025-06-22 20:01:31.265210 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:01:31.265215 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:01:31.265219 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:01:31.265224 | orchestrator | 2025-06-22 20:01:31.265229 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] *************************************** 2025-06-22 20:01:31.265234 | orchestrator | Sunday 22 June 2025 19:58:14 +0000 (0:00:01.420) 0:08:25.251 *********** 2025-06-22 20:01:31.265239 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:01:31.265243 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:01:31.265248 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:01:31.265253 | orchestrator | 2025-06-22 20:01:31.265258 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] ************* 2025-06-22 20:01:31.265262 | orchestrator | Sunday 22 June 2025 19:58:16 +0000 (0:00:01.818) 0:08:27.070 *********** 2025-06-22 20:01:31.265267 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.265272 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.265277 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.265282 | orchestrator | 2025-06-22 20:01:31.265286 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] *********************** 2025-06-22 20:01:31.265291 | orchestrator | Sunday 22 June 2025 19:58:16 +0000 (0:00:00.363) 0:08:27.433 *********** 2025-06-22 20:01:31.265296 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.265301 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.265305 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.265310 | orchestrator | 2025-06-22 20:01:31.265315 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] ********* 2025-06-22 20:01:31.265323 | orchestrator | Sunday 22 June 2025 19:58:17 +0000 (0:00:00.314) 0:08:27.748 *********** 2025-06-22 20:01:31.265328 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-06-22 20:01:31.265333 | orchestrator | ok: [testbed-node-4] => (item=1) 2025-06-22 20:01:31.265338 | orchestrator | ok: [testbed-node-5] => (item=4) 2025-06-22 20:01:31.265342 | orchestrator | ok: [testbed-node-3] => (item=5) 2025-06-22 20:01:31.265347 | orchestrator | ok: [testbed-node-4] => (item=3) 2025-06-22 20:01:31.265352 | orchestrator | ok: [testbed-node-5] => (item=2) 2025-06-22 20:01:31.265357 | orchestrator | 2025-06-22 20:01:31.265361 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] ***************** 2025-06-22 20:01:31.265366 | orchestrator | Sunday 22 June 2025 19:58:18 +0000 (0:00:01.273) 0:08:29.022 *********** 2025-06-22 20:01:31.265371 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-06-22 20:01:31.265376 | orchestrator | changed: [testbed-node-4] => (item=1) 2025-06-22 20:01:31.265381 | orchestrator | changed: [testbed-node-5] => (item=4) 2025-06-22 20:01:31.265385 | orchestrator | changed: [testbed-node-3] => (item=5) 2025-06-22 20:01:31.265400 | orchestrator | changed: [testbed-node-5] => (item=2) 2025-06-22 20:01:31.265405 | orchestrator | changed: [testbed-node-4] => (item=3) 2025-06-22 20:01:31.265409 | orchestrator | 2025-06-22 20:01:31.265414 | orchestrator | TASK [ceph-osd : Systemd start osd] ******************************************** 2025-06-22 20:01:31.265419 | orchestrator | Sunday 22 June 2025 19:58:20 +0000 
(0:00:02.189) 0:08:31.211 *********** 2025-06-22 20:01:31.265424 | orchestrator | changed: [testbed-node-3] => (item=0) 2025-06-22 20:01:31.265429 | orchestrator | changed: [testbed-node-4] => (item=1) 2025-06-22 20:01:31.265433 | orchestrator | changed: [testbed-node-5] => (item=4) 2025-06-22 20:01:31.265438 | orchestrator | changed: [testbed-node-3] => (item=5) 2025-06-22 20:01:31.265443 | orchestrator | changed: [testbed-node-4] => (item=3) 2025-06-22 20:01:31.265448 | orchestrator | changed: [testbed-node-5] => (item=2) 2025-06-22 20:01:31.265453 | orchestrator | 2025-06-22 20:01:31.265457 | orchestrator | TASK [ceph-osd : Unset noup flag] ********************************************** 2025-06-22 20:01:31.265462 | orchestrator | Sunday 22 June 2025 19:58:24 +0000 (0:00:03.782) 0:08:34.994 *********** 2025-06-22 20:01:31.265467 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.265472 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.265477 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-06-22 20:01:31.265481 | orchestrator | 2025-06-22 20:01:31.265486 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************ 2025-06-22 20:01:31.265491 | orchestrator | Sunday 22 June 2025 19:58:27 +0000 (0:00:02.652) 0:08:37.646 *********** 2025-06-22 20:01:31.265496 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.265501 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.265505 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left). 2025-06-22 20:01:31.265510 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-06-22 20:01:31.265515 | orchestrator | 2025-06-22 20:01:31.265520 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2025-06-22 20:01:31.265525 | orchestrator | Sunday 22 June 2025 19:58:40 +0000 (0:00:13.002) 0:08:50.648 *********** 2025-06-22 20:01:31.265529 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.265534 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.265539 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.265544 | orchestrator | 2025-06-22 20:01:31.265548 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-06-22 20:01:31.265553 | orchestrator | Sunday 22 June 2025 19:58:40 +0000 (0:00:00.858) 0:08:51.507 *********** 2025-06-22 20:01:31.265558 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.265563 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.265568 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.265572 | orchestrator | 2025-06-22 20:01:31.265577 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2025-06-22 20:01:31.265582 | orchestrator | Sunday 22 June 2025 19:58:41 +0000 (0:00:00.555) 0:08:52.063 *********** 2025-06-22 20:01:31.265587 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 20:01:31.265592 | orchestrator | 2025-06-22 20:01:31.265597 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2025-06-22 20:01:31.265601 | orchestrator | Sunday 22 June 2025 19:58:41 +0000 (0:00:00.541) 0:08:52.605 *********** 2025-06-22 20:01:31.265606 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  
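The OSD bring-up above follows a fixed pattern: the noup flag is set on the cluster, ceph-volume creates one bluestore OSD per prepared logical volume, per-OSD systemd units are generated and started, and then noup is removed and the play waits (with up to 60 retries) for all OSDs to report up. A hedged sketch of the underlying ceph commands, written as plain Ansible tasks and using one of the VG/LV pairs reported in the log; the role itself drives this through its own modules and containerized units rather than raw commands:

  # Sketch, not the role's actual implementation.
  - name: Hold new OSDs down until all of them are prepared
    ansible.builtin.command: ceph osd set noup
    delegate_to: testbed-node-0
    run_once: true

  - name: Create a bluestore OSD from a pre-created logical volume
    ansible.builtin.command: >
      ceph-volume lvm create --bluestore
      --data ceph-420ac1c2-ff56-5c56-8dd6-abe068aa03ad/osd-block-420ac1c2-ff56-5c56-8dd6-abe068aa03ad

  - name: Let the new OSDs be marked up again
    ansible.builtin.command: ceph osd unset noup
    delegate_to: testbed-node-0
    run_once: true

The final wait is typically a retried cluster status query (for example via ceph osd stat) until the number of up OSDs matches the expected count, which is the shape of the "Wait for all osd to be up" task above.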
2025-06-22 20:01:31.265611 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-22 20:01:31.265616 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-22 20:01:31.265621 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.265625 | orchestrator | 2025-06-22 20:01:31.265630 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2025-06-22 20:01:31.265635 | orchestrator | Sunday 22 June 2025 19:58:42 +0000 (0:00:00.371) 0:08:52.976 *********** 2025-06-22 20:01:31.265643 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.265648 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.265653 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.265658 | orchestrator | 2025-06-22 20:01:31.265662 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2025-06-22 20:01:31.265667 | orchestrator | Sunday 22 June 2025 19:58:42 +0000 (0:00:00.353) 0:08:53.329 *********** 2025-06-22 20:01:31.265672 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.265677 | orchestrator | 2025-06-22 20:01:31.265682 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2025-06-22 20:01:31.265686 | orchestrator | Sunday 22 June 2025 19:58:42 +0000 (0:00:00.207) 0:08:53.536 *********** 2025-06-22 20:01:31.265691 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.265695 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.265700 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.265704 | orchestrator | 2025-06-22 20:01:31.265709 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-06-22 20:01:31.265716 | orchestrator | Sunday 22 June 2025 19:58:43 +0000 (0:00:00.555) 0:08:54.092 *********** 2025-06-22 20:01:31.265721 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.265725 | orchestrator | 2025-06-22 20:01:31.265730 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2025-06-22 20:01:31.265734 | orchestrator | Sunday 22 June 2025 19:58:43 +0000 (0:00:00.218) 0:08:54.310 *********** 2025-06-22 20:01:31.265739 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.265743 | orchestrator | 2025-06-22 20:01:31.265748 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-06-22 20:01:31.265753 | orchestrator | Sunday 22 June 2025 19:58:43 +0000 (0:00:00.218) 0:08:54.529 *********** 2025-06-22 20:01:31.265757 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.265762 | orchestrator | 2025-06-22 20:01:31.265766 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2025-06-22 20:01:31.265771 | orchestrator | Sunday 22 June 2025 19:58:44 +0000 (0:00:00.156) 0:08:54.685 *********** 2025-06-22 20:01:31.265775 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.265780 | orchestrator | 2025-06-22 20:01:31.265787 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-06-22 20:01:31.265792 | orchestrator | Sunday 22 June 2025 19:58:44 +0000 (0:00:00.219) 0:08:54.905 *********** 2025-06-22 20:01:31.265796 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.265801 | orchestrator | 2025-06-22 20:01:31.265805 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] 
******************* 2025-06-22 20:01:31.265810 | orchestrator | Sunday 22 June 2025 19:58:44 +0000 (0:00:00.237) 0:08:55.143 *********** 2025-06-22 20:01:31.265815 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-22 20:01:31.265819 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-22 20:01:31.265824 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-22 20:01:31.265828 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.265833 | orchestrator | 2025-06-22 20:01:31.265837 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-06-22 20:01:31.265842 | orchestrator | Sunday 22 June 2025 19:58:44 +0000 (0:00:00.377) 0:08:55.520 *********** 2025-06-22 20:01:31.265847 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.265851 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.265856 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.265860 | orchestrator | 2025-06-22 20:01:31.265865 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-06-22 20:01:31.265869 | orchestrator | Sunday 22 June 2025 19:58:45 +0000 (0:00:00.326) 0:08:55.847 *********** 2025-06-22 20:01:31.265874 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.265878 | orchestrator | 2025-06-22 20:01:31.265883 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2025-06-22 20:01:31.265891 | orchestrator | Sunday 22 June 2025 19:58:46 +0000 (0:00:00.791) 0:08:56.638 *********** 2025-06-22 20:01:31.265895 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.265900 | orchestrator | 2025-06-22 20:01:31.265904 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2025-06-22 20:01:31.265909 | orchestrator | 2025-06-22 20:01:31.265914 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-06-22 20:01:31.265918 | orchestrator | Sunday 22 June 2025 19:58:46 +0000 (0:00:00.653) 0:08:57.291 *********** 2025-06-22 20:01:31.265923 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:01:31.265928 | orchestrator | 2025-06-22 20:01:31.265932 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-06-22 20:01:31.265937 | orchestrator | Sunday 22 June 2025 19:58:47 +0000 (0:00:01.155) 0:08:58.446 *********** 2025-06-22 20:01:31.265942 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:01:31.265946 | orchestrator | 2025-06-22 20:01:31.265951 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-06-22 20:01:31.265955 | orchestrator | Sunday 22 June 2025 19:58:49 +0000 (0:00:01.214) 0:08:59.660 *********** 2025-06-22 20:01:31.265960 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.265964 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.265969 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.265973 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:01:31.265978 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:01:31.265982 | orchestrator | 
ok: [testbed-node-2] 2025-06-22 20:01:31.265987 | orchestrator | 2025-06-22 20:01:31.265992 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-06-22 20:01:31.265996 | orchestrator | Sunday 22 June 2025 19:58:50 +0000 (0:00:01.263) 0:09:00.923 *********** 2025-06-22 20:01:31.266001 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.266005 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:31.266010 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.266039 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:31.266043 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.266048 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:31.266052 | orchestrator | 2025-06-22 20:01:31.266057 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-06-22 20:01:31.266062 | orchestrator | Sunday 22 June 2025 19:58:51 +0000 (0:00:00.719) 0:09:01.643 *********** 2025-06-22 20:01:31.266066 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.266071 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.266075 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:31.266080 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:31.266084 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:31.266089 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.266093 | orchestrator | 2025-06-22 20:01:31.266098 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-06-22 20:01:31.266102 | orchestrator | Sunday 22 June 2025 19:58:51 +0000 (0:00:00.840) 0:09:02.484 *********** 2025-06-22 20:01:31.266107 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.266111 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.266118 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:31.266123 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:31.266128 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.266132 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:31.266136 | orchestrator | 2025-06-22 20:01:31.266152 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-06-22 20:01:31.266157 | orchestrator | Sunday 22 June 2025 19:58:52 +0000 (0:00:00.739) 0:09:03.224 *********** 2025-06-22 20:01:31.266161 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.266170 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.266175 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.266180 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:01:31.266184 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:01:31.266189 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:01:31.266193 | orchestrator | 2025-06-22 20:01:31.266198 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-06-22 20:01:31.266202 | orchestrator | Sunday 22 June 2025 19:58:53 +0000 (0:00:01.232) 0:09:04.457 *********** 2025-06-22 20:01:31.266207 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.266211 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.266219 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.266224 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.266228 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.266233 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.266237 | orchestrator | 
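The ceph-crash play starts, as every play here does, with ceph-handler probing which Ceph containers already run on each host; the results later feed the Set_fact handler_*_status tasks that gate the restart handlers. A hedged sketch of what such a probe can look like, assuming docker as the container runtime (the actual command and name filter used by the role are not shown in this log):

  # Sketch of a container-presence probe; runtime binary and filter are assumptions.
  - name: Check for a mon container
    ansible.builtin.command: docker ps -q --filter "name=ceph-mon-{{ ansible_facts['hostname'] }}"
    register: ceph_mon_container_stat
    changed_when: false
    failed_when: false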
2025-06-22 20:01:31.266242 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-06-22 20:01:31.266246 | orchestrator | Sunday 22 June 2025 19:58:54 +0000 (0:00:00.650) 0:09:05.107 *********** 2025-06-22 20:01:31.266251 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.266256 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.266260 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.266265 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.266269 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.266274 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.266278 | orchestrator | 2025-06-22 20:01:31.266283 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-06-22 20:01:31.266288 | orchestrator | Sunday 22 June 2025 19:58:55 +0000 (0:00:00.790) 0:09:05.898 *********** 2025-06-22 20:01:31.266292 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:31.266297 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:31.266301 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:31.266306 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:01:31.266310 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:01:31.266315 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:01:31.266319 | orchestrator | 2025-06-22 20:01:31.266324 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-06-22 20:01:31.266329 | orchestrator | Sunday 22 June 2025 19:58:56 +0000 (0:00:01.100) 0:09:06.998 *********** 2025-06-22 20:01:31.266333 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:31.266338 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:31.266342 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:31.266347 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:01:31.266351 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:01:31.266356 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:01:31.266360 | orchestrator | 2025-06-22 20:01:31.266365 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-06-22 20:01:31.266369 | orchestrator | Sunday 22 June 2025 19:58:57 +0000 (0:00:01.467) 0:09:08.465 *********** 2025-06-22 20:01:31.266374 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.266379 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.266383 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.266388 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.266392 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.266397 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.266401 | orchestrator | 2025-06-22 20:01:31.266406 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-06-22 20:01:31.266411 | orchestrator | Sunday 22 June 2025 19:58:58 +0000 (0:00:00.581) 0:09:09.047 *********** 2025-06-22 20:01:31.266415 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.266420 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.266424 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.266429 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:01:31.266433 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:01:31.266442 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:01:31.266446 | orchestrator | 2025-06-22 20:01:31.266451 | orchestrator | TASK [ceph-handler : Set_fact 
handler_osd_status] ****************************** 2025-06-22 20:01:31.266455 | orchestrator | Sunday 22 June 2025 19:58:59 +0000 (0:00:00.853) 0:09:09.901 *********** 2025-06-22 20:01:31.266460 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:31.266465 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:31.266469 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:31.266474 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.266478 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.266483 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.266487 | orchestrator | 2025-06-22 20:01:31.266492 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-06-22 20:01:31.266496 | orchestrator | Sunday 22 June 2025 19:58:59 +0000 (0:00:00.650) 0:09:10.551 *********** 2025-06-22 20:01:31.266501 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:31.266505 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:31.266510 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:31.266514 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.266519 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.266523 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.266528 | orchestrator | 2025-06-22 20:01:31.266533 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-06-22 20:01:31.266537 | orchestrator | Sunday 22 June 2025 19:59:00 +0000 (0:00:00.832) 0:09:11.384 *********** 2025-06-22 20:01:31.266542 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:31.266546 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:31.266551 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:31.266555 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.266560 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.266564 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.266569 | orchestrator | 2025-06-22 20:01:31.266573 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-06-22 20:01:31.266578 | orchestrator | Sunday 22 June 2025 19:59:01 +0000 (0:00:00.660) 0:09:12.045 *********** 2025-06-22 20:01:31.266583 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.266590 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.266594 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.266599 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.266603 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.266608 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.266613 | orchestrator | 2025-06-22 20:01:31.266617 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-06-22 20:01:31.266622 | orchestrator | Sunday 22 June 2025 19:59:02 +0000 (0:00:00.789) 0:09:12.834 *********** 2025-06-22 20:01:31.266626 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.266631 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.266635 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.266640 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:31.266644 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:31.266649 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:31.266653 | orchestrator | 2025-06-22 20:01:31.266658 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] 
****************************** 2025-06-22 20:01:31.266662 | orchestrator | Sunday 22 June 2025 19:59:02 +0000 (0:00:00.598) 0:09:13.433 *********** 2025-06-22 20:01:31.266670 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.266674 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.266679 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.266683 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:01:31.266688 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:01:31.266692 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:01:31.266697 | orchestrator | 2025-06-22 20:01:31.266702 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-06-22 20:01:31.266706 | orchestrator | Sunday 22 June 2025 19:59:03 +0000 (0:00:00.823) 0:09:14.257 *********** 2025-06-22 20:01:31.266714 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:31.266719 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:31.266723 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:31.266728 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:01:31.266733 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:01:31.266737 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:01:31.266742 | orchestrator | 2025-06-22 20:01:31.266746 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-06-22 20:01:31.266751 | orchestrator | Sunday 22 June 2025 19:59:04 +0000 (0:00:00.609) 0:09:14.866 *********** 2025-06-22 20:01:31.266755 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:31.266760 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:31.266764 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:31.266769 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:01:31.266773 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:01:31.266778 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:01:31.266782 | orchestrator | 2025-06-22 20:01:31.266787 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2025-06-22 20:01:31.266792 | orchestrator | Sunday 22 June 2025 19:59:05 +0000 (0:00:01.248) 0:09:16.115 *********** 2025-06-22 20:01:31.266796 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-06-22 20:01:31.266801 | orchestrator | 2025-06-22 20:01:31.266805 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2025-06-22 20:01:31.266810 | orchestrator | Sunday 22 June 2025 19:59:09 +0000 (0:00:04.147) 0:09:20.262 *********** 2025-06-22 20:01:31.266815 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-06-22 20:01:31.266819 | orchestrator | 2025-06-22 20:01:31.266824 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2025-06-22 20:01:31.266828 | orchestrator | Sunday 22 June 2025 19:59:11 +0000 (0:00:02.034) 0:09:22.297 *********** 2025-06-22 20:01:31.266833 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:01:31.266837 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:01:31.266842 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:01:31.266847 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:01:31.266851 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:01:31.266856 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:01:31.266860 | orchestrator | 2025-06-22 20:01:31.266865 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 
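The crash collector needs its own CephX identity: the "Create client.crash keyring" task above is delegated to the first monitor and the resulting key is then copied out to every node. A minimal sketch, assuming the standard crash-module capabilities (mon/mgr "profile crash") and admin access on the monitor; the role performs this through its own key-management module rather than a raw command:

  # Sketch only; capabilities follow the upstream crash-module convention.
  - name: Create client.crash keyring
    ansible.builtin.command: >
      ceph auth get-or-create client.crash
      mon 'profile crash' mgr 'profile crash'
      -o /etc/ceph/ceph.client.crash.keyring
    delegate_to: testbed-node-0
    run_once: true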
2025-06-22 20:01:31.266870 | orchestrator | Sunday 22 June 2025 19:59:13 +0000 (0:00:01.928) 0:09:24.226 *********** 2025-06-22 20:01:31.266874 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:01:31.266879 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:01:31.266883 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:01:31.266888 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:01:31.266892 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:01:31.266897 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:01:31.266901 | orchestrator | 2025-06-22 20:01:31.266906 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 2025-06-22 20:01:31.266910 | orchestrator | Sunday 22 June 2025 19:59:14 +0000 (0:00:00.959) 0:09:25.185 *********** 2025-06-22 20:01:31.266915 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:01:31.266920 | orchestrator | 2025-06-22 20:01:31.266925 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2025-06-22 20:01:31.266929 | orchestrator | Sunday 22 June 2025 19:59:15 +0000 (0:00:01.262) 0:09:26.448 *********** 2025-06-22 20:01:31.266934 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:01:31.266938 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:01:31.266943 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:01:31.266947 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:01:31.266952 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:01:31.266956 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:01:31.266964 | orchestrator | 2025-06-22 20:01:31.266969 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2025-06-22 20:01:31.266973 | orchestrator | Sunday 22 June 2025 19:59:17 +0000 (0:00:02.015) 0:09:28.463 *********** 2025-06-22 20:01:31.266978 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:01:31.266983 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:01:31.266987 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:01:31.266992 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:01:31.266996 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:01:31.267000 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:01:31.267005 | orchestrator | 2025-06-22 20:01:31.267009 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2025-06-22 20:01:31.267017 | orchestrator | Sunday 22 June 2025 19:59:21 +0000 (0:00:03.548) 0:09:32.011 *********** 2025-06-22 20:01:31.267022 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:01:31.267026 | orchestrator | 2025-06-22 20:01:31.267031 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2025-06-22 20:01:31.267036 | orchestrator | Sunday 22 June 2025 19:59:22 +0000 (0:00:01.414) 0:09:33.425 *********** 2025-06-22 20:01:31.267040 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:31.267045 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:31.267049 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:31.267054 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:01:31.267058 | orchestrator | ok: [testbed-node-1] 2025-06-22 
20:01:31.267063 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:01:31.267067 | orchestrator | 2025-06-22 20:01:31.267072 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2025-06-22 20:01:31.267077 | orchestrator | Sunday 22 June 2025 19:59:23 +0000 (0:00:01.002) 0:09:34.428 *********** 2025-06-22 20:01:31.267084 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:01:31.267088 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:01:31.267093 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:01:31.267097 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:01:31.267102 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:01:31.267107 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:01:31.267111 | orchestrator | 2025-06-22 20:01:31.267116 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2025-06-22 20:01:31.267120 | orchestrator | Sunday 22 June 2025 19:59:26 +0000 (0:00:02.608) 0:09:37.036 *********** 2025-06-22 20:01:31.267125 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:31.267129 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:31.267134 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:31.267148 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:01:31.267153 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:01:31.267157 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:01:31.267162 | orchestrator | 2025-06-22 20:01:31.267167 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2025-06-22 20:01:31.267171 | orchestrator | 2025-06-22 20:01:31.267176 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-06-22 20:01:31.267180 | orchestrator | Sunday 22 June 2025 19:59:27 +0000 (0:00:01.079) 0:09:38.116 *********** 2025-06-22 20:01:31.267185 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 20:01:31.267190 | orchestrator | 2025-06-22 20:01:31.267194 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-06-22 20:01:31.267199 | orchestrator | Sunday 22 June 2025 19:59:27 +0000 (0:00:00.492) 0:09:38.608 *********** 2025-06-22 20:01:31.267203 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 20:01:31.267208 | orchestrator | 2025-06-22 20:01:31.267213 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-06-22 20:01:31.267221 | orchestrator | Sunday 22 June 2025 19:59:28 +0000 (0:00:00.892) 0:09:39.501 *********** 2025-06-22 20:01:31.267225 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.267230 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.267235 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.267239 | orchestrator | 2025-06-22 20:01:31.267244 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-06-22 20:01:31.267248 | orchestrator | Sunday 22 June 2025 19:59:29 +0000 (0:00:00.312) 0:09:39.813 *********** 2025-06-22 20:01:31.267253 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:31.267258 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:31.267262 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:31.267267 | orchestrator | 2025-06-22 
20:01:31.267271 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-06-22 20:01:31.267276 | orchestrator | Sunday 22 June 2025 19:59:29 +0000 (0:00:00.734) 0:09:40.548 *********** 2025-06-22 20:01:31.267281 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:31.267285 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:31.267290 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:31.267294 | orchestrator | 2025-06-22 20:01:31.267299 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-06-22 20:01:31.267303 | orchestrator | Sunday 22 June 2025 19:59:31 +0000 (0:00:01.114) 0:09:41.663 *********** 2025-06-22 20:01:31.267308 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:31.267312 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:31.267317 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:31.267321 | orchestrator | 2025-06-22 20:01:31.267326 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-06-22 20:01:31.267330 | orchestrator | Sunday 22 June 2025 19:59:31 +0000 (0:00:00.802) 0:09:42.465 *********** 2025-06-22 20:01:31.267335 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.267340 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.267344 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.267349 | orchestrator | 2025-06-22 20:01:31.267353 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-06-22 20:01:31.267358 | orchestrator | Sunday 22 June 2025 19:59:32 +0000 (0:00:00.343) 0:09:42.808 *********** 2025-06-22 20:01:31.267363 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.267367 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.267372 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.267376 | orchestrator | 2025-06-22 20:01:31.267381 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-06-22 20:01:31.267385 | orchestrator | Sunday 22 June 2025 19:59:32 +0000 (0:00:00.315) 0:09:43.124 *********** 2025-06-22 20:01:31.267390 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.267394 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.267399 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.267404 | orchestrator | 2025-06-22 20:01:31.267408 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-06-22 20:01:31.267413 | orchestrator | Sunday 22 June 2025 19:59:33 +0000 (0:00:00.615) 0:09:43.740 *********** 2025-06-22 20:01:31.267417 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:31.267422 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:31.267429 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:31.267433 | orchestrator | 2025-06-22 20:01:31.267438 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-06-22 20:01:31.267442 | orchestrator | Sunday 22 June 2025 19:59:33 +0000 (0:00:00.821) 0:09:44.562 *********** 2025-06-22 20:01:31.267447 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:31.267452 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:31.267456 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:31.267461 | orchestrator | 2025-06-22 20:01:31.267465 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-06-22 
20:01:31.267470 | orchestrator | Sunday 22 June 2025 19:59:34 +0000 (0:00:00.790) 0:09:45.352 *********** 2025-06-22 20:01:31.267478 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.267483 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.267487 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.267492 | orchestrator | 2025-06-22 20:01:31.267496 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-06-22 20:01:31.267501 | orchestrator | Sunday 22 June 2025 19:59:35 +0000 (0:00:00.499) 0:09:45.852 *********** 2025-06-22 20:01:31.267508 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.267513 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.267517 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.267522 | orchestrator | 2025-06-22 20:01:31.267526 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-06-22 20:01:31.267531 | orchestrator | Sunday 22 June 2025 19:59:35 +0000 (0:00:00.730) 0:09:46.582 *********** 2025-06-22 20:01:31.267536 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:31.267540 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:31.267545 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:31.267549 | orchestrator | 2025-06-22 20:01:31.267554 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-06-22 20:01:31.267559 | orchestrator | Sunday 22 June 2025 19:59:36 +0000 (0:00:00.463) 0:09:47.046 *********** 2025-06-22 20:01:31.267563 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:31.267568 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:31.267572 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:31.267577 | orchestrator | 2025-06-22 20:01:31.267581 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-06-22 20:01:31.267586 | orchestrator | Sunday 22 June 2025 19:59:36 +0000 (0:00:00.455) 0:09:47.501 *********** 2025-06-22 20:01:31.267591 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:31.267595 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:31.267600 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:31.267604 | orchestrator | 2025-06-22 20:01:31.267609 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-06-22 20:01:31.267613 | orchestrator | Sunday 22 June 2025 19:59:37 +0000 (0:00:00.481) 0:09:47.983 *********** 2025-06-22 20:01:31.267618 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.267622 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.267627 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.267632 | orchestrator | 2025-06-22 20:01:31.267636 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-06-22 20:01:31.267641 | orchestrator | Sunday 22 June 2025 19:59:38 +0000 (0:00:00.834) 0:09:48.818 *********** 2025-06-22 20:01:31.267645 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.267650 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.267654 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.267659 | orchestrator | 2025-06-22 20:01:31.267664 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-06-22 20:01:31.267668 | orchestrator | Sunday 22 June 2025 19:59:38 +0000 (0:00:00.397) 0:09:49.215 *********** 
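The ceph-crash rollout that completes above uses the same two-step pattern as the other containerized daemons in this log: template a systemd unit for the container, then enable and start it. A hedged sketch, where the unit name, template and per-host instance naming are assumptions (the log only shows that a unit file is generated and the service is started):

  # Sketch; unit and template names are illustrative, not the role's actual files.
  - name: Generate systemd unit file for ceph-crash container
    ansible.builtin.template:
      src: ceph-crash.service.j2
      dest: /etc/systemd/system/ceph-crash@.service
      mode: "0644"

  - name: Start the ceph-crash service
    ansible.builtin.systemd:
      name: "ceph-crash@{{ ansible_facts['hostname'] }}"
      state: started
      enabled: true
      daemon_reload: true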
2025-06-22 20:01:31.267673 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.267677 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.267682 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.267686 | orchestrator | 2025-06-22 20:01:31.267691 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-06-22 20:01:31.267696 | orchestrator | Sunday 22 June 2025 19:59:39 +0000 (0:00:00.417) 0:09:49.633 *********** 2025-06-22 20:01:31.267700 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:31.267705 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:31.267709 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:31.267714 | orchestrator | 2025-06-22 20:01:31.267718 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-06-22 20:01:31.267723 | orchestrator | Sunday 22 June 2025 19:59:39 +0000 (0:00:00.349) 0:09:49.982 *********** 2025-06-22 20:01:31.267728 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:31.267732 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:31.267740 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:31.267745 | orchestrator | 2025-06-22 20:01:31.267750 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2025-06-22 20:01:31.267754 | orchestrator | Sunday 22 June 2025 19:59:39 +0000 (0:00:00.643) 0:09:50.626 *********** 2025-06-22 20:01:31.267759 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.267764 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.267768 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2025-06-22 20:01:31.267773 | orchestrator | 2025-06-22 20:01:31.267777 | orchestrator | TASK [ceph-facts : Get current default crush rule details] ********************* 2025-06-22 20:01:31.267782 | orchestrator | Sunday 22 June 2025 19:59:40 +0000 (0:00:00.455) 0:09:51.081 *********** 2025-06-22 20:01:31.267786 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-06-22 20:01:31.267791 | orchestrator | 2025-06-22 20:01:31.267796 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************ 2025-06-22 20:01:31.267800 | orchestrator | Sunday 22 June 2025 19:59:42 +0000 (0:00:02.356) 0:09:53.437 *********** 2025-06-22 20:01:31.267805 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2025-06-22 20:01:31.267812 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.267816 | orchestrator | 2025-06-22 20:01:31.267821 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2025-06-22 20:01:31.267829 | orchestrator | Sunday 22 June 2025 19:59:43 +0000 (0:00:00.351) 0:09:53.789 *********** 2025-06-22 20:01:31.267835 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-06-22 20:01:31.267844 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 
'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-06-22 20:01:31.267849 | orchestrator | 2025-06-22 20:01:31.267853 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2025-06-22 20:01:31.267861 | orchestrator | Sunday 22 June 2025 19:59:51 +0000 (0:00:08.596) 0:10:02.385 *********** 2025-06-22 20:01:31.267866 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-06-22 20:01:31.267870 | orchestrator | 2025-06-22 20:01:31.267875 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2025-06-22 20:01:31.267880 | orchestrator | Sunday 22 June 2025 19:59:55 +0000 (0:00:03.593) 0:10:05.979 *********** 2025-06-22 20:01:31.267884 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 20:01:31.267889 | orchestrator | 2025-06-22 20:01:31.267893 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2025-06-22 20:01:31.267898 | orchestrator | Sunday 22 June 2025 19:59:55 +0000 (0:00:00.465) 0:10:06.444 *********** 2025-06-22 20:01:31.267902 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2025-06-22 20:01:31.267907 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2025-06-22 20:01:31.267911 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2025-06-22 20:01:31.267916 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2025-06-22 20:01:31.267920 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2025-06-22 20:01:31.267925 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2025-06-22 20:01:31.267930 | orchestrator | 2025-06-22 20:01:31.267934 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2025-06-22 20:01:31.267942 | orchestrator | Sunday 22 June 2025 19:59:56 +0000 (0:00:01.016) 0:10:07.461 *********** 2025-06-22 20:01:31.267947 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 20:01:31.267951 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-06-22 20:01:31.267956 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-06-22 20:01:31.267961 | orchestrator | 2025-06-22 20:01:31.267965 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2025-06-22 20:01:31.267970 | orchestrator | Sunday 22 June 2025 19:59:59 +0000 (0:00:02.218) 0:10:09.679 *********** 2025-06-22 20:01:31.267974 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-06-22 20:01:31.267979 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-06-22 20:01:31.267983 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:01:31.267988 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-06-22 20:01:31.267993 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-06-22 20:01:31.267997 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:01:31.268002 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-06-22 20:01:31.268006 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-06-22 20:01:31.268011 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:01:31.268016 | orchestrator | 2025-06-22 
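Creating the CephFS above happens in two tasks delegated to the first monitor: first the data and metadata pools (pg_num/pgp_num 16, replicated_rule, size 3), then the filesystem itself. A hedged CLI-level equivalent written as plain tasks; the filesystem name and the extra pool settings such as replication size and the cephfs application tag are handled by the role and not fully visible in this log:

  # Sketch only; pg numbers and crush rule mirror the log, the fs name is an assumption.
  - name: Create CephFS data and metadata pools
    ansible.builtin.command: "ceph osd pool create {{ item }} 16 16 replicated replicated_rule"
    loop:
      - cephfs_data
      - cephfs_metadata
    delegate_to: testbed-node-0
    run_once: true

  - name: Create the filesystem from the two pools
    ansible.builtin.command: ceph fs new cephfs cephfs_metadata cephfs_data
    delegate_to: testbed-node-0
    run_once: true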
20:01:31.268020 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2025-06-22 20:01:31.268025 | orchestrator | Sunday 22 June 2025 20:00:00 +0000 (0:00:01.444) 0:10:11.123 *********** 2025-06-22 20:01:31.268029 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:01:31.268034 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:01:31.268039 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:01:31.268043 | orchestrator | 2025-06-22 20:01:31.268048 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2025-06-22 20:01:31.268052 | orchestrator | Sunday 22 June 2025 20:00:03 +0000 (0:00:02.687) 0:10:13.811 *********** 2025-06-22 20:01:31.268057 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.268061 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.268066 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.268070 | orchestrator | 2025-06-22 20:01:31.268075 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2025-06-22 20:01:31.268080 | orchestrator | Sunday 22 June 2025 20:00:03 +0000 (0:00:00.308) 0:10:14.119 *********** 2025-06-22 20:01:31.268084 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 20:01:31.268089 | orchestrator | 2025-06-22 20:01:31.268093 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2025-06-22 20:01:31.268098 | orchestrator | Sunday 22 June 2025 20:00:04 +0000 (0:00:00.798) 0:10:14.917 *********** 2025-06-22 20:01:31.268102 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 20:01:31.268107 | orchestrator | 2025-06-22 20:01:31.268112 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2025-06-22 20:01:31.268116 | orchestrator | Sunday 22 June 2025 20:00:04 +0000 (0:00:00.514) 0:10:15.432 *********** 2025-06-22 20:01:31.268121 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:01:31.268125 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:01:31.268130 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:01:31.268134 | orchestrator | 2025-06-22 20:01:31.268151 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2025-06-22 20:01:31.268156 | orchestrator | Sunday 22 June 2025 20:00:06 +0000 (0:00:01.243) 0:10:16.675 *********** 2025-06-22 20:01:31.268161 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:01:31.268165 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:01:31.268170 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:01:31.268174 | orchestrator | 2025-06-22 20:01:31.268179 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2025-06-22 20:01:31.268187 | orchestrator | Sunday 22 June 2025 20:00:07 +0000 (0:00:01.415) 0:10:18.091 *********** 2025-06-22 20:01:31.268192 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:01:31.268196 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:01:31.268201 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:01:31.268206 | orchestrator | 2025-06-22 20:01:31.268210 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2025-06-22 20:01:31.268215 | orchestrator | Sunday 22 June 2025 20:00:09 +0000 
(0:00:01.816) 0:10:19.908 *********** 2025-06-22 20:01:31.268219 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:01:31.268227 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:01:31.268232 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:01:31.268236 | orchestrator | 2025-06-22 20:01:31.268241 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2025-06-22 20:01:31.268245 | orchestrator | Sunday 22 June 2025 20:00:11 +0000 (0:00:02.074) 0:10:21.982 *********** 2025-06-22 20:01:31.268250 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:31.268254 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:31.268259 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:31.268264 | orchestrator | 2025-06-22 20:01:31.268268 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-06-22 20:01:31.268273 | orchestrator | Sunday 22 June 2025 20:00:12 +0000 (0:00:01.298) 0:10:23.280 *********** 2025-06-22 20:01:31.268277 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:01:31.268282 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:01:31.268286 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:01:31.268291 | orchestrator | 2025-06-22 20:01:31.268296 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2025-06-22 20:01:31.268300 | orchestrator | Sunday 22 June 2025 20:00:13 +0000 (0:00:00.651) 0:10:23.932 *********** 2025-06-22 20:01:31.268305 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 20:01:31.268309 | orchestrator | 2025-06-22 20:01:31.268314 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2025-06-22 20:01:31.268319 | orchestrator | Sunday 22 June 2025 20:00:14 +0000 (0:00:00.803) 0:10:24.736 *********** 2025-06-22 20:01:31.268323 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:31.268328 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:31.268332 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:31.268337 | orchestrator | 2025-06-22 20:01:31.268341 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2025-06-22 20:01:31.268346 | orchestrator | Sunday 22 June 2025 20:00:14 +0000 (0:00:00.326) 0:10:25.063 *********** 2025-06-22 20:01:31.268351 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:01:31.268355 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:01:31.268360 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:01:31.268364 | orchestrator | 2025-06-22 20:01:31.268369 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2025-06-22 20:01:31.268374 | orchestrator | Sunday 22 June 2025 20:00:15 +0000 (0:00:01.304) 0:10:26.367 *********** 2025-06-22 20:01:31.268378 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-22 20:01:31.268383 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-22 20:01:31.268387 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-22 20:01:31.268392 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.268396 | orchestrator | 2025-06-22 20:01:31.268401 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2025-06-22 20:01:31.268406 | orchestrator | Sunday 22 June 2025 20:00:16 +0000 (0:00:00.899) 
0:10:27.267 *********** 2025-06-22 20:01:31.268410 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:31.268415 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:31.268419 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:31.268424 | orchestrator | 2025-06-22 20:01:31.268428 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-06-22 20:01:31.268436 | orchestrator | 2025-06-22 20:01:31.268441 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-06-22 20:01:31.268446 | orchestrator | Sunday 22 June 2025 20:00:17 +0000 (0:00:00.808) 0:10:28.076 *********** 2025-06-22 20:01:31.268450 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 20:01:31.268455 | orchestrator | 2025-06-22 20:01:31.268459 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-06-22 20:01:31.268464 | orchestrator | Sunday 22 June 2025 20:00:18 +0000 (0:00:00.569) 0:10:28.645 *********** 2025-06-22 20:01:31.268469 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 20:01:31.268473 | orchestrator | 2025-06-22 20:01:31.268478 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-06-22 20:01:31.268483 | orchestrator | Sunday 22 June 2025 20:00:18 +0000 (0:00:00.742) 0:10:29.388 *********** 2025-06-22 20:01:31.268487 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.268492 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.268496 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.268501 | orchestrator | 2025-06-22 20:01:31.268505 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-06-22 20:01:31.268510 | orchestrator | Sunday 22 June 2025 20:00:19 +0000 (0:00:00.311) 0:10:29.700 *********** 2025-06-22 20:01:31.268515 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:31.268519 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:31.268524 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:31.268528 | orchestrator | 2025-06-22 20:01:31.268533 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-06-22 20:01:31.268540 | orchestrator | Sunday 22 June 2025 20:00:19 +0000 (0:00:00.750) 0:10:30.450 *********** 2025-06-22 20:01:31.268545 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:31.268549 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:31.268554 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:31.268558 | orchestrator | 2025-06-22 20:01:31.268563 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-06-22 20:01:31.268568 | orchestrator | Sunday 22 June 2025 20:00:20 +0000 (0:00:00.769) 0:10:31.219 *********** 2025-06-22 20:01:31.268572 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:31.268577 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:31.268581 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:31.268586 | orchestrator | 2025-06-22 20:01:31.268590 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-06-22 20:01:31.268595 | orchestrator | Sunday 22 June 2025 20:00:21 +0000 (0:00:00.874) 0:10:32.094 *********** 2025-06-22 20:01:31.268600 | 
orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.268604 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.268609 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.268613 | orchestrator | 2025-06-22 20:01:31.268621 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-06-22 20:01:31.268626 | orchestrator | Sunday 22 June 2025 20:00:21 +0000 (0:00:00.269) 0:10:32.363 *********** 2025-06-22 20:01:31.268630 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.268635 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.268639 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.268644 | orchestrator | 2025-06-22 20:01:31.268649 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-06-22 20:01:31.268653 | orchestrator | Sunday 22 June 2025 20:00:22 +0000 (0:00:00.307) 0:10:32.671 *********** 2025-06-22 20:01:31.268658 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.268662 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.268667 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.268672 | orchestrator | 2025-06-22 20:01:31.268676 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-06-22 20:01:31.268684 | orchestrator | Sunday 22 June 2025 20:00:22 +0000 (0:00:00.284) 0:10:32.955 *********** 2025-06-22 20:01:31.268689 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:31.268694 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:31.268698 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:31.268703 | orchestrator | 2025-06-22 20:01:31.268707 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-06-22 20:01:31.268712 | orchestrator | Sunday 22 June 2025 20:00:23 +0000 (0:00:00.922) 0:10:33.878 *********** 2025-06-22 20:01:31.268717 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:31.268721 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:31.268726 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:31.268730 | orchestrator | 2025-06-22 20:01:31.268735 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-06-22 20:01:31.268740 | orchestrator | Sunday 22 June 2025 20:00:23 +0000 (0:00:00.719) 0:10:34.598 *********** 2025-06-22 20:01:31.268744 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.268749 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.268753 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.268758 | orchestrator | 2025-06-22 20:01:31.268762 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-06-22 20:01:31.268767 | orchestrator | Sunday 22 June 2025 20:00:24 +0000 (0:00:00.324) 0:10:34.922 *********** 2025-06-22 20:01:31.268772 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.268776 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.268781 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.268785 | orchestrator | 2025-06-22 20:01:31.268790 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-06-22 20:01:31.268794 | orchestrator | Sunday 22 June 2025 20:00:24 +0000 (0:00:00.340) 0:10:35.263 *********** 2025-06-22 20:01:31.268799 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:31.268804 | orchestrator | ok: 
[testbed-node-4] 2025-06-22 20:01:31.268808 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:31.268813 | orchestrator | 2025-06-22 20:01:31.268817 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-06-22 20:01:31.268822 | orchestrator | Sunday 22 June 2025 20:00:25 +0000 (0:00:00.611) 0:10:35.874 *********** 2025-06-22 20:01:31.268827 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:31.268831 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:31.268836 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:31.268840 | orchestrator | 2025-06-22 20:01:31.268845 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-06-22 20:01:31.268849 | orchestrator | Sunday 22 June 2025 20:00:25 +0000 (0:00:00.351) 0:10:36.226 *********** 2025-06-22 20:01:31.268854 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:31.268858 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:31.268863 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:31.268867 | orchestrator | 2025-06-22 20:01:31.268872 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-06-22 20:01:31.268876 | orchestrator | Sunday 22 June 2025 20:00:25 +0000 (0:00:00.320) 0:10:36.546 *********** 2025-06-22 20:01:31.268881 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.268885 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.268890 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.268895 | orchestrator | 2025-06-22 20:01:31.268899 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-06-22 20:01:31.268904 | orchestrator | Sunday 22 June 2025 20:00:26 +0000 (0:00:00.312) 0:10:36.859 *********** 2025-06-22 20:01:31.268908 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.268913 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.268917 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.268922 | orchestrator | 2025-06-22 20:01:31.268926 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-06-22 20:01:31.268931 | orchestrator | Sunday 22 June 2025 20:00:26 +0000 (0:00:00.602) 0:10:37.461 *********** 2025-06-22 20:01:31.268935 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.268943 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.268947 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.268952 | orchestrator | 2025-06-22 20:01:31.268957 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-06-22 20:01:31.268964 | orchestrator | Sunday 22 June 2025 20:00:27 +0000 (0:00:00.345) 0:10:37.807 *********** 2025-06-22 20:01:31.268969 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:31.268973 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:31.268978 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:31.268982 | orchestrator | 2025-06-22 20:01:31.268987 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-06-22 20:01:31.268991 | orchestrator | Sunday 22 June 2025 20:00:27 +0000 (0:00:00.325) 0:10:38.132 *********** 2025-06-22 20:01:31.268996 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:31.269000 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:31.269005 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:31.269009 | orchestrator | 
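[Editor's note] The ceph-handler output above follows one pattern per daemon: probe the node for a running container, record the result as a handler_*_status fact, and let the restart handler fire later only where that fact is true. The tasks below are a minimal illustrative sketch of that pattern, not the ceph-ansible role source; the docker name filter and the restart script path are assumptions made for the example.

# Sketch of the check -> set_fact -> conditional restart pattern seen above.
# Container name filter and script path are assumed, not taken from ceph-ansible.
- name: Check for a rgw container
  ansible.builtin.command: docker ps -q --filter "name=ceph-rgw-{{ ansible_facts['hostname'] }}"
  register: rgw_container_check
  changed_when: false
  failed_when: false

- name: Set_fact handler_rgw_status
  ansible.builtin.set_fact:
    handler_rgw_status: "{{ rgw_container_check.stdout | length > 0 }}"

- name: Restart ceph rgw daemon(s)
  ansible.builtin.command: /usr/bin/env bash /tmp/restart_rgw_daemon.sh
  when: handler_rgw_status | bool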
2025-06-22 20:01:31.269014 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2025-06-22 20:01:31.269019 | orchestrator | Sunday 22 June 2025 20:00:28 +0000 (0:00:00.853) 0:10:38.985 *********** 2025-06-22 20:01:31.269023 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 20:01:31.269028 | orchestrator | 2025-06-22 20:01:31.269032 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-06-22 20:01:31.269040 | orchestrator | Sunday 22 June 2025 20:00:29 +0000 (0:00:00.728) 0:10:39.714 *********** 2025-06-22 20:01:31.269044 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 20:01:31.269049 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-06-22 20:01:31.269054 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-06-22 20:01:31.269058 | orchestrator | 2025-06-22 20:01:31.269063 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-06-22 20:01:31.269067 | orchestrator | Sunday 22 June 2025 20:00:31 +0000 (0:00:02.663) 0:10:42.377 *********** 2025-06-22 20:01:31.269072 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-06-22 20:01:31.269076 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-06-22 20:01:31.269081 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:01:31.269085 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-06-22 20:01:31.269090 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-06-22 20:01:31.269095 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:01:31.269099 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-06-22 20:01:31.269104 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-06-22 20:01:31.269108 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:01:31.269113 | orchestrator | 2025-06-22 20:01:31.269117 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2025-06-22 20:01:31.269122 | orchestrator | Sunday 22 June 2025 20:00:33 +0000 (0:00:01.592) 0:10:43.970 *********** 2025-06-22 20:01:31.269127 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.269131 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.269136 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.269166 | orchestrator | 2025-06-22 20:01:31.269171 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2025-06-22 20:01:31.269176 | orchestrator | Sunday 22 June 2025 20:00:33 +0000 (0:00:00.348) 0:10:44.318 *********** 2025-06-22 20:01:31.269180 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 20:01:31.269185 | orchestrator | 2025-06-22 20:01:31.269190 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2025-06-22 20:01:31.269194 | orchestrator | Sunday 22 June 2025 20:00:34 +0000 (0:00:00.681) 0:10:45.000 *********** 2025-06-22 20:01:31.269199 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-06-22 20:01:31.269207 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 
'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-06-22 20:01:31.269212 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-06-22 20:01:31.269216 | orchestrator | 2025-06-22 20:01:31.269221 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2025-06-22 20:01:31.269226 | orchestrator | Sunday 22 June 2025 20:00:35 +0000 (0:00:01.525) 0:10:46.526 *********** 2025-06-22 20:01:31.269230 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 20:01:31.269235 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-06-22 20:01:31.269239 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 20:01:31.269244 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-06-22 20:01:31.269249 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 20:01:31.269253 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-06-22 20:01:31.269258 | orchestrator | 2025-06-22 20:01:31.269262 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-06-22 20:01:31.269267 | orchestrator | Sunday 22 June 2025 20:00:40 +0000 (0:00:04.676) 0:10:51.203 *********** 2025-06-22 20:01:31.269272 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 20:01:31.269276 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-06-22 20:01:31.269281 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 20:01:31.269288 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2025-06-22 20:01:31.269293 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 20:01:31.269297 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-06-22 20:01:31.269302 | orchestrator | 2025-06-22 20:01:31.269306 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-06-22 20:01:31.269311 | orchestrator | Sunday 22 June 2025 20:00:43 +0000 (0:00:02.438) 0:10:53.642 *********** 2025-06-22 20:01:31.269315 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-06-22 20:01:31.269320 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:01:31.269324 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-06-22 20:01:31.269329 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:01:31.269334 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-06-22 20:01:31.269338 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:01:31.269343 | orchestrator | 2025-06-22 20:01:31.269347 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2025-06-22 20:01:31.269355 | orchestrator | Sunday 22 June 2025 20:00:44 +0000 (0:00:01.274) 0:10:54.916 *********** 2025-06-22 20:01:31.269360 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2025-06-22 20:01:31.269364 
| orchestrator | 2025-06-22 20:01:31.269369 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2025-06-22 20:01:31.269373 | orchestrator | Sunday 22 June 2025 20:00:44 +0000 (0:00:00.223) 0:10:55.140 *********** 2025-06-22 20:01:31.269378 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-22 20:01:31.269383 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-22 20:01:31.269391 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-22 20:01:31.269396 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-22 20:01:31.269401 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-22 20:01:31.269405 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.269410 | orchestrator | 2025-06-22 20:01:31.269414 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2025-06-22 20:01:31.269419 | orchestrator | Sunday 22 June 2025 20:00:45 +0000 (0:00:01.150) 0:10:56.291 *********** 2025-06-22 20:01:31.269423 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-22 20:01:31.269428 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-22 20:01:31.269433 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-22 20:01:31.269437 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-22 20:01:31.269442 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-06-22 20:01:31.269446 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.269451 | orchestrator | 2025-06-22 20:01:31.269455 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2025-06-22 20:01:31.269460 | orchestrator | Sunday 22 June 2025 20:00:46 +0000 (0:00:00.743) 0:10:57.035 *********** 2025-06-22 20:01:31.269465 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-06-22 20:01:31.269469 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-06-22 20:01:31.269474 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-06-22 20:01:31.269478 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-06-22 20:01:31.269483 | orchestrator | changed: 
[testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-06-22 20:01:31.269488 | orchestrator | 2025-06-22 20:01:31.269492 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2025-06-22 20:01:31.269497 | orchestrator | Sunday 22 June 2025 20:01:16 +0000 (0:00:30.430) 0:11:27.465 *********** 2025-06-22 20:01:31.269501 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.269506 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.269510 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.269515 | orchestrator | 2025-06-22 20:01:31.269519 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2025-06-22 20:01:31.269527 | orchestrator | Sunday 22 June 2025 20:01:17 +0000 (0:00:00.347) 0:11:27.813 *********** 2025-06-22 20:01:31.269532 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.269536 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.269541 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.269545 | orchestrator | 2025-06-22 20:01:31.269550 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2025-06-22 20:01:31.269554 | orchestrator | Sunday 22 June 2025 20:01:17 +0000 (0:00:00.367) 0:11:28.180 *********** 2025-06-22 20:01:31.269562 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 20:01:31.269567 | orchestrator | 2025-06-22 20:01:31.269572 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2025-06-22 20:01:31.269576 | orchestrator | Sunday 22 June 2025 20:01:18 +0000 (0:00:00.771) 0:11:28.951 *********** 2025-06-22 20:01:31.269581 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 20:01:31.269585 | orchestrator | 2025-06-22 20:01:31.269592 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2025-06-22 20:01:31.269597 | orchestrator | Sunday 22 June 2025 20:01:18 +0000 (0:00:00.529) 0:11:29.481 *********** 2025-06-22 20:01:31.269602 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:01:31.269606 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:01:31.269611 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:01:31.269615 | orchestrator | 2025-06-22 20:01:31.269620 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2025-06-22 20:01:31.269625 | orchestrator | Sunday 22 June 2025 20:01:20 +0000 (0:00:01.363) 0:11:30.844 *********** 2025-06-22 20:01:31.269629 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:01:31.269634 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:01:31.269638 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:01:31.269643 | orchestrator | 2025-06-22 20:01:31.269647 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2025-06-22 20:01:31.269652 | orchestrator | Sunday 22 June 2025 20:01:21 +0000 (0:00:01.410) 0:11:32.255 *********** 2025-06-22 20:01:31.269657 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:01:31.269661 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:01:31.269666 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:01:31.269670 | orchestrator | 
2025-06-22 20:01:31.269675 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2025-06-22 20:01:31.269679 | orchestrator | Sunday 22 June 2025 20:01:23 +0000 (0:00:02.177) 0:11:34.432 *********** 2025-06-22 20:01:31.269683 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-06-22 20:01:31.269687 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-06-22 20:01:31.269691 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-06-22 20:01:31.269696 | orchestrator | 2025-06-22 20:01:31.269700 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-06-22 20:01:31.269704 | orchestrator | Sunday 22 June 2025 20:01:26 +0000 (0:00:02.928) 0:11:37.361 *********** 2025-06-22 20:01:31.269708 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.269712 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.269716 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.269720 | orchestrator | 2025-06-22 20:01:31.269724 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2025-06-22 20:01:31.269729 | orchestrator | Sunday 22 June 2025 20:01:27 +0000 (0:00:00.359) 0:11:37.720 *********** 2025-06-22 20:01:31.269733 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 20:01:31.269737 | orchestrator | 2025-06-22 20:01:31.269741 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-06-22 20:01:31.269745 | orchestrator | Sunday 22 June 2025 20:01:27 +0000 (0:00:00.551) 0:11:38.272 *********** 2025-06-22 20:01:31.269749 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:31.269753 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:31.269757 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:31.269761 | orchestrator | 2025-06-22 20:01:31.269766 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-06-22 20:01:31.269773 | orchestrator | Sunday 22 June 2025 20:01:28 +0000 (0:00:00.721) 0:11:38.993 *********** 2025-06-22 20:01:31.269777 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.269781 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:01:31.269785 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:01:31.269789 | orchestrator | 2025-06-22 20:01:31.269793 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-06-22 20:01:31.269798 | orchestrator | Sunday 22 June 2025 20:01:28 +0000 (0:00:00.406) 0:11:39.399 *********** 2025-06-22 20:01:31.269802 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-22 20:01:31.269806 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-22 20:01:31.269810 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-22 20:01:31.269814 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:01:31.269818 | orchestrator | 2025-06-22 20:01:31.269822 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-06-22 20:01:31.269827 | orchestrator | Sunday 22 
June 2025 20:01:29 +0000 (0:00:00.725) 0:11:40.125 *********** 2025-06-22 20:01:31.269831 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:01:31.269835 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:01:31.269839 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:01:31.269843 | orchestrator | 2025-06-22 20:01:31.269847 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 20:01:31.269852 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2025-06-22 20:01:31.269859 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2025-06-22 20:01:31.269864 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2025-06-22 20:01:31.269868 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0 2025-06-22 20:01:31.269872 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2025-06-22 20:01:31.269879 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2025-06-22 20:01:31.269884 | orchestrator | 2025-06-22 20:01:31.269888 | orchestrator | 2025-06-22 20:01:31.269892 | orchestrator | 2025-06-22 20:01:31.269896 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 20:01:31.269900 | orchestrator | Sunday 22 June 2025 20:01:29 +0000 (0:00:00.264) 0:11:40.390 *********** 2025-06-22 20:01:31.269904 | orchestrator | =============================================================================== 2025-06-22 20:01:31.269909 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 73.08s 2025-06-22 20:01:31.269913 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 44.22s 2025-06-22 20:01:31.269917 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 36.31s 2025-06-22 20:01:31.269921 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 30.43s 2025-06-22 20:01:31.269925 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... 
------------ 21.92s 2025-06-22 20:01:31.269929 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 15.38s 2025-06-22 20:01:31.269933 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 13.00s 2025-06-22 20:01:31.269937 | orchestrator | ceph-mon : Fetch ceph initial keys ------------------------------------- 10.73s 2025-06-22 20:01:31.269941 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.52s 2025-06-22 20:01:31.269945 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.60s 2025-06-22 20:01:31.269952 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 7.92s 2025-06-22 20:01:31.269957 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.32s 2025-06-22 20:01:31.269961 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.04s 2025-06-22 20:01:31.269965 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.68s 2025-06-22 20:01:31.269969 | orchestrator | ceph-osd : Apply operating system tuning -------------------------------- 4.16s 2025-06-22 20:01:31.269973 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.15s 2025-06-22 20:01:31.269977 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 3.97s 2025-06-22 20:01:31.269981 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.85s 2025-06-22 20:01:31.269985 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.78s 2025-06-22 20:01:31.269989 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.59s 2025-06-22 20:01:34.297584 | orchestrator | 2025-06-22 20:01:34 | INFO  | Task d1070b7d-00f3-4007-b935-daeb69c70ff8 is in state STARTED 2025-06-22 20:01:34.300458 | orchestrator | 2025-06-22 20:01:34 | INFO  | Task c203ef30-6952-492e-b47a-51e32726175b is in state STARTED 2025-06-22 20:01:34.303902 | orchestrator | 2025-06-22 20:01:34 | INFO  | Task 38d54694-95db-4844-bb72-8967e27ceda1 is in state STARTED 2025-06-22 20:01:34.304108 | orchestrator | 2025-06-22 20:01:34 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:01:37.351487 | orchestrator | 2025-06-22 20:01:37 | INFO  | Task d1070b7d-00f3-4007-b935-daeb69c70ff8 is in state STARTED 2025-06-22 20:01:37.353730 | orchestrator | 2025-06-22 20:01:37 | INFO  | Task c203ef30-6952-492e-b47a-51e32726175b is in state STARTED 2025-06-22 20:01:37.356856 | orchestrator | 2025-06-22 20:01:37 | INFO  | Task 38d54694-95db-4844-bb72-8967e27ceda1 is in state STARTED 2025-06-22 20:01:37.357349 | orchestrator | 2025-06-22 20:01:37 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:01:40.413753 | orchestrator | 2025-06-22 20:01:40 | INFO  | Task d1070b7d-00f3-4007-b935-daeb69c70ff8 is in state STARTED 2025-06-22 20:01:40.413857 | orchestrator | 2025-06-22 20:01:40 | INFO  | Task c203ef30-6952-492e-b47a-51e32726175b is in state STARTED 2025-06-22 20:01:40.415699 | orchestrator | 2025-06-22 20:01:40 | INFO  | Task 38d54694-95db-4844-bb72-8967e27ceda1 is in state STARTED 2025-06-22 20:01:40.415733 | orchestrator | 2025-06-22 20:01:40 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:01:43.469290 | orchestrator | 2025-06-22 20:01:43 | INFO  | Task 
d1070b7d-00f3-4007-b935-daeb69c70ff8 is in state STARTED 2025-06-22 20:01:43.470703 | orchestrator | 2025-06-22 20:01:43 | INFO  | Task c203ef30-6952-492e-b47a-51e32726175b is in state STARTED 2025-06-22 20:01:43.472598 | orchestrator | 2025-06-22 20:01:43 | INFO  | Task 38d54694-95db-4844-bb72-8967e27ceda1 is in state STARTED 2025-06-22 20:01:43.472688 | orchestrator | 2025-06-22 20:01:43 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:01:46.521495 | orchestrator | 2025-06-22 20:01:46 | INFO  | Task d1070b7d-00f3-4007-b935-daeb69c70ff8 is in state STARTED 2025-06-22 20:01:46.522957 | orchestrator | 2025-06-22 20:01:46 | INFO  | Task c203ef30-6952-492e-b47a-51e32726175b is in state STARTED 2025-06-22 20:01:46.524561 | orchestrator | 2025-06-22 20:01:46 | INFO  | Task 38d54694-95db-4844-bb72-8967e27ceda1 is in state STARTED 2025-06-22 20:01:46.524791 | orchestrator | 2025-06-22 20:01:46 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:01:49.571323 | orchestrator | 2025-06-22 20:01:49 | INFO  | Task d1070b7d-00f3-4007-b935-daeb69c70ff8 is in state STARTED 2025-06-22 20:01:49.571742 | orchestrator | 2025-06-22 20:01:49 | INFO  | Task c203ef30-6952-492e-b47a-51e32726175b is in state STARTED 2025-06-22 20:01:49.572790 | orchestrator | 2025-06-22 20:01:49 | INFO  | Task 38d54694-95db-4844-bb72-8967e27ceda1 is in state STARTED 2025-06-22 20:01:49.573842 | orchestrator | 2025-06-22 20:01:49 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:01:52.611471 | orchestrator | 2025-06-22 20:01:52 | INFO  | Task d1070b7d-00f3-4007-b935-daeb69c70ff8 is in state STARTED 2025-06-22 20:01:52.617922 | orchestrator | 2025-06-22 20:01:52 | INFO  | Task c203ef30-6952-492e-b47a-51e32726175b is in state STARTED 2025-06-22 20:01:52.620365 | orchestrator | 2025-06-22 20:01:52 | INFO  | Task 38d54694-95db-4844-bb72-8967e27ceda1 is in state STARTED 2025-06-22 20:01:52.620412 | orchestrator | 2025-06-22 20:01:52 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:01:55.673845 | orchestrator | 2025-06-22 20:01:55 | INFO  | Task d1070b7d-00f3-4007-b935-daeb69c70ff8 is in state STARTED 2025-06-22 20:01:55.677217 | orchestrator | 2025-06-22 20:01:55 | INFO  | Task c203ef30-6952-492e-b47a-51e32726175b is in state SUCCESS 2025-06-22 20:01:55.679194 | orchestrator | 2025-06-22 20:01:55.679241 | orchestrator | 2025-06-22 20:01:55.679296 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-22 20:01:55.679310 | orchestrator | 2025-06-22 20:01:55.679322 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-22 20:01:55.679334 | orchestrator | Sunday 22 June 2025 19:59:10 +0000 (0:00:00.252) 0:00:00.252 *********** 2025-06-22 20:01:55.679346 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:01:55.679359 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:01:55.679370 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:01:55.679381 | orchestrator | 2025-06-22 20:01:55.679393 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-22 20:01:55.679409 | orchestrator | Sunday 22 June 2025 19:59:10 +0000 (0:00:00.310) 0:00:00.562 *********** 2025-06-22 20:01:55.679421 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2025-06-22 20:01:55.679433 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2025-06-22 20:01:55.679444 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 
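[Editor's note] The two plays above show the kolla-ansible pattern of sorting hosts into dynamic groups (here enable_opensearch_True) before a role is applied, after which the opensearch role begins by raising vm.max_map_count. A condensed, illustrative playbook of just that pattern follows, under the assumption of a plain boolean enable_opensearch variable; the actual grouping logic lives in kolla-ansible's playbooks.

# Condensed sketch of the grouping + sysctl steps visible in the log.
# Not the kolla-ansible source; the default(false) fallback is an assumption.
- name: Group hosts based on configuration
  hosts: all
  gather_facts: false
  tasks:
    - name: Group hosts based on enabled services
      ansible.builtin.group_by:
        key: "enable_opensearch_{{ enable_opensearch | default(false) | bool }}"

- name: Apply role opensearch
  hosts: enable_opensearch_True
  become: true
  tasks:
    - name: Setting sysctl values
      ansible.posix.sysctl:
        name: vm.max_map_count
        value: "262144"
        state: present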
2025-06-22 20:01:55.679455 | orchestrator | 2025-06-22 20:01:55.679466 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2025-06-22 20:01:55.679477 | orchestrator | 2025-06-22 20:01:55.679489 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-06-22 20:01:55.679500 | orchestrator | Sunday 22 June 2025 19:59:10 +0000 (0:00:00.405) 0:00:00.967 *********** 2025-06-22 20:01:55.679511 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:01:55.679523 | orchestrator | 2025-06-22 20:01:55.679534 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2025-06-22 20:01:55.679545 | orchestrator | Sunday 22 June 2025 19:59:11 +0000 (0:00:00.515) 0:00:01.483 *********** 2025-06-22 20:01:55.679557 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-06-22 20:01:55.679568 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-06-22 20:01:55.679579 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-06-22 20:01:55.679590 | orchestrator | 2025-06-22 20:01:55.679601 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2025-06-22 20:01:55.679613 | orchestrator | Sunday 22 June 2025 19:59:11 +0000 (0:00:00.698) 0:00:02.182 *********** 2025-06-22 20:01:55.679646 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-22 20:01:55.679686 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-22 20:01:55.679714 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-22 20:01:55.679730 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-22 20:01:55.679751 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-22 20:01:55.679776 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-22 20:01:55.679790 | orchestrator | 2025-06-22 20:01:55.679803 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-06-22 20:01:55.679816 | orchestrator | Sunday 22 June 2025 19:59:13 +0000 (0:00:01.713) 0:00:03.896 *********** 2025-06-22 20:01:55.679829 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:01:55.679841 | orchestrator | 2025-06-22 20:01:55.679854 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2025-06-22 20:01:55.679867 | orchestrator | Sunday 22 June 2025 19:59:14 +0000 (0:00:00.560) 0:00:04.456 *********** 2025-06-22 20:01:55.679889 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-22 20:01:55.679903 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-22 20:01:55.679922 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-22 20:01:55.679943 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-22 20:01:55.679965 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-22 20:01:55.679980 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 
'opensearch', 'auth_pass': 'password'}}}}) 2025-06-22 20:01:55.680000 | orchestrator | 2025-06-22 20:01:55.680013 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2025-06-22 20:01:55.680025 | orchestrator | Sunday 22 June 2025 19:59:17 +0000 (0:00:02.793) 0:00:07.249 *********** 2025-06-22 20:01:55.680044 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-22 20:01:55.680059 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-22 20:01:55.680073 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:55.680094 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-22 20:01:55.680107 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 
'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-22 20:01:55.680126 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:55.680233 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-22 20:01:55.680249 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-22 20:01:55.680261 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:55.680273 | orchestrator | 2025-06-22 20:01:55.680284 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2025-06-22 20:01:55.680295 | orchestrator | Sunday 22 June 2025 19:59:18 +0000 (0:00:01.575) 0:00:08.825 *********** 2025-06-22 20:01:55.680314 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': 
'-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-22 20:01:55.680327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-22 20:01:55.680347 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:55.680364 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-22 20:01:55.680376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 
'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-22 20:01:55.680388 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:55.680406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-06-22 20:01:55.680419 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-06-22 20:01:55.680437 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:55.680448 | orchestrator | 2025-06-22 20:01:55.680459 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2025-06-22 20:01:55.680471 | orchestrator | Sunday 22 June 2025 19:59:19 +0000 (0:00:00.839) 0:00:09.665 *********** 2025-06-22 20:01:55.680487 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-22 20:01:55.680500 | 
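Note on the "Copying over config.json files for services" task above: kolla-based containers start through kolla_start, which reads a per-service config.json describing which files to copy into place and with what ownership before the service command is executed. As a rough illustration only (the command and file entries below are assumptions for the opensearch container, not values taken from this log), such a file has roughly the following shape, written here via a small Python snippet:

import json

# Illustrative sketch of a kolla-style config.json for the opensearch
# container; the command and file paths are assumptions, not log output.
config = {
    "command": "/usr/share/opensearch/bin/opensearch",
    "config_files": [
        {
            "source": "/var/lib/kolla/config_files/opensearch.yml",
            "dest": "/etc/opensearch/opensearch.yml",
            "owner": "opensearch",
            "perm": "0600",
        }
    ],
    "permissions": [
        {
            "path": "/var/log/kolla/opensearch",
            "owner": "opensearch:opensearch",
            "recurse": True,
        }
    ],
}

print(json.dumps(config, indent=2))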
orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-22 20:01:55.680512 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-22 20:01:55.680531 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-22 20:01:55.680562 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': 
'30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-22 20:01:55.680580 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-22 20:01:55.680592 | orchestrator | 2025-06-22 20:01:55.680604 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2025-06-22 20:01:55.680615 | orchestrator | Sunday 22 June 2025 19:59:21 +0000 (0:00:02.432) 0:00:12.097 *********** 2025-06-22 20:01:55.680626 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:01:55.680638 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:01:55.680683 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:01:55.680695 | orchestrator | 2025-06-22 20:01:55.680706 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2025-06-22 20:01:55.680717 | orchestrator | Sunday 22 June 2025 19:59:25 +0000 (0:00:04.098) 0:00:16.195 *********** 2025-06-22 20:01:55.680728 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:01:55.680739 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:01:55.680750 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:01:55.680761 | orchestrator | 2025-06-22 20:01:55.680772 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2025-06-22 20:01:55.680783 | orchestrator | Sunday 22 June 2025 19:59:27 +0000 (0:00:01.845) 0:00:18.041 *********** 2025-06-22 20:01:55.680805 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-22 20:01:55.680825 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-22 20:01:55.680843 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250530', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-06-22 20:01:55.680888 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-22 20:01:55.680910 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-22 20:01:55.680992 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-06-22 20:01:55.681005 | orchestrator | 2025-06-22 20:01:55.681016 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-06-22 20:01:55.681028 | orchestrator | Sunday 22 June 2025 19:59:30 +0000 (0:00:02.522) 0:00:20.563 *********** 2025-06-22 20:01:55.681039 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:55.681050 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:01:55.681061 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:01:55.681072 | orchestrator | 2025-06-22 20:01:55.681083 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-06-22 20:01:55.681094 | orchestrator | Sunday 22 June 2025 19:59:30 +0000 (0:00:00.351) 0:00:20.914 *********** 2025-06-22 20:01:55.681105 | orchestrator | 2025-06-22 20:01:55.681116 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-06-22 20:01:55.681127 | orchestrator | Sunday 22 June 2025 19:59:30 +0000 (0:00:00.067) 0:00:20.981 *********** 2025-06-22 20:01:55.681157 | orchestrator | 2025-06-22 20:01:55.681168 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-06-22 20:01:55.681179 | orchestrator | Sunday 22 June 2025 19:59:30 +0000 (0:00:00.065) 0:00:21.047 *********** 2025-06-22 20:01:55.681190 | orchestrator | 2025-06-22 20:01:55.681201 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2025-06-22 20:01:55.681218 | orchestrator | Sunday 22 June 2025 19:59:31 +0000 (0:00:00.351) 0:00:21.399 *********** 2025-06-22 20:01:55.681229 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:55.681240 | orchestrator | 2025-06-22 20:01:55.681251 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2025-06-22 20:01:55.681263 | 
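Note on the "Disable shard allocation" and "Perform a flush" handlers above: both are skipped here, presumably because this is a fresh deployment with no running cluster to drain. On a cluster that is already up they would correspond to the usual OpenSearch REST calls made before restarting nodes: restricting shard allocation via the cluster settings API and flushing indices. A minimal sketch under those assumptions (not the kolla-ansible implementation), using the unauthenticated internal endpoint seen in the healthchecks above:

import json
import urllib.request

# Minimal sketch: limit shard allocation and flush before a rolling restart.
# Assumes the internal endpoint is reachable without authentication.
BASE = "http://192.168.16.10:9200"

def put_json(path, body):
    data = json.dumps(body).encode()
    req = urllib.request.Request(
        BASE + path,
        data=data,
        method="PUT",
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req).read()

# Allow only primary shard allocation while nodes restart.
put_json("/_cluster/settings",
         {"transient": {"cluster.routing.allocation.enable": "primaries"}})

# Flush so restarted nodes recover quickly from on-disk segments.
flush = urllib.request.Request(BASE + "/_flush", data=b"", method="POST")
urllib.request.urlopen(flush)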
orchestrator | Sunday 22 June 2025 19:59:31 +0000 (0:00:00.230) 0:00:21.629 *********** 2025-06-22 20:01:55.681273 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:01:55.681284 | orchestrator | 2025-06-22 20:01:55.681296 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2025-06-22 20:01:55.681307 | orchestrator | Sunday 22 June 2025 19:59:31 +0000 (0:00:00.209) 0:00:21.839 *********** 2025-06-22 20:01:55.681318 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:01:55.681329 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:01:55.681340 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:01:55.681350 | orchestrator | 2025-06-22 20:01:55.681361 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2025-06-22 20:01:55.681373 | orchestrator | Sunday 22 June 2025 20:00:27 +0000 (0:00:56.336) 0:01:18.176 *********** 2025-06-22 20:01:55.681383 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:01:55.681402 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:01:55.681413 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:01:55.681423 | orchestrator | 2025-06-22 20:01:55.681434 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-06-22 20:01:55.681445 | orchestrator | Sunday 22 June 2025 20:01:43 +0000 (0:01:15.916) 0:02:34.092 *********** 2025-06-22 20:01:55.681456 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:01:55.681467 | orchestrator | 2025-06-22 20:01:55.681479 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2025-06-22 20:01:55.681490 | orchestrator | Sunday 22 June 2025 20:01:44 +0000 (0:00:00.679) 0:02:34.772 *********** 2025-06-22 20:01:55.681501 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:01:55.681511 | orchestrator | 2025-06-22 20:01:55.681523 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2025-06-22 20:01:55.681534 | orchestrator | Sunday 22 June 2025 20:01:46 +0000 (0:00:02.397) 0:02:37.169 *********** 2025-06-22 20:01:55.681545 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:01:55.681556 | orchestrator | 2025-06-22 20:01:55.681567 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2025-06-22 20:01:55.681578 | orchestrator | Sunday 22 June 2025 20:01:49 +0000 (0:00:02.269) 0:02:39.438 *********** 2025-06-22 20:01:55.681588 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:01:55.681599 | orchestrator | 2025-06-22 20:01:55.681610 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2025-06-22 20:01:55.681621 | orchestrator | Sunday 22 June 2025 20:01:51 +0000 (0:00:02.599) 0:02:42.037 *********** 2025-06-22 20:01:55.681633 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:01:55.681643 | orchestrator | 2025-06-22 20:01:55.681661 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 20:01:55.681673 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-22 20:01:55.681686 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-22 20:01:55.681698 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  
rescued=0 ignored=0 2025-06-22 20:01:55.681708 | orchestrator | 2025-06-22 20:01:55.681719 | orchestrator | 2025-06-22 20:01:55.681730 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 20:01:55.681742 | orchestrator | Sunday 22 June 2025 20:01:54 +0000 (0:00:02.550) 0:02:44.588 *********** 2025-06-22 20:01:55.681752 | orchestrator | =============================================================================== 2025-06-22 20:01:55.681763 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 75.92s 2025-06-22 20:01:55.681774 | orchestrator | opensearch : Restart opensearch container ------------------------------ 56.34s 2025-06-22 20:01:55.681785 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 4.10s 2025-06-22 20:01:55.681796 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.79s 2025-06-22 20:01:55.681807 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.60s 2025-06-22 20:01:55.681818 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.55s 2025-06-22 20:01:55.681829 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.52s 2025-06-22 20:01:55.681839 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.43s 2025-06-22 20:01:55.681850 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.40s 2025-06-22 20:01:55.681861 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.27s 2025-06-22 20:01:55.681872 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.85s 2025-06-22 20:01:55.681890 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.71s 2025-06-22 20:01:55.681901 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.58s 2025-06-22 20:01:55.681912 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 0.84s 2025-06-22 20:01:55.681923 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.70s 2025-06-22 20:01:55.681933 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.68s 2025-06-22 20:01:55.681949 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.56s 2025-06-22 20:01:55.681960 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.52s 2025-06-22 20:01:55.681971 | orchestrator | opensearch : Flush handlers --------------------------------------------- 0.49s 2025-06-22 20:01:55.681982 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.41s 2025-06-22 20:01:55.681992 | orchestrator | 2025-06-22 20:01:55 | INFO  | Task 38d54694-95db-4844-bb72-8967e27ceda1 is in state STARTED 2025-06-22 20:01:55.682004 | orchestrator | 2025-06-22 20:01:55 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:01:58.737452 | orchestrator | 2025-06-22 20:01:58 | INFO  | Task d1070b7d-00f3-4007-b935-daeb69c70ff8 is in state STARTED 2025-06-22 20:01:58.738529 | orchestrator | 2025-06-22 20:01:58 | INFO  | Task 38d54694-95db-4844-bb72-8967e27ceda1 is in state STARTED 2025-06-22 20:01:58.738616 | orchestrator | 2025-06-22 20:01:58 | INFO 
 | Wait 1 second(s) until the next check 2025-06-22 20:02:01.797978 | orchestrator | 2025-06-22 20:02:01 | INFO  | Task d1070b7d-00f3-4007-b935-daeb69c70ff8 is in state STARTED 2025-06-22 20:02:01.800023 | orchestrator | 2025-06-22 20:02:01 | INFO  | Task 38d54694-95db-4844-bb72-8967e27ceda1 is in state STARTED 2025-06-22 20:02:01.800220 | orchestrator | 2025-06-22 20:02:01 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:02:04.843414 | orchestrator | 2025-06-22 20:02:04 | INFO  | Task d1070b7d-00f3-4007-b935-daeb69c70ff8 is in state STARTED 2025-06-22 20:02:04.845609 | orchestrator | 2025-06-22 20:02:04 | INFO  | Task 38d54694-95db-4844-bb72-8967e27ceda1 is in state STARTED 2025-06-22 20:02:04.845639 | orchestrator | 2025-06-22 20:02:04 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:02:07.890738 | orchestrator | 2025-06-22 20:02:07 | INFO  | Task d1070b7d-00f3-4007-b935-daeb69c70ff8 is in state STARTED 2025-06-22 20:02:07.892471 | orchestrator | 2025-06-22 20:02:07 | INFO  | Task 38d54694-95db-4844-bb72-8967e27ceda1 is in state STARTED 2025-06-22 20:02:07.892594 | orchestrator | 2025-06-22 20:02:07 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:02:10.944354 | orchestrator | 2025-06-22 20:02:10 | INFO  | Task d1070b7d-00f3-4007-b935-daeb69c70ff8 is in state STARTED 2025-06-22 20:02:10.945668 | orchestrator | 2025-06-22 20:02:10 | INFO  | Task 38d54694-95db-4844-bb72-8967e27ceda1 is in state STARTED 2025-06-22 20:02:10.945958 | orchestrator | 2025-06-22 20:02:10 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:02:13.987974 | orchestrator | 2025-06-22 20:02:13 | INFO  | Task d1070b7d-00f3-4007-b935-daeb69c70ff8 is in state STARTED 2025-06-22 20:02:13.989325 | orchestrator | 2025-06-22 20:02:13 | INFO  | Task 38d54694-95db-4844-bb72-8967e27ceda1 is in state STARTED 2025-06-22 20:02:13.989439 | orchestrator | 2025-06-22 20:02:13 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:02:17.040501 | orchestrator | 2025-06-22 20:02:17 | INFO  | Task d1070b7d-00f3-4007-b935-daeb69c70ff8 is in state STARTED 2025-06-22 20:02:17.041884 | orchestrator | 2025-06-22 20:02:17 | INFO  | Task 38d54694-95db-4844-bb72-8967e27ceda1 is in state STARTED 2025-06-22 20:02:17.041945 | orchestrator | 2025-06-22 20:02:17 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:02:20.089636 | orchestrator | 2025-06-22 20:02:20 | INFO  | Task d1070b7d-00f3-4007-b935-daeb69c70ff8 is in state STARTED 2025-06-22 20:02:20.091627 | orchestrator | 2025-06-22 20:02:20 | INFO  | Task 38d54694-95db-4844-bb72-8967e27ceda1 is in state STARTED 2025-06-22 20:02:20.091662 | orchestrator | 2025-06-22 20:02:20 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:02:23.140641 | orchestrator | 2025-06-22 20:02:23 | INFO  | Task d1070b7d-00f3-4007-b935-daeb69c70ff8 is in state STARTED 2025-06-22 20:02:23.142980 | orchestrator | 2025-06-22 20:02:23 | INFO  | Task 38d54694-95db-4844-bb72-8967e27ceda1 is in state STARTED 2025-06-22 20:02:23.143060 | orchestrator | 2025-06-22 20:02:23 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:02:26.204460 | orchestrator | 2025-06-22 20:02:26 | INFO  | Task d1070b7d-00f3-4007-b935-daeb69c70ff8 is in state STARTED 2025-06-22 20:02:26.206180 | orchestrator | 2025-06-22 20:02:26 | INFO  | Task 38d54694-95db-4844-bb72-8967e27ceda1 is in state SUCCESS 2025-06-22 20:02:26.209263 | orchestrator | 2025-06-22 20:02:26.209333 | orchestrator | 2025-06-22 20:02:26.209356 | orchestrator | PLAY [Set kolla_action_mariadb] 
************************************************ 2025-06-22 20:02:26.209373 | orchestrator | 2025-06-22 20:02:26.209418 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-06-22 20:02:26.209442 | orchestrator | Sunday 22 June 2025 19:59:10 +0000 (0:00:00.108) 0:00:00.108 *********** 2025-06-22 20:02:26.209454 | orchestrator | ok: [localhost] => { 2025-06-22 20:02:26.209482 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2025-06-22 20:02:26.209493 | orchestrator | } 2025-06-22 20:02:26.209506 | orchestrator | 2025-06-22 20:02:26.209517 | orchestrator | TASK [Check MariaDB service] *************************************************** 2025-06-22 20:02:26.209528 | orchestrator | Sunday 22 June 2025 19:59:10 +0000 (0:00:00.064) 0:00:00.172 *********** 2025-06-22 20:02:26.209539 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2025-06-22 20:02:26.209552 | orchestrator | ...ignoring 2025-06-22 20:02:26.209564 | orchestrator | 2025-06-22 20:02:26.209575 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2025-06-22 20:02:26.209586 | orchestrator | Sunday 22 June 2025 19:59:12 +0000 (0:00:02.836) 0:00:03.008 *********** 2025-06-22 20:02:26.209597 | orchestrator | skipping: [localhost] 2025-06-22 20:02:26.209608 | orchestrator | 2025-06-22 20:02:26.209619 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2025-06-22 20:02:26.209633 | orchestrator | Sunday 22 June 2025 19:59:13 +0000 (0:00:00.059) 0:00:03.067 *********** 2025-06-22 20:02:26.209653 | orchestrator | ok: [localhost] 2025-06-22 20:02:26.209671 | orchestrator | 2025-06-22 20:02:26.209692 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-22 20:02:26.209711 | orchestrator | 2025-06-22 20:02:26.209731 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-22 20:02:26.209750 | orchestrator | Sunday 22 June 2025 19:59:13 +0000 (0:00:00.151) 0:00:03.219 *********** 2025-06-22 20:02:26.209771 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:26.209793 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:02:26.209815 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:02:26.209828 | orchestrator | 2025-06-22 20:02:26.209846 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-22 20:02:26.209865 | orchestrator | Sunday 22 June 2025 19:59:13 +0000 (0:00:00.301) 0:00:03.520 *********** 2025-06-22 20:02:26.209884 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-06-22 20:02:26.209903 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-06-22 20:02:26.209922 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-06-22 20:02:26.209973 | orchestrator | 2025-06-22 20:02:26.209986 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-06-22 20:02:26.209997 | orchestrator | 2025-06-22 20:02:26.210009 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-06-22 20:02:26.210083 | orchestrator | Sunday 22 June 2025 19:59:14 +0000 (0:00:00.751) 0:00:04.271 *********** 2025-06-22 20:02:26.210096 | orchestrator | ok: 
[testbed-node-0] => (item=testbed-node-0) 2025-06-22 20:02:26.210107 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-06-22 20:02:26.210117 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-06-22 20:02:26.210155 | orchestrator | 2025-06-22 20:02:26.210175 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-06-22 20:02:26.210187 | orchestrator | Sunday 22 June 2025 19:59:14 +0000 (0:00:00.422) 0:00:04.694 *********** 2025-06-22 20:02:26.210198 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:02:26.210210 | orchestrator | 2025-06-22 20:02:26.210221 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2025-06-22 20:02:26.210232 | orchestrator | Sunday 22 June 2025 19:59:15 +0000 (0:00:00.626) 0:00:05.320 *********** 2025-06-22 20:02:26.210278 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-22 20:02:26.210296 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 
'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-22 20:02:26.210319 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-22 20:02:26.210331 | orchestrator | 2025-06-22 20:02:26.210350 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2025-06-22 20:02:26.210362 | orchestrator | Sunday 22 June 2025 19:59:18 +0000 (0:00:03.604) 0:00:08.925 *********** 2025-06-22 20:02:26.210373 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.210384 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:02:26.210395 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.210406 | orchestrator | 2025-06-22 
20:02:26.210422 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2025-06-22 20:02:26.210433 | orchestrator | Sunday 22 June 2025 19:59:19 +0000 (0:00:00.690) 0:00:09.615 *********** 2025-06-22 20:02:26.210444 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.210455 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.210466 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:02:26.210476 | orchestrator | 2025-06-22 20:02:26.210487 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2025-06-22 20:02:26.210498 | orchestrator | Sunday 22 June 2025 19:59:21 +0000 (0:00:01.514) 0:00:11.130 *********** 2025-06-22 20:02:26.210511 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-22 20:02:26.210537 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 
3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-22 20:02:26.210555 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-22 20:02:26.210574 | orchestrator | 2025-06-22 20:02:26.210585 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2025-06-22 20:02:26.210597 | orchestrator | Sunday 22 June 2025 19:59:26 +0000 (0:00:05.268) 0:00:16.399 *********** 2025-06-22 20:02:26.210607 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.210618 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.210629 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:02:26.210640 | orchestrator | 2025-06-22 20:02:26.210650 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2025-06-22 20:02:26.210661 | orchestrator | Sunday 22 June 2025 19:59:27 +0000 (0:00:01.342) 0:00:17.741 
*********** 2025-06-22 20:02:26.210672 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:02:26.210683 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:02:26.210694 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:02:26.210705 | orchestrator | 2025-06-22 20:02:26.210716 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-06-22 20:02:26.210727 | orchestrator | Sunday 22 June 2025 19:59:32 +0000 (0:00:04.790) 0:00:22.532 *********** 2025-06-22 20:02:26.210738 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:02:26.210749 | orchestrator | 2025-06-22 20:02:26.210760 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-06-22 20:02:26.210770 | orchestrator | Sunday 22 June 2025 19:59:33 +0000 (0:00:00.725) 0:00:23.257 *********** 2025-06-22 20:02:26.210795 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-22 20:02:26.210822 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.210834 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': 
{'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-22 20:02:26.210846 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.210870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-22 20:02:26.210888 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.210900 | orchestrator | 2025-06-22 20:02:26.210911 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-06-22 20:02:26.210922 | orchestrator | Sunday 22 June 2025 19:59:36 +0000 (0:00:03.310) 0:00:26.568 *********** 2025-06-22 20:02:26.210933 | 
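Note on the mariadb haproxy settings shown in the items above: the custom_member_list sends all traffic to testbed-node-0 and marks the other two Galera members as "backup", so only one node takes writes at a time. A small sketch (not the kolla template itself) of how such server lines could be rendered from a host list, producing output matching the member entries in the task output:

# Sketch only: render haproxy server lines for a Galera backend where the
# first host is the active member and the rest are hot standbys ("backup"),
# mirroring the custom_member_list values seen above.
hosts = {
    "testbed-node-0": "192.168.16.10",
    "testbed-node-1": "192.168.16.11",
    "testbed-node-2": "192.168.16.12",
}

def member_lines(hosts, port=3306):
    lines = []
    for index, (name, addr) in enumerate(hosts.items()):
        backup = "" if index == 0 else " backup"
        lines.append(f"server {name} {addr}:{port} check port {port} "
                     f"inter 2000 rise 2 fall 5{backup}")
    return lines

for line in member_lines(hosts):
    print(line)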
orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-22 20:02:26.210945 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.210966 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 
inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-22 20:02:26.211002 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.211023 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-22 20:02:26.211043 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.211062 | orchestrator | 2025-06-22 20:02:26.211081 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-06-22 20:02:26.211101 | orchestrator | Sunday 22 June 2025 19:59:39 +0000 (0:00:02.937) 0:00:29.506 *********** 2025-06-22 20:02:26.211154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 
'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-22 20:02:26.211191 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.211220 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-22 20:02:26.211241 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.211263 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-06-22 20:02:26.211295 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.211311 | orchestrator | 2025-06-22 20:02:26.211323 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2025-06-22 20:02:26.211334 | orchestrator | Sunday 22 June 2025 19:59:42 +0000 (0:00:02.689) 0:00:32.195 *********** 2025-06-22 20:02:26.211361 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-22 20:02:26.211376 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 
'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-06-22 20:02:26.211411 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 
2000 rise 2 fall 5 backup', '']}}}}) 2025-06-22 20:02:26.211425 | orchestrator | 2025-06-22 20:02:26.211437 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2025-06-22 20:02:26.211448 | orchestrator | Sunday 22 June 2025 19:59:45 +0000 (0:00:03.398) 0:00:35.594 *********** 2025-06-22 20:02:26.211459 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:02:26.211470 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:02:26.211481 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:02:26.211492 | orchestrator | 2025-06-22 20:02:26.211503 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2025-06-22 20:02:26.211514 | orchestrator | Sunday 22 June 2025 19:59:46 +0000 (0:00:01.136) 0:00:36.731 *********** 2025-06-22 20:02:26.211526 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:26.211536 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:02:26.211547 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:02:26.211559 | orchestrator | 2025-06-22 20:02:26.211570 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2025-06-22 20:02:26.211639 | orchestrator | Sunday 22 June 2025 19:59:47 +0000 (0:00:00.469) 0:00:37.201 *********** 2025-06-22 20:02:26.211653 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:26.211681 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:02:26.211693 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:02:26.211704 | orchestrator | 2025-06-22 20:02:26.211715 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2025-06-22 20:02:26.211726 | orchestrator | Sunday 22 June 2025 19:59:47 +0000 (0:00:00.383) 0:00:37.585 *********** 2025-06-22 20:02:26.211739 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2025-06-22 20:02:26.211751 | orchestrator | ...ignoring 2025-06-22 20:02:26.211762 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2025-06-22 20:02:26.211781 | orchestrator | ...ignoring 2025-06-22 20:02:26.211793 | orchestrator | fatal: [testbed-node-1]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2025-06-22 20:02:26.211803 | orchestrator | ...ignoring 2025-06-22 20:02:26.211814 | orchestrator | 2025-06-22 20:02:26.211825 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2025-06-22 20:02:26.211836 | orchestrator | Sunday 22 June 2025 19:59:58 +0000 (0:00:10.885) 0:00:48.470 *********** 2025-06-22 20:02:26.211847 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:26.211858 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:02:26.211869 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:02:26.211880 | orchestrator | 2025-06-22 20:02:26.211891 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2025-06-22 20:02:26.211902 | orchestrator | Sunday 22 June 2025 19:59:58 +0000 (0:00:00.547) 0:00:49.017 *********** 2025-06-22 20:02:26.211913 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.211924 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.211935 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.211946 | orchestrator | 2025-06-22 20:02:26.211957 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2025-06-22 20:02:26.211968 | orchestrator | Sunday 22 June 2025 19:59:59 +0000 (0:00:00.380) 0:00:49.397 *********** 2025-06-22 20:02:26.211979 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.211990 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.212000 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.212011 | orchestrator | 2025-06-22 20:02:26.212022 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2025-06-22 20:02:26.212033 | orchestrator | Sunday 22 June 2025 19:59:59 +0000 (0:00:00.390) 0:00:49.788 *********** 2025-06-22 20:02:26.212044 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.212055 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.212067 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.212078 | orchestrator | 2025-06-22 20:02:26.212089 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2025-06-22 20:02:26.212107 | orchestrator | Sunday 22 June 2025 20:00:00 +0000 (0:00:00.385) 0:00:50.174 *********** 2025-06-22 20:02:26.212119 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:26.212151 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:02:26.212163 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:02:26.212174 | orchestrator | 2025-06-22 20:02:26.212185 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2025-06-22 20:02:26.212196 | orchestrator | Sunday 22 June 2025 20:00:00 +0000 (0:00:00.530) 0:00:50.705 *********** 2025-06-22 20:02:26.212213 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.212224 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.212235 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.212246 | orchestrator | 2025-06-22 20:02:26.212257 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-06-22 20:02:26.212268 | orchestrator | Sunday 22 June 2025 20:00:01 +0000 (0:00:00.371) 0:00:51.077 *********** 2025-06-22 20:02:26.212278 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.212289 | orchestrator | skipping: 
[testbed-node-2] 2025-06-22 20:02:26.212300 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2025-06-22 20:02:26.212311 | orchestrator | 2025-06-22 20:02:26.212322 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2025-06-22 20:02:26.212333 | orchestrator | Sunday 22 June 2025 20:00:01 +0000 (0:00:00.344) 0:00:51.421 *********** 2025-06-22 20:02:26.212344 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:02:26.212354 | orchestrator | 2025-06-22 20:02:26.212365 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2025-06-22 20:02:26.212376 | orchestrator | Sunday 22 June 2025 20:00:11 +0000 (0:00:10.032) 0:01:01.454 *********** 2025-06-22 20:02:26.212387 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:26.212405 | orchestrator | 2025-06-22 20:02:26.212416 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-06-22 20:02:26.212427 | orchestrator | Sunday 22 June 2025 20:00:11 +0000 (0:00:00.138) 0:01:01.592 *********** 2025-06-22 20:02:26.212438 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.212448 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.212459 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.212470 | orchestrator | 2025-06-22 20:02:26.212481 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2025-06-22 20:02:26.212492 | orchestrator | Sunday 22 June 2025 20:00:12 +0000 (0:00:00.872) 0:01:02.464 *********** 2025-06-22 20:02:26.212503 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:02:26.212514 | orchestrator | 2025-06-22 20:02:26.212525 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2025-06-22 20:02:26.212536 | orchestrator | Sunday 22 June 2025 20:00:20 +0000 (0:00:07.900) 0:01:10.365 *********** 2025-06-22 20:02:26.212547 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:26.212558 | orchestrator | 2025-06-22 20:02:26.212568 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2025-06-22 20:02:26.212580 | orchestrator | Sunday 22 June 2025 20:00:21 +0000 (0:00:01.582) 0:01:11.947 *********** 2025-06-22 20:02:26.212591 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:26.212602 | orchestrator | 2025-06-22 20:02:26.212613 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2025-06-22 20:02:26.212624 | orchestrator | Sunday 22 June 2025 20:00:24 +0000 (0:00:02.322) 0:01:14.270 *********** 2025-06-22 20:02:26.212635 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:02:26.212645 | orchestrator | 2025-06-22 20:02:26.212656 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2025-06-22 20:02:26.212667 | orchestrator | Sunday 22 June 2025 20:00:24 +0000 (0:00:00.147) 0:01:14.417 *********** 2025-06-22 20:02:26.212678 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.212689 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.212700 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.212711 | orchestrator | 2025-06-22 20:02:26.212722 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2025-06-22 20:02:26.212733 | orchestrator | Sunday 22 June 2025 20:00:24 +0000 (0:00:00.567) 0:01:14.985 *********** 
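The bootstrap sequence above runs a one-off bootstrap container on testbed-node-0, starts the first MariaDB container there, and only proceeds once that node reports itself as synced to the Galera cluster. A minimal sketch of such a WSREP sync check, assuming the mariadb container name and the monitor credentials shown in the service definition above (mariadb_monitor_password is a placeholder variable; the real kolla-ansible handler may be implemented differently):

    - name: Wait for the node to reach WSREP state "Synced" (illustrative sketch)
      ansible.builtin.command: >
        docker exec mariadb mysql -u monitor -p{{ mariadb_monitor_password }}
        -e "SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment'"
      register: wsrep_state
      until: "'Synced' in wsrep_state.stdout"
      retries: 20
      delay: 6
      changed_when: false
      no_log: true   # the monitor password is sensitive

Only after a check like this passes does the play move on to the remaining cluster members, as the handlers below show.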
2025-06-22 20:02:26.212744 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.212755 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-06-22 20:02:26.212765 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:02:26.212776 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:02:26.212787 | orchestrator | 2025-06-22 20:02:26.212798 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-06-22 20:02:26.212809 | orchestrator | skipping: no hosts matched 2025-06-22 20:02:26.212820 | orchestrator | 2025-06-22 20:02:26.212830 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-06-22 20:02:26.212842 | orchestrator | 2025-06-22 20:02:26.212852 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-06-22 20:02:26.212863 | orchestrator | Sunday 22 June 2025 20:00:25 +0000 (0:00:00.334) 0:01:15.319 *********** 2025-06-22 20:02:26.212874 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:02:26.212885 | orchestrator | 2025-06-22 20:02:26.212896 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-06-22 20:02:26.212907 | orchestrator | Sunday 22 June 2025 20:00:46 +0000 (0:00:21.560) 0:01:36.880 *********** 2025-06-22 20:02:26.212918 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:02:26.212928 | orchestrator | 2025-06-22 20:02:26.212940 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-06-22 20:02:26.212951 | orchestrator | Sunday 22 June 2025 20:01:07 +0000 (0:00:20.605) 0:01:57.485 *********** 2025-06-22 20:02:26.212962 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:02:26.212972 | orchestrator | 2025-06-22 20:02:26.212984 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-06-22 20:02:26.213001 | orchestrator | 2025-06-22 20:02:26.213012 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-06-22 20:02:26.213023 | orchestrator | Sunday 22 June 2025 20:01:09 +0000 (0:00:02.499) 0:01:59.984 *********** 2025-06-22 20:02:26.213033 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:02:26.213044 | orchestrator | 2025-06-22 20:02:26.213055 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-06-22 20:02:26.213073 | orchestrator | Sunday 22 June 2025 20:01:28 +0000 (0:00:18.931) 0:02:18.916 *********** 2025-06-22 20:02:26.213084 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:02:26.213095 | orchestrator | 2025-06-22 20:02:26.213106 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-06-22 20:02:26.213117 | orchestrator | Sunday 22 June 2025 20:01:49 +0000 (0:00:20.622) 0:02:39.538 *********** 2025-06-22 20:02:26.213153 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:02:26.213166 | orchestrator | 2025-06-22 20:02:26.213177 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-06-22 20:02:26.213188 | orchestrator | 2025-06-22 20:02:26.213208 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-06-22 20:02:26.213219 | orchestrator | Sunday 22 June 2025 20:01:52 +0000 (0:00:02.762) 0:02:42.300 *********** 2025-06-22 20:02:26.213230 | orchestrator | changed: [testbed-node-0] 
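The two "Start mariadb services" plays above bring testbed-node-1 and testbed-node-2 into the cluster one node at a time: each container is restarted and the play then waits for the service port and for WSREP sync before continuing. This also explains the earlier "Timeout when waiting for search string MariaDB" failures, which came from the same kind of port probe running before any container existed. A rough equivalent of that per-node check, assuming the inventory host addresses used above (kolla-ansible's own tasks manage the container through its kolla_docker module rather than a plain docker restart):

    - hosts: testbed-node-1:testbed-node-2
      serial: 1                       # one node at a time, as in the plays above
      tasks:
        - name: Restart the MariaDB container (sketch)
          ansible.builtin.command: docker restart mariadb

        - name: Wait until the server answers with the "MariaDB" banner (sketch)
          ansible.builtin.wait_for:
            host: "{{ ansible_host }}"
            port: 3306
            search_regex: MariaDB     # matches the timeout message seen earlier
            timeout: 60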
2025-06-22 20:02:26.213241 | orchestrator | 2025-06-22 20:02:26.213252 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-06-22 20:02:26.213263 | orchestrator | Sunday 22 June 2025 20:02:04 +0000 (0:00:11.842) 0:02:54.143 *********** 2025-06-22 20:02:26.213274 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:26.213285 | orchestrator | 2025-06-22 20:02:26.213296 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-06-22 20:02:26.213307 | orchestrator | Sunday 22 June 2025 20:02:08 +0000 (0:00:04.566) 0:02:58.709 *********** 2025-06-22 20:02:26.213318 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:26.213329 | orchestrator | 2025-06-22 20:02:26.213340 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-06-22 20:02:26.213351 | orchestrator | 2025-06-22 20:02:26.213362 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-06-22 20:02:26.213373 | orchestrator | Sunday 22 June 2025 20:02:11 +0000 (0:00:02.401) 0:03:01.110 *********** 2025-06-22 20:02:26.213384 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:02:26.213394 | orchestrator | 2025-06-22 20:02:26.213405 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2025-06-22 20:02:26.213416 | orchestrator | Sunday 22 June 2025 20:02:11 +0000 (0:00:00.522) 0:03:01.633 *********** 2025-06-22 20:02:26.213427 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.213438 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.213449 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:02:26.213460 | orchestrator | 2025-06-22 20:02:26.213471 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2025-06-22 20:02:26.213482 | orchestrator | Sunday 22 June 2025 20:02:13 +0000 (0:00:02.355) 0:03:03.988 *********** 2025-06-22 20:02:26.213493 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.213504 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.213515 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:02:26.213525 | orchestrator | 2025-06-22 20:02:26.213537 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2025-06-22 20:02:26.213547 | orchestrator | Sunday 22 June 2025 20:02:16 +0000 (0:00:02.121) 0:03:06.110 *********** 2025-06-22 20:02:26.213558 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.213569 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.213580 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:02:26.213591 | orchestrator | 2025-06-22 20:02:26.213602 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2025-06-22 20:02:26.213613 | orchestrator | Sunday 22 June 2025 20:02:18 +0000 (0:00:02.090) 0:03:08.201 *********** 2025-06-22 20:02:26.213630 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.213641 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.213652 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:02:26.213663 | orchestrator | 2025-06-22 20:02:26.213674 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2025-06-22 20:02:26.213685 | orchestrator | Sunday 22 June 2025 20:02:20 +0000 (0:00:02.120) 0:03:10.321 *********** 
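This last post-deploy check goes through the HAProxy frontend rather than an individual node: per the service definition above, HAProxy exposes MariaDB on port 3306 at the internal VIP and routes to testbed-node-0, with testbed-node-1 and testbed-node-2 listed as backup members. A sketch of such a VIP-level readiness check, where kolla_internal_vip_address stands for the internal VIP of this testbed (not shown in this excerpt):

    - name: Wait for MariaDB to answer through the internal VIP (illustrative)
      ansible.builtin.wait_for:
        host: "{{ kolla_internal_vip_address }}"   # assumed variable for the HAProxy VIP
        port: 3306
        search_regex: MariaDB
        timeout: 120
      run_once: true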
2025-06-22 20:02:26.213695 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:02:26.213706 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:02:26.213717 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:02:26.213728 | orchestrator | 2025-06-22 20:02:26.213739 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-06-22 20:02:26.213750 | orchestrator | Sunday 22 June 2025 20:02:23 +0000 (0:00:02.851) 0:03:13.173 *********** 2025-06-22 20:02:26.213761 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:02:26.213772 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:02:26.213783 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:02:26.213794 | orchestrator | 2025-06-22 20:02:26.213805 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 20:02:26.213816 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-06-22 20:02:26.213827 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2025-06-22 20:02:26.213840 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-06-22 20:02:26.213851 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-06-22 20:02:26.213862 | orchestrator | 2025-06-22 20:02:26.213873 | orchestrator | 2025-06-22 20:02:26.213884 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 20:02:26.213895 | orchestrator | Sunday 22 June 2025 20:02:23 +0000 (0:00:00.252) 0:03:13.425 *********** 2025-06-22 20:02:26.213906 | orchestrator | =============================================================================== 2025-06-22 20:02:26.213917 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 41.23s 2025-06-22 20:02:26.213928 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 40.49s 2025-06-22 20:02:26.213945 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 11.84s 2025-06-22 20:02:26.213956 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.89s 2025-06-22 20:02:26.213967 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.03s 2025-06-22 20:02:26.213978 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.90s 2025-06-22 20:02:26.213989 | orchestrator | mariadb : Copying over config.json files for services ------------------- 5.27s 2025-06-22 20:02:26.214005 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.26s 2025-06-22 20:02:26.214046 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.79s 2025-06-22 20:02:26.214060 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.57s 2025-06-22 20:02:26.214071 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.60s 2025-06-22 20:02:26.214082 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.40s 2025-06-22 20:02:26.214093 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.31s 2025-06-22 20:02:26.214104 | orchestrator | service-cert-copy : mariadb | Copying over 
backend internal TLS certificate --- 2.94s 2025-06-22 20:02:26.214115 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.85s 2025-06-22 20:02:26.214151 | orchestrator | Check MariaDB service --------------------------------------------------- 2.84s 2025-06-22 20:02:26.214163 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.69s 2025-06-22 20:02:26.214174 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.40s 2025-06-22 20:02:26.214184 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.36s 2025-06-22 20:02:26.214195 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.32s 2025-06-22 20:02:26.214206 | orchestrator | 2025-06-22 20:02:26 | INFO  | Task 37e6b566-9d3f-4dc6-821f-05d6da068b54 is in state STARTED 2025-06-22 20:02:26.214326 | orchestrator | 2025-06-22 20:02:26 | INFO  | Task 35e7ac7a-81fa-417d-a203-78c2eb4c0a2b is in state STARTED 2025-06-22 20:02:26.214341 | orchestrator | 2025-06-22 20:02:26 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:02:29.255202 | orchestrator | 2025-06-22 20:02:29 | INFO  | Task d1070b7d-00f3-4007-b935-daeb69c70ff8 is in state STARTED 2025-06-22 20:02:29.256377 | orchestrator | 2025-06-22 20:02:29 | INFO  | Task 37e6b566-9d3f-4dc6-821f-05d6da068b54 is in state STARTED 2025-06-22 20:02:29.257210 | orchestrator | 2025-06-22 20:02:29 | INFO  | Task 35e7ac7a-81fa-417d-a203-78c2eb4c0a2b is in state STARTED 2025-06-22 20:02:29.257240 | orchestrator | 2025-06-22 20:02:29 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:02:32.302783 | orchestrator | 2025-06-22 20:02:32 | INFO  | Task d1070b7d-00f3-4007-b935-daeb69c70ff8 is in state STARTED 2025-06-22 20:02:32.305554 | orchestrator | 2025-06-22 20:02:32 | INFO  | Task 37e6b566-9d3f-4dc6-821f-05d6da068b54 is in state STARTED 2025-06-22 20:02:32.307066 | orchestrator | 2025-06-22 20:02:32 | INFO  | Task 35e7ac7a-81fa-417d-a203-78c2eb4c0a2b is in state STARTED 2025-06-22 20:02:32.307098 | orchestrator | 2025-06-22 20:02:32 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:02:35.347944 | orchestrator | 2025-06-22 20:02:35 | INFO  | Task d1070b7d-00f3-4007-b935-daeb69c70ff8 is in state STARTED 2025-06-22 20:02:35.349120 | orchestrator | 2025-06-22 20:02:35 | INFO  | Task 37e6b566-9d3f-4dc6-821f-05d6da068b54 is in state STARTED 2025-06-22 20:02:35.350473 | orchestrator | 2025-06-22 20:02:35 | INFO  | Task 35e7ac7a-81fa-417d-a203-78c2eb4c0a2b is in state STARTED 2025-06-22 20:02:35.350779 | orchestrator | 2025-06-22 20:02:35 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:02:38.389702 | orchestrator | 2025-06-22 20:02:38 | INFO  | Task d1070b7d-00f3-4007-b935-daeb69c70ff8 is in state STARTED 2025-06-22 20:02:38.392066 | orchestrator | 2025-06-22 20:02:38 | INFO  | Task 37e6b566-9d3f-4dc6-821f-05d6da068b54 is in state STARTED 2025-06-22 20:02:38.395206 | orchestrator | 2025-06-22 20:02:38 | INFO  | Task 35e7ac7a-81fa-417d-a203-78c2eb4c0a2b is in state STARTED 2025-06-22 20:02:38.395245 | orchestrator | 2025-06-22 20:02:38 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:02:41.438702 | orchestrator | 2025-06-22 20:02:41 | INFO  | Task d1070b7d-00f3-4007-b935-daeb69c70ff8 is in state STARTED 2025-06-22 20:02:41.442606 | orchestrator | 2025-06-22 20:02:41 | INFO  | Task 37e6b566-9d3f-4dc6-821f-05d6da068b54 is in state STARTED 2025-06-22 20:02:41.442648 | 
orchestrator | 2025-06-22 20:02:41 | INFO  | Task 35e7ac7a-81fa-417d-a203-78c2eb4c0a2b is in state STARTED 2025-06-22 20:02:41.442662 | orchestrator | 2025-06-22 20:02:41 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:02:44.472974 | orchestrator | 2025-06-22 20:02:44 | INFO  | Task d1070b7d-00f3-4007-b935-daeb69c70ff8 is in state STARTED 2025-06-22 20:02:44.473259 | orchestrator | 2025-06-22 20:02:44 | INFO  | Task 37e6b566-9d3f-4dc6-821f-05d6da068b54 is in state STARTED 2025-06-22 20:02:44.474423 | orchestrator | 2025-06-22 20:02:44 | INFO  | Task 35e7ac7a-81fa-417d-a203-78c2eb4c0a2b is in state STARTED 2025-06-22 20:02:44.474475 | orchestrator | 2025-06-22 20:02:44 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:02:47.519172 | orchestrator | 2025-06-22 20:02:47 | INFO  | Task d1070b7d-00f3-4007-b935-daeb69c70ff8 is in state STARTED 2025-06-22 20:02:47.521431 | orchestrator | 2025-06-22 20:02:47 | INFO  | Task 37e6b566-9d3f-4dc6-821f-05d6da068b54 is in state STARTED 2025-06-22 20:02:47.521498 | orchestrator | 2025-06-22 20:02:47 | INFO  | Task 35e7ac7a-81fa-417d-a203-78c2eb4c0a2b is in state STARTED 2025-06-22 20:02:47.521510 | orchestrator | 2025-06-22 20:02:47 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:02:50.570453 | orchestrator | 2025-06-22 20:02:50 | INFO  | Task d1070b7d-00f3-4007-b935-daeb69c70ff8 is in state STARTED 2025-06-22 20:02:50.574634 | orchestrator | 2025-06-22 20:02:50 | INFO  | Task 37e6b566-9d3f-4dc6-821f-05d6da068b54 is in state STARTED 2025-06-22 20:02:50.575080 | orchestrator | 2025-06-22 20:02:50 | INFO  | Task 35e7ac7a-81fa-417d-a203-78c2eb4c0a2b is in state STARTED 2025-06-22 20:02:50.575106 | orchestrator | 2025-06-22 20:02:50 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:02:53.619372 | orchestrator | 2025-06-22 20:02:53 | INFO  | Task d1070b7d-00f3-4007-b935-daeb69c70ff8 is in state STARTED 2025-06-22 20:02:53.621156 | orchestrator | 2025-06-22 20:02:53 | INFO  | Task 37e6b566-9d3f-4dc6-821f-05d6da068b54 is in state STARTED 2025-06-22 20:02:53.623033 | orchestrator | 2025-06-22 20:02:53 | INFO  | Task 35e7ac7a-81fa-417d-a203-78c2eb4c0a2b is in state STARTED 2025-06-22 20:02:53.623081 | orchestrator | 2025-06-22 20:02:53 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:02:56.670958 | orchestrator | 2025-06-22 20:02:56 | INFO  | Task d1070b7d-00f3-4007-b935-daeb69c70ff8 is in state STARTED 2025-06-22 20:02:56.672373 | orchestrator | 2025-06-22 20:02:56 | INFO  | Task 37e6b566-9d3f-4dc6-821f-05d6da068b54 is in state STARTED 2025-06-22 20:02:56.676500 | orchestrator | 2025-06-22 20:02:56 | INFO  | Task 35e7ac7a-81fa-417d-a203-78c2eb4c0a2b is in state STARTED 2025-06-22 20:02:56.676556 | orchestrator | 2025-06-22 20:02:56 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:02:59.727823 | orchestrator | 2025-06-22 20:02:59 | INFO  | Task d1070b7d-00f3-4007-b935-daeb69c70ff8 is in state STARTED 2025-06-22 20:02:59.727926 | orchestrator | 2025-06-22 20:02:59 | INFO  | Task 37e6b566-9d3f-4dc6-821f-05d6da068b54 is in state STARTED 2025-06-22 20:02:59.730493 | orchestrator | 2025-06-22 20:02:59 | INFO  | Task 35e7ac7a-81fa-417d-a203-78c2eb4c0a2b is in state STARTED 2025-06-22 20:02:59.730582 | orchestrator | 2025-06-22 20:02:59 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:03:02.775621 | orchestrator | 2025-06-22 20:03:02 | INFO  | Task d1070b7d-00f3-4007-b935-daeb69c70ff8 is in state STARTED 2025-06-22 20:03:02.777507 | orchestrator | 2025-06-22 20:03:02 | INFO  | Task 
37e6b566-9d3f-4dc6-821f-05d6da068b54 is in state STARTED 2025-06-22 20:03:02.779437 | orchestrator | 2025-06-22 20:03:02 | INFO  | Task 35e7ac7a-81fa-417d-a203-78c2eb4c0a2b is in state STARTED 2025-06-22 20:03:02.779483 | orchestrator | 2025-06-22 20:03:02 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:03:05.829338 | orchestrator | 2025-06-22 20:03:05 | INFO  | Task d1070b7d-00f3-4007-b935-daeb69c70ff8 is in state STARTED 2025-06-22 20:03:05.832755 | orchestrator | 2025-06-22 20:03:05 | INFO  | Task 37e6b566-9d3f-4dc6-821f-05d6da068b54 is in state STARTED 2025-06-22 20:03:05.835198 | orchestrator | 2025-06-22 20:03:05 | INFO  | Task 35e7ac7a-81fa-417d-a203-78c2eb4c0a2b is in state STARTED 2025-06-22 20:03:05.835636 | orchestrator | 2025-06-22 20:03:05 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:03:08.882319 | orchestrator | 2025-06-22 20:03:08 | INFO  | Task d1070b7d-00f3-4007-b935-daeb69c70ff8 is in state STARTED 2025-06-22 20:03:08.884046 | orchestrator | 2025-06-22 20:03:08 | INFO  | Task 37e6b566-9d3f-4dc6-821f-05d6da068b54 is in state STARTED 2025-06-22 20:03:08.886061 | orchestrator | 2025-06-22 20:03:08 | INFO  | Task 35e7ac7a-81fa-417d-a203-78c2eb4c0a2b is in state STARTED 2025-06-22 20:03:08.886113 | orchestrator | 2025-06-22 20:03:08 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:03:11.927415 | orchestrator | 2025-06-22 20:03:11 | INFO  | Task d1070b7d-00f3-4007-b935-daeb69c70ff8 is in state STARTED 2025-06-22 20:03:11.929621 | orchestrator | 2025-06-22 20:03:11 | INFO  | Task 37e6b566-9d3f-4dc6-821f-05d6da068b54 is in state STARTED 2025-06-22 20:03:11.931512 | orchestrator | 2025-06-22 20:03:11 | INFO  | Task 35e7ac7a-81fa-417d-a203-78c2eb4c0a2b is in state STARTED 2025-06-22 20:03:11.931668 | orchestrator | 2025-06-22 20:03:11 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:03:14.969357 | orchestrator | 2025-06-22 20:03:14 | INFO  | Task d1070b7d-00f3-4007-b935-daeb69c70ff8 is in state STARTED 2025-06-22 20:03:14.970985 | orchestrator | 2025-06-22 20:03:14 | INFO  | Task 37e6b566-9d3f-4dc6-821f-05d6da068b54 is in state STARTED 2025-06-22 20:03:14.972813 | orchestrator | 2025-06-22 20:03:14 | INFO  | Task 35e7ac7a-81fa-417d-a203-78c2eb4c0a2b is in state STARTED 2025-06-22 20:03:14.972984 | orchestrator | 2025-06-22 20:03:14 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:03:18.016959 | orchestrator | 2025-06-22 20:03:18 | INFO  | Task d1070b7d-00f3-4007-b935-daeb69c70ff8 is in state STARTED 2025-06-22 20:03:18.017387 | orchestrator | 2025-06-22 20:03:18 | INFO  | Task 37e6b566-9d3f-4dc6-821f-05d6da068b54 is in state STARTED 2025-06-22 20:03:18.018491 | orchestrator | 2025-06-22 20:03:18 | INFO  | Task 35e7ac7a-81fa-417d-a203-78c2eb4c0a2b is in state STARTED 2025-06-22 20:03:18.018528 | orchestrator | 2025-06-22 20:03:18 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:03:21.056375 | orchestrator | 2025-06-22 20:03:21 | INFO  | Task d1070b7d-00f3-4007-b935-daeb69c70ff8 is in state STARTED 2025-06-22 20:03:21.058362 | orchestrator | 2025-06-22 20:03:21 | INFO  | Task 37e6b566-9d3f-4dc6-821f-05d6da068b54 is in state STARTED 2025-06-22 20:03:21.060356 | orchestrator | 2025-06-22 20:03:21 | INFO  | Task 35e7ac7a-81fa-417d-a203-78c2eb4c0a2b is in state STARTED 2025-06-22 20:03:21.060397 | orchestrator | 2025-06-22 20:03:21 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:03:24.101249 | orchestrator | 2025-06-22 20:03:24 | INFO  | Task d1070b7d-00f3-4007-b935-daeb69c70ff8 is in state 
STARTED 2025-06-22 20:03:24.103362 | orchestrator | 2025-06-22 20:03:24 | INFO  | Task 37e6b566-9d3f-4dc6-821f-05d6da068b54 is in state STARTED 2025-06-22 20:03:24.105759 | orchestrator | 2025-06-22 20:03:24 | INFO  | Task 35e7ac7a-81fa-417d-a203-78c2eb4c0a2b is in state STARTED 2025-06-22 20:03:24.105802 | orchestrator | 2025-06-22 20:03:24 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:03:27.147490 | orchestrator | 2025-06-22 20:03:27 | INFO  | Task d1070b7d-00f3-4007-b935-daeb69c70ff8 is in state STARTED 2025-06-22 20:03:27.148389 | orchestrator | 2025-06-22 20:03:27 | INFO  | Task 37e6b566-9d3f-4dc6-821f-05d6da068b54 is in state STARTED 2025-06-22 20:03:27.150573 | orchestrator | 2025-06-22 20:03:27 | INFO  | Task 35e7ac7a-81fa-417d-a203-78c2eb4c0a2b is in state STARTED 2025-06-22 20:03:27.150857 | orchestrator | 2025-06-22 20:03:27 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:03:30.192076 | orchestrator | 2025-06-22 20:03:30 | INFO  | Task d1070b7d-00f3-4007-b935-daeb69c70ff8 is in state STARTED 2025-06-22 20:03:30.194244 | orchestrator | 2025-06-22 20:03:30 | INFO  | Task 37e6b566-9d3f-4dc6-821f-05d6da068b54 is in state STARTED 2025-06-22 20:03:30.195915 | orchestrator | 2025-06-22 20:03:30 | INFO  | Task 35e7ac7a-81fa-417d-a203-78c2eb4c0a2b is in state STARTED 2025-06-22 20:03:30.196235 | orchestrator | 2025-06-22 20:03:30 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:03:33.236347 | orchestrator | 2025-06-22 20:03:33 | INFO  | Task d1070b7d-00f3-4007-b935-daeb69c70ff8 is in state STARTED 2025-06-22 20:03:33.238887 | orchestrator | 2025-06-22 20:03:33 | INFO  | Task 37e6b566-9d3f-4dc6-821f-05d6da068b54 is in state STARTED 2025-06-22 20:03:33.240118 | orchestrator | 2025-06-22 20:03:33 | INFO  | Task 35e7ac7a-81fa-417d-a203-78c2eb4c0a2b is in state STARTED 2025-06-22 20:03:33.240156 | orchestrator | 2025-06-22 20:03:33 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:03:36.287290 | orchestrator | 2025-06-22 20:03:36 | INFO  | Task d1070b7d-00f3-4007-b935-daeb69c70ff8 is in state STARTED 2025-06-22 20:03:36.289844 | orchestrator | 2025-06-22 20:03:36 | INFO  | Task 37e6b566-9d3f-4dc6-821f-05d6da068b54 is in state STARTED 2025-06-22 20:03:36.292578 | orchestrator | 2025-06-22 20:03:36 | INFO  | Task 35e7ac7a-81fa-417d-a203-78c2eb4c0a2b is in state STARTED 2025-06-22 20:03:36.292970 | orchestrator | 2025-06-22 20:03:36 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:03:39.337726 | orchestrator | 2025-06-22 20:03:39 | INFO  | Task d1070b7d-00f3-4007-b935-daeb69c70ff8 is in state STARTED 2025-06-22 20:03:39.339711 | orchestrator | 2025-06-22 20:03:39 | INFO  | Task 37e6b566-9d3f-4dc6-821f-05d6da068b54 is in state STARTED 2025-06-22 20:03:39.341035 | orchestrator | 2025-06-22 20:03:39 | INFO  | Task 35e7ac7a-81fa-417d-a203-78c2eb4c0a2b is in state STARTED 2025-06-22 20:03:39.341266 | orchestrator | 2025-06-22 20:03:39 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:03:42.380982 | orchestrator | 2025-06-22 20:03:42.381068 | orchestrator | 2025-06-22 20:03:42.381234 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2025-06-22 20:03:42.381256 | orchestrator | 2025-06-22 20:03:42.381267 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-06-22 20:03:42.381280 | orchestrator | Sunday 22 June 2025 20:01:34 +0000 (0:00:00.598) 0:00:00.598 *********** 2025-06-22 20:03:42.381292 | orchestrator | included: 
/ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 20:03:42.381304 | orchestrator | 2025-06-22 20:03:42.381316 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2025-06-22 20:03:42.381406 | orchestrator | Sunday 22 June 2025 20:01:35 +0000 (0:00:00.594) 0:00:01.192 *********** 2025-06-22 20:03:42.381421 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:03:42.381433 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:03:42.381444 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:03:42.381455 | orchestrator | 2025-06-22 20:03:42.381666 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2025-06-22 20:03:42.381682 | orchestrator | Sunday 22 June 2025 20:01:35 +0000 (0:00:00.685) 0:00:01.878 *********** 2025-06-22 20:03:42.381696 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:03:42.381707 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:03:42.381718 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:03:42.381729 | orchestrator | 2025-06-22 20:03:42.381741 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2025-06-22 20:03:42.381752 | orchestrator | Sunday 22 June 2025 20:01:36 +0000 (0:00:00.270) 0:00:02.149 *********** 2025-06-22 20:03:42.381787 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:03:42.381799 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:03:42.381809 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:03:42.381820 | orchestrator | 2025-06-22 20:03:42.381832 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2025-06-22 20:03:42.381843 | orchestrator | Sunday 22 June 2025 20:01:36 +0000 (0:00:00.818) 0:00:02.968 *********** 2025-06-22 20:03:42.381854 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:03:42.381865 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:03:42.381875 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:03:42.381886 | orchestrator | 2025-06-22 20:03:42.381897 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2025-06-22 20:03:42.381908 | orchestrator | Sunday 22 June 2025 20:01:37 +0000 (0:00:00.305) 0:00:03.273 *********** 2025-06-22 20:03:42.381919 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:03:42.381930 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:03:42.381941 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:03:42.381952 | orchestrator | 2025-06-22 20:03:42.381964 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2025-06-22 20:03:42.381975 | orchestrator | Sunday 22 June 2025 20:01:37 +0000 (0:00:00.280) 0:00:03.553 *********** 2025-06-22 20:03:42.381986 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:03:42.381997 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:03:42.382008 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:03:42.382076 | orchestrator | 2025-06-22 20:03:42.382090 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2025-06-22 20:03:42.382101 | orchestrator | Sunday 22 June 2025 20:01:37 +0000 (0:00:00.294) 0:00:03.847 *********** 2025-06-22 20:03:42.382112 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:03:42.382124 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:03:42.382169 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:03:42.382181 | orchestrator | 2025-06-22 20:03:42.382192 | 
orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2025-06-22 20:03:42.382203 | orchestrator | Sunday 22 June 2025 20:01:38 +0000 (0:00:00.474) 0:00:04.322 *********** 2025-06-22 20:03:42.382214 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:03:42.382225 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:03:42.382324 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:03:42.382339 | orchestrator | 2025-06-22 20:03:42.382350 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2025-06-22 20:03:42.382362 | orchestrator | Sunday 22 June 2025 20:01:38 +0000 (0:00:00.295) 0:00:04.617 *********** 2025-06-22 20:03:42.382373 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-06-22 20:03:42.382384 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-22 20:03:42.382395 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-22 20:03:42.382406 | orchestrator | 2025-06-22 20:03:42.382417 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2025-06-22 20:03:42.382428 | orchestrator | Sunday 22 June 2025 20:01:39 +0000 (0:00:00.601) 0:00:05.219 *********** 2025-06-22 20:03:42.382438 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:03:42.382449 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:03:42.382460 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:03:42.382471 | orchestrator | 2025-06-22 20:03:42.382482 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2025-06-22 20:03:42.382493 | orchestrator | Sunday 22 June 2025 20:01:39 +0000 (0:00:00.415) 0:00:05.635 *********** 2025-06-22 20:03:42.382504 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-06-22 20:03:42.382515 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-22 20:03:42.382525 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-22 20:03:42.382536 | orchestrator | 2025-06-22 20:03:42.382547 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2025-06-22 20:03:42.382568 | orchestrator | Sunday 22 June 2025 20:01:41 +0000 (0:00:02.178) 0:00:07.814 *********** 2025-06-22 20:03:42.382580 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-06-22 20:03:42.382591 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-06-22 20:03:42.382602 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-06-22 20:03:42.382613 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:03:42.382623 | orchestrator | 2025-06-22 20:03:42.382634 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2025-06-22 20:03:42.382665 | orchestrator | Sunday 22 June 2025 20:01:42 +0000 (0:00:00.427) 0:00:08.241 *********** 2025-06-22 20:03:42.382680 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-06-22 20:03:42.382693 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional 
result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-06-22 20:03:42.382704 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-06-22 20:03:42.382716 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:03:42.382748 | orchestrator | 2025-06-22 20:03:42.382760 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2025-06-22 20:03:42.382771 | orchestrator | Sunday 22 June 2025 20:01:42 +0000 (0:00:00.753) 0:00:08.994 *********** 2025-06-22 20:03:42.382785 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-06-22 20:03:42.382799 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-06-22 20:03:42.382824 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-06-22 20:03:42.382836 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:03:42.382847 | orchestrator | 2025-06-22 20:03:42.382858 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2025-06-22 20:03:42.382870 | orchestrator | Sunday 22 June 2025 20:01:43 +0000 (0:00:00.167) 0:00:09.161 *********** 2025-06-22 20:03:42.382882 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'b29e59de2599', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-06-22 20:01:40.266432', 'end': '2025-06-22 20:01:40.301002', 'delta': '0:00:00.034570', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['b29e59de2599'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-06-22 20:03:42.382944 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '44161ea800c5', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-06-22 20:01:41.102078', 'end': '2025-06-22 
20:01:41.133763', 'delta': '0:00:00.031685', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['44161ea800c5'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-06-22 20:03:42.383027 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '36d6c939f330', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-06-22 20:01:41.633548', 'end': '2025-06-22 20:01:41.674019', 'delta': '0:00:00.040471', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['36d6c939f330'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-06-22 20:03:42.383043 | orchestrator | 2025-06-22 20:03:42.383055 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2025-06-22 20:03:42.383066 | orchestrator | Sunday 22 June 2025 20:01:43 +0000 (0:00:00.344) 0:00:09.506 *********** 2025-06-22 20:03:42.383077 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:03:42.383088 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:03:42.383099 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:03:42.383110 | orchestrator | 2025-06-22 20:03:42.383121 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2025-06-22 20:03:42.383157 | orchestrator | Sunday 22 June 2025 20:01:43 +0000 (0:00:00.444) 0:00:09.950 *********** 2025-06-22 20:03:42.383177 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-06-22 20:03:42.383197 | orchestrator | 2025-06-22 20:03:42.383217 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2025-06-22 20:03:42.383236 | orchestrator | Sunday 22 June 2025 20:01:45 +0000 (0:00:01.691) 0:00:11.641 *********** 2025-06-22 20:03:42.383247 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:03:42.383258 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:03:42.383269 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:03:42.383280 | orchestrator | 2025-06-22 20:03:42.383290 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2025-06-22 20:03:42.383301 | orchestrator | Sunday 22 June 2025 20:01:45 +0000 (0:00:00.306) 0:00:11.948 *********** 2025-06-22 20:03:42.383312 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:03:42.383323 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:03:42.383334 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:03:42.383345 | orchestrator | 2025-06-22 20:03:42.383356 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-06-22 20:03:42.383376 | orchestrator | Sunday 22 June 2025 20:01:46 +0000 (0:00:00.400) 0:00:12.349 *********** 2025-06-22 20:03:42.383403 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:03:42.383426 
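For context on the "Set_fact running_mon - container" and "Set_fact _container_exec_cmd" steps above: the role probes each monitor host with the exact command shown in the output, docker ps -q --filter name=ceph-mon-<hostname>, and treats a non-empty container ID as proof that the monitor is up. The following Python sketch mirrors that check; find_running_mon and container_exec_cmd are hypothetical names for illustration only, and the real role may choose the host differently.

    import subprocess

    def find_running_mon(mon_hosts):
        """Return the first monitor host whose ceph-mon container is running, if any."""
        for host in mon_hosts:
            result = subprocess.run(
                ["docker", "ps", "-q", "--filter", f"name=ceph-mon-{host}"],
                capture_output=True, text=True, check=False,
            )
            if result.stdout.strip():  # a container ID such as 'b29e59de2599' above
                return host
        return None

    running_mon = find_running_mon(["testbed-node-0", "testbed-node-1", "testbed-node-2"])
    # A container exec prefix comparable to _container_exec_cmd would then be:
    container_exec_cmd = f"docker exec ceph-mon-{running_mon}" if running_mon else ""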
| orchestrator | skipping: [testbed-node-4] 2025-06-22 20:03:42.383444 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:03:42.383462 | orchestrator | 2025-06-22 20:03:42.383478 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2025-06-22 20:03:42.383512 | orchestrator | Sunday 22 June 2025 20:01:46 +0000 (0:00:00.460) 0:00:12.809 *********** 2025-06-22 20:03:42.383531 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:03:42.383550 | orchestrator | 2025-06-22 20:03:42.383569 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2025-06-22 20:03:42.383586 | orchestrator | Sunday 22 June 2025 20:01:46 +0000 (0:00:00.140) 0:00:12.949 *********** 2025-06-22 20:03:42.383604 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:03:42.383621 | orchestrator | 2025-06-22 20:03:42.383641 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-06-22 20:03:42.383660 | orchestrator | Sunday 22 June 2025 20:01:47 +0000 (0:00:00.244) 0:00:13.194 *********** 2025-06-22 20:03:42.383679 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:03:42.383697 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:03:42.383716 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:03:42.383733 | orchestrator | 2025-06-22 20:03:42.383752 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2025-06-22 20:03:42.383770 | orchestrator | Sunday 22 June 2025 20:01:47 +0000 (0:00:00.270) 0:00:13.464 *********** 2025-06-22 20:03:42.383791 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:03:42.383809 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:03:42.383827 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:03:42.383845 | orchestrator | 2025-06-22 20:03:42.383864 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2025-06-22 20:03:42.383884 | orchestrator | Sunday 22 June 2025 20:01:47 +0000 (0:00:00.311) 0:00:13.776 *********** 2025-06-22 20:03:42.383902 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:03:42.383921 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:03:42.383940 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:03:42.383959 | orchestrator | 2025-06-22 20:03:42.383978 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2025-06-22 20:03:42.383997 | orchestrator | Sunday 22 June 2025 20:01:48 +0000 (0:00:00.465) 0:00:14.241 *********** 2025-06-22 20:03:42.384016 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:03:42.384036 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:03:42.384098 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:03:42.384118 | orchestrator | 2025-06-22 20:03:42.384162 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2025-06-22 20:03:42.384183 | orchestrator | Sunday 22 June 2025 20:01:48 +0000 (0:00:00.284) 0:00:14.525 *********** 2025-06-22 20:03:42.384202 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:03:42.384221 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:03:42.384240 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:03:42.384261 | orchestrator | 2025-06-22 20:03:42.384281 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2025-06-22 20:03:42.384301 | orchestrator | Sunday 22 June 2025 
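The fsid handling above follows a reuse-or-generate pattern: because a cluster is already running, the current fsid is read back ("Get current fsid if cluster is already running" followed by "Set_fact fsid from current_fsid") and "Generate cluster fsid" is skipped. A short sketch of that decision, under the assumption that the probe is a ceph fsid call through the container exec prefix (the exact command is not shown in this excerpt):

    import subprocess
    import uuid

    def cluster_fsid(container_exec_cmd):
        """Reuse the fsid of an already-running cluster, otherwise generate a fresh one."""
        probe = subprocess.run(
            container_exec_cmd.split() + ["ceph", "--connect-timeout", "5", "fsid"],
            capture_output=True, text=True, check=False,
        )
        if probe.returncode == 0 and probe.stdout.strip():
            return probe.stdout.strip()   # the "Set_fact fsid from current_fsid" path taken here
        return str(uuid.uuid4())          # the "Generate cluster fsid" path, skipped in this run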
20:01:48 +0000 (0:00:00.307) 0:00:14.833 *********** 2025-06-22 20:03:42.384319 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:03:42.384339 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:03:42.384359 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:03:42.384377 | orchestrator | 2025-06-22 20:03:42.384436 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-06-22 20:03:42.384477 | orchestrator | Sunday 22 June 2025 20:01:49 +0000 (0:00:00.338) 0:00:15.171 *********** 2025-06-22 20:03:42.384500 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:03:42.384521 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:03:42.384541 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:03:42.384561 | orchestrator | 2025-06-22 20:03:42.384583 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-06-22 20:03:42.384601 | orchestrator | Sunday 22 June 2025 20:01:49 +0000 (0:00:00.483) 0:00:15.655 *********** 2025-06-22 20:03:42.384625 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ffee4eed--4396--59ea--b922--2a73e3bf4ca0-osd--block--ffee4eed--4396--59ea--b922--2a73e3bf4ca0', 'dm-uuid-LVM-q2pqJiTJaKBtTVURKf2CkZXsa09xcwZHqRoverYWgifQ0qT3WozkxbpGh0BMI5p0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-22 20:03:42.384664 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a67f9737--0c9f--5177--b2d5--f4c811291d8a-osd--block--a67f9737--0c9f--5177--b2d5--f4c811291d8a', 'dm-uuid-LVM-x3TzDx78V1LNDIMuCE2wpqhclwbaOD1boL3DxJZRH4sANkbSfFs0yaFLMzXYB0Md'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-22 20:03:42.384685 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:03:42.384707 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:03:42.384756 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
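The "Resolve device link(s)", dedicated_device and bluestore_wal_device tasks above are skipped in this run; their purpose, when they do run, is to turn stable /dev/disk/by-id or /dev/disk/by-path symlinks from the configuration into the underlying block devices before the device lists are built. A minimal sketch (resolve_device_links is a hypothetical helper for illustration):

    import os

    def resolve_device_links(devices):
        """Resolve by-id/by-path symlinks to their underlying block devices."""
        return [os.path.realpath(dev) for dev in devices]

    resolve_device_links(
        ["/dev/disk/by-id/scsi-SQEMU_QEMU_HARDDISK_67ec265c-9b93-46b0-85f4-348a71cc884e"]
    )
    # -> ['/dev/sdd'] on testbed-node-3, matching the by-id links in the facts below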
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:03:42.384776 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:03:42.384795 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:03:42.384833 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:03:42.384856 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:03:42.384887 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:03:42.384913 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2156dda8-7e6f-4624-a0c0-e6117c9c49b9', 'scsi-SQEMU_QEMU_HARDDISK_2156dda8-7e6f-4624-a0c0-e6117c9c49b9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2156dda8-7e6f-4624-a0c0-e6117c9c49b9-part1', 'scsi-SQEMU_QEMU_HARDDISK_2156dda8-7e6f-4624-a0c0-e6117c9c49b9-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2156dda8-7e6f-4624-a0c0-e6117c9c49b9-part14', 'scsi-SQEMU_QEMU_HARDDISK_2156dda8-7e6f-4624-a0c0-e6117c9c49b9-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2156dda8-7e6f-4624-a0c0-e6117c9c49b9-part15', 'scsi-SQEMU_QEMU_HARDDISK_2156dda8-7e6f-4624-a0c0-e6117c9c49b9-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2156dda8-7e6f-4624-a0c0-e6117c9c49b9-part16', 'scsi-SQEMU_QEMU_HARDDISK_2156dda8-7e6f-4624-a0c0-e6117c9c49b9-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 20:03:42.384937 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--420ac1c2--ff56--5c56--8dd6--abe068aa03ad-osd--block--420ac1c2--ff56--5c56--8dd6--abe068aa03ad', 'dm-uuid-LVM-Y0kAmfU0NiERO7ebir8pzkuzt2pW1JrMxsV0ytwr7zHo2RM0YYQz16v8e9RQDgtI'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-22 20:03:42.384974 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--ffee4eed--4396--59ea--b922--2a73e3bf4ca0-osd--block--ffee4eed--4396--59ea--b922--2a73e3bf4ca0'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-IuxseD-Xfw3-r21F-YeXt-Y3UB-qXCE-FN37f3', 'scsi-0QEMU_QEMU_HARDDISK_78e15a4e-0b6b-4de0-bd2a-417fc55af8a3', 'scsi-SQEMU_QEMU_HARDDISK_78e15a4e-0b6b-4de0-bd2a-417fc55af8a3'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 20:03:42.385008 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--21b37dc5--48e7--5a6c--9835--121dab35d047-osd--block--21b37dc5--48e7--5a6c--9835--121dab35d047', 'dm-uuid-LVM-KGbfhTUxEBsRwgzYnyRGH4H2jMRLdkjPg0mCD0S4SqmIqU3H31pqPvAoyu6KWzoa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-22 20:03:42.385029 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--a67f9737--0c9f--5177--b2d5--f4c811291d8a-osd--block--a67f9737--0c9f--5177--b2d5--f4c811291d8a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-GsgXLL-fkdW-UDso-mQAJ-mxEf-DjtQ-wplTn3', 'scsi-0QEMU_QEMU_HARDDISK_0d04e2ba-3abe-44e6-a0ea-4a597e46ae81', 'scsi-SQEMU_QEMU_HARDDISK_0d04e2ba-3abe-44e6-a0ea-4a597e46ae81'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 20:03:42.385049 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:03:42.385071 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_67ec265c-9b93-46b0-85f4-348a71cc884e', 'scsi-SQEMU_QEMU_HARDDISK_67ec265c-9b93-46b0-85f4-348a71cc884e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 20:03:42.385092 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-22-19-10-40-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 20:03:42.385113 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:03:42.385209 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:03:42.385256 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:03:42.385279 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:03:42.385298 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:03:42.385318 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:03:42.385339 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:03:42.385357 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:03:42.385397 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_81f2e499-4268-4bd5-a5ff-46d49ba2fab9', 'scsi-SQEMU_QEMU_HARDDISK_81f2e499-4268-4bd5-a5ff-46d49ba2fab9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_81f2e499-4268-4bd5-a5ff-46d49ba2fab9-part1', 'scsi-SQEMU_QEMU_HARDDISK_81f2e499-4268-4bd5-a5ff-46d49ba2fab9-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_81f2e499-4268-4bd5-a5ff-46d49ba2fab9-part14', 'scsi-SQEMU_QEMU_HARDDISK_81f2e499-4268-4bd5-a5ff-46d49ba2fab9-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_81f2e499-4268-4bd5-a5ff-46d49ba2fab9-part15', 'scsi-SQEMU_QEMU_HARDDISK_81f2e499-4268-4bd5-a5ff-46d49ba2fab9-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_81f2e499-4268-4bd5-a5ff-46d49ba2fab9-part16', 'scsi-SQEMU_QEMU_HARDDISK_81f2e499-4268-4bd5-a5ff-46d49ba2fab9-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 20:03:42.385432 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--420ac1c2--ff56--5c56--8dd6--abe068aa03ad-osd--block--420ac1c2--ff56--5c56--8dd6--abe068aa03ad'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-sbJf2j-kvL6-D9Rj-g0g6-1ewV-GQ1i-ToEso7', 'scsi-0QEMU_QEMU_HARDDISK_1702d6d9-f6d5-467e-9c44-3c93c3ac891d', 'scsi-SQEMU_QEMU_HARDDISK_1702d6d9-f6d5-467e-9c44-3c93c3ac891d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 20:03:42.385453 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--21b37dc5--48e7--5a6c--9835--121dab35d047-osd--block--21b37dc5--48e7--5a6c--9835--121dab35d047'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-npdezT-nudS-n4QM-0RVM-No3f-37Al-1TAEZN', 'scsi-0QEMU_QEMU_HARDDISK_a49b6e77-acd0-4f36-887b-4e4ec75cdfa4', 'scsi-SQEMU_QEMU_HARDDISK_a49b6e77-acd0-4f36-887b-4e4ec75cdfa4'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 20:03:42.385474 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3108d6cc--64da--58c4--8e22--262ec3caa421-osd--block--3108d6cc--64da--58c4--8e22--262ec3caa421', 'dm-uuid-LVM-e5DdA2Z5zV4nVqLZXYU1m9FdEuPJHovfpODsitfcXz282rKkjlJ6PtJholW3GnT0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-22 20:03:42.385494 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bbdef6ad-891d-4656-ac9b-bc24d19b561e', 'scsi-SQEMU_QEMU_HARDDISK_bbdef6ad-891d-4656-ac9b-bc24d19b561e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 20:03:42.385515 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--39fb6ae0--c3e6--59b9--8b54--9251bb7c5136-osd--block--39fb6ae0--c3e6--59b9--8b54--9251bb7c5136', 'dm-uuid-LVM-QvjYzXAhLLqss15EXzQLnxByVXB2B3Avm21BSOWD1Pj2v7DyYWPJ4bc0YTU2RwoR'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-06-22 20:03:42.385551 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-22-19-10-42-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 20:03:42.385583 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:03:42.385604 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:03:42.385623 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:03:42.385641 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:03:42.385660 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:03:42.385680 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:03:42.385701 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:03:42.385720 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:03:42.385739 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-06-22 20:03:42.385815 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_abc8cf8e-645f-44ba-8ef9-2fedd7dd22d1', 'scsi-SQEMU_QEMU_HARDDISK_abc8cf8e-645f-44ba-8ef9-2fedd7dd22d1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_abc8cf8e-645f-44ba-8ef9-2fedd7dd22d1-part1', 'scsi-SQEMU_QEMU_HARDDISK_abc8cf8e-645f-44ba-8ef9-2fedd7dd22d1-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_abc8cf8e-645f-44ba-8ef9-2fedd7dd22d1-part14', 'scsi-SQEMU_QEMU_HARDDISK_abc8cf8e-645f-44ba-8ef9-2fedd7dd22d1-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_abc8cf8e-645f-44ba-8ef9-2fedd7dd22d1-part15', 'scsi-SQEMU_QEMU_HARDDISK_abc8cf8e-645f-44ba-8ef9-2fedd7dd22d1-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_abc8cf8e-645f-44ba-8ef9-2fedd7dd22d1-part16', 'scsi-SQEMU_QEMU_HARDDISK_abc8cf8e-645f-44ba-8ef9-2fedd7dd22d1-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 20:03:42.385838 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--3108d6cc--64da--58c4--8e22--262ec3caa421-osd--block--3108d6cc--64da--58c4--8e22--262ec3caa421'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-znpq1l-mumb-iL0o-0L13-f9Ix-i3q0-hh06ar', 'scsi-0QEMU_QEMU_HARDDISK_b25991b3-37fd-407a-b13b-d136271ca727', 'scsi-SQEMU_QEMU_HARDDISK_b25991b3-37fd-407a-b13b-d136271ca727'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 20:03:42.385856 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--39fb6ae0--c3e6--59b9--8b54--9251bb7c5136-osd--block--39fb6ae0--c3e6--59b9--8b54--9251bb7c5136'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-hjCmUu-UDSS-ItuD-E86h-K8ZU-GPJi-vSzINW', 'scsi-0QEMU_QEMU_HARDDISK_71e43d47-057b-4609-853f-9ccf72c5a295', 'scsi-SQEMU_QEMU_HARDDISK_71e43d47-057b-4609-853f-9ccf72c5a295'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 20:03:42.385876 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_61868cbd-84da-463e-9017-284301fda41a', 'scsi-SQEMU_QEMU_HARDDISK_61868cbd-84da-463e-9017-284301fda41a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 20:03:42.385919 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-22-19-10-47-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-06-22 20:03:42.385941 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:03:42.385961 | orchestrator | 2025-06-22 20:03:42.385981 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-06-22 20:03:42.385999 | orchestrator | Sunday 22 June 2025 20:01:50 +0000 (0:00:00.564) 0:00:16.220 *********** 2025-06-22 20:03:42.386109 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--ffee4eed--4396--59ea--b922--2a73e3bf4ca0-osd--block--ffee4eed--4396--59ea--b922--2a73e3bf4ca0', 'dm-uuid-LVM-q2pqJiTJaKBtTVURKf2CkZXsa09xcwZHqRoverYWgifQ0qT3WozkxbpGh0BMI5p0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:03:42.386160 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a67f9737--0c9f--5177--b2d5--f4c811291d8a-osd--block--a67f9737--0c9f--5177--b2d5--f4c811291d8a', 'dm-uuid-LVM-x3TzDx78V1LNDIMuCE2wpqhclwbaOD1boL3DxJZRH4sANkbSfFs0yaFLMzXYB0Md'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:03:42.386182 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:03:42.386203 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:03:42.386238 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:03:42.386278 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:03:42.386300 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:03:42.386320 | orchestrator | skipping: [testbed-node-3] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:03:42.386341 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:03:42.386360 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--420ac1c2--ff56--5c56--8dd6--abe068aa03ad-osd--block--420ac1c2--ff56--5c56--8dd6--abe068aa03ad', 'dm-uuid-LVM-Y0kAmfU0NiERO7ebir8pzkuzt2pW1JrMxsV0ytwr7zHo2RM0YYQz16v8e9RQDgtI'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:03:42.386392 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:03:42.386428 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--21b37dc5--48e7--5a6c--9835--121dab35d047-osd--block--21b37dc5--48e7--5a6c--9835--121dab35d047', 'dm-uuid-LVM-KGbfhTUxEBsRwgzYnyRGH4H2jMRLdkjPg0mCD0S4SqmIqU3H31pqPvAoyu6KWzoa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:03:42.386451 | 
orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2156dda8-7e6f-4624-a0c0-e6117c9c49b9', 'scsi-SQEMU_QEMU_HARDDISK_2156dda8-7e6f-4624-a0c0-e6117c9c49b9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2156dda8-7e6f-4624-a0c0-e6117c9c49b9-part1', 'scsi-SQEMU_QEMU_HARDDISK_2156dda8-7e6f-4624-a0c0-e6117c9c49b9-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2156dda8-7e6f-4624-a0c0-e6117c9c49b9-part14', 'scsi-SQEMU_QEMU_HARDDISK_2156dda8-7e6f-4624-a0c0-e6117c9c49b9-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2156dda8-7e6f-4624-a0c0-e6117c9c49b9-part15', 'scsi-SQEMU_QEMU_HARDDISK_2156dda8-7e6f-4624-a0c0-e6117c9c49b9-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2156dda8-7e6f-4624-a0c0-e6117c9c49b9-part16', 'scsi-SQEMU_QEMU_HARDDISK_2156dda8-7e6f-4624-a0c0-e6117c9c49b9-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:03:42.386473 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:03:42.386510 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--ffee4eed--4396--59ea--b922--2a73e3bf4ca0-osd--block--ffee4eed--4396--59ea--b922--2a73e3bf4ca0'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-IuxseD-Xfw3-r21F-YeXt-Y3UB-qXCE-FN37f3', 'scsi-0QEMU_QEMU_HARDDISK_78e15a4e-0b6b-4de0-bd2a-417fc55af8a3', 'scsi-SQEMU_QEMU_HARDDISK_78e15a4e-0b6b-4de0-bd2a-417fc55af8a3'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:03:42.386543 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:03:42.386564 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--a67f9737--0c9f--5177--b2d5--f4c811291d8a-osd--block--a67f9737--0c9f--5177--b2d5--f4c811291d8a'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-GsgXLL-fkdW-UDso-mQAJ-mxEf-DjtQ-wplTn3', 'scsi-0QEMU_QEMU_HARDDISK_0d04e2ba-3abe-44e6-a0ea-4a597e46ae81', 'scsi-SQEMU_QEMU_HARDDISK_0d04e2ba-3abe-44e6-a0ea-4a597e46ae81'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:03:42.386578 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:03:42.386590 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_67ec265c-9b93-46b0-85f4-348a71cc884e', 'scsi-SQEMU_QEMU_HARDDISK_67ec265c-9b93-46b0-85f4-348a71cc884e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:03:42.386612 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-22-19-10-40-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:03:42.386637 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:03:42.386649 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:03:42.386660 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:03:42.386672 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:03:42.386685 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:03:42.386705 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:03:42.386750 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_81f2e499-4268-4bd5-a5ff-46d49ba2fab9', 'scsi-SQEMU_QEMU_HARDDISK_81f2e499-4268-4bd5-a5ff-46d49ba2fab9'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_81f2e499-4268-4bd5-a5ff-46d49ba2fab9-part1', 'scsi-SQEMU_QEMU_HARDDISK_81f2e499-4268-4bd5-a5ff-46d49ba2fab9-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_81f2e499-4268-4bd5-a5ff-46d49ba2fab9-part14', 'scsi-SQEMU_QEMU_HARDDISK_81f2e499-4268-4bd5-a5ff-46d49ba2fab9-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_81f2e499-4268-4bd5-a5ff-46d49ba2fab9-part15', 'scsi-SQEMU_QEMU_HARDDISK_81f2e499-4268-4bd5-a5ff-46d49ba2fab9-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_81f2e499-4268-4bd5-a5ff-46d49ba2fab9-part16', 'scsi-SQEMU_QEMU_HARDDISK_81f2e499-4268-4bd5-a5ff-46d49ba2fab9-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:03:42.386771 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | 
default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--420ac1c2--ff56--5c56--8dd6--abe068aa03ad-osd--block--420ac1c2--ff56--5c56--8dd6--abe068aa03ad'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-sbJf2j-kvL6-D9Rj-g0g6-1ewV-GQ1i-ToEso7', 'scsi-0QEMU_QEMU_HARDDISK_1702d6d9-f6d5-467e-9c44-3c93c3ac891d', 'scsi-SQEMU_QEMU_HARDDISK_1702d6d9-f6d5-467e-9c44-3c93c3ac891d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:03:42.386790 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3108d6cc--64da--58c4--8e22--262ec3caa421-osd--block--3108d6cc--64da--58c4--8e22--262ec3caa421', 'dm-uuid-LVM-e5DdA2Z5zV4nVqLZXYU1m9FdEuPJHovfpODsitfcXz282rKkjlJ6PtJholW3GnT0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:03:42.386817 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--21b37dc5--48e7--5a6c--9835--121dab35d047-osd--block--21b37dc5--48e7--5a6c--9835--121dab35d047'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-npdezT-nudS-n4QM-0RVM-No3f-37Al-1TAEZN', 'scsi-0QEMU_QEMU_HARDDISK_a49b6e77-acd0-4f36-887b-4e4ec75cdfa4', 'scsi-SQEMU_QEMU_HARDDISK_a49b6e77-acd0-4f36-887b-4e4ec75cdfa4'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:03:42.386848 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_bbdef6ad-891d-4656-ac9b-bc24d19b561e', 'scsi-SQEMU_QEMU_HARDDISK_bbdef6ad-891d-4656-ac9b-bc24d19b561e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:03:42.386868 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-22-19-10-42-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:03:42.386886 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--39fb6ae0--c3e6--59b9--8b54--9251bb7c5136-osd--block--39fb6ae0--c3e6--59b9--8b54--9251bb7c5136', 'dm-uuid-LVM-QvjYzXAhLLqss15EXzQLnxByVXB2B3Avm21BSOWD1Pj2v7DyYWPJ4bc0YTU2RwoR'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:03:42.386902 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:03:42.386921 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:03:42.386947 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:03:42.386966 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:03:42.386998 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:03:42.387018 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:03:42.387064 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:03:42.387082 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:03:42.387111 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': 
'0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:03:42.387170 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_abc8cf8e-645f-44ba-8ef9-2fedd7dd22d1', 'scsi-SQEMU_QEMU_HARDDISK_abc8cf8e-645f-44ba-8ef9-2fedd7dd22d1'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_abc8cf8e-645f-44ba-8ef9-2fedd7dd22d1-part1', 'scsi-SQEMU_QEMU_HARDDISK_abc8cf8e-645f-44ba-8ef9-2fedd7dd22d1-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_abc8cf8e-645f-44ba-8ef9-2fedd7dd22d1-part14', 'scsi-SQEMU_QEMU_HARDDISK_abc8cf8e-645f-44ba-8ef9-2fedd7dd22d1-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_abc8cf8e-645f-44ba-8ef9-2fedd7dd22d1-part15', 'scsi-SQEMU_QEMU_HARDDISK_abc8cf8e-645f-44ba-8ef9-2fedd7dd22d1-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_abc8cf8e-645f-44ba-8ef9-2fedd7dd22d1-part16', 'scsi-SQEMU_QEMU_HARDDISK_abc8cf8e-645f-44ba-8ef9-2fedd7dd22d1-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:03:42.387194 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--3108d6cc--64da--58c4--8e22--262ec3caa421-osd--block--3108d6cc--64da--58c4--8e22--262ec3caa421'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-znpq1l-mumb-iL0o-0L13-f9Ix-i3q0-hh06ar', 'scsi-0QEMU_QEMU_HARDDISK_b25991b3-37fd-407a-b13b-d136271ca727', 'scsi-SQEMU_QEMU_HARDDISK_b25991b3-37fd-407a-b13b-d136271ca727'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:03:42.387223 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--39fb6ae0--c3e6--59b9--8b54--9251bb7c5136-osd--block--39fb6ae0--c3e6--59b9--8b54--9251bb7c5136'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-hjCmUu-UDSS-ItuD-E86h-K8ZU-GPJi-vSzINW', 'scsi-0QEMU_QEMU_HARDDISK_71e43d47-057b-4609-853f-9ccf72c5a295', 'scsi-SQEMU_QEMU_HARDDISK_71e43d47-057b-4609-853f-9ccf72c5a295'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:03:42.387241 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_61868cbd-84da-463e-9017-284301fda41a', 'scsi-SQEMU_QEMU_HARDDISK_61868cbd-84da-463e-9017-284301fda41a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:03:42.387279 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-06-22-19-10-47-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-06-22 20:03:42.387298 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:03:42.387316 | orchestrator | 2025-06-22 20:03:42.387332 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2025-06-22 20:03:42.387349 | orchestrator | Sunday 22 June 2025 20:01:50 +0000 (0:00:00.666) 0:00:16.886 *********** 2025-06-22 20:03:42.387367 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:03:42.387383 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:03:42.387400 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:03:42.387418 | orchestrator | 2025-06-22 20:03:42.387434 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2025-06-22 20:03:42.387450 | orchestrator | Sunday 22 June 2025 20:01:51 +0000 (0:00:00.770) 0:00:17.656 *********** 2025-06-22 20:03:42.387466 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:03:42.387483 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:03:42.387500 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:03:42.387517 | orchestrator | 2025-06-22 20:03:42.387535 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-06-22 20:03:42.387551 | orchestrator | Sunday 22 June 2025 20:01:52 +0000 (0:00:00.453) 0:00:18.110 *********** 2025-06-22 20:03:42.387568 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:03:42.387585 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:03:42.387600 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:03:42.387617 | orchestrator | 2025-06-22 20:03:42.387635 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-06-22 20:03:42.387663 | orchestrator | Sunday 22 June 2025 20:01:52 +0000 (0:00:00.684) 0:00:18.794 *********** 2025-06-22 20:03:42.387673 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:03:42.387683 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:03:42.387693 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:03:42.387703 | orchestrator | 2025-06-22 20:03:42.387713 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-06-22 20:03:42.387722 | orchestrator | Sunday 22 June 2025 
20:01:53 +0000 (0:00:00.299) 0:00:19.093 *********** 2025-06-22 20:03:42.387732 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:03:42.387742 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:03:42.387751 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:03:42.387761 | orchestrator | 2025-06-22 20:03:42.387770 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-06-22 20:03:42.387780 | orchestrator | Sunday 22 June 2025 20:01:53 +0000 (0:00:00.403) 0:00:19.496 *********** 2025-06-22 20:03:42.387796 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:03:42.387811 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:03:42.387827 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:03:42.387844 | orchestrator | 2025-06-22 20:03:42.387862 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2025-06-22 20:03:42.387879 | orchestrator | Sunday 22 June 2025 20:01:54 +0000 (0:00:00.516) 0:00:20.013 *********** 2025-06-22 20:03:42.387895 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-06-22 20:03:42.387911 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-06-22 20:03:42.387946 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-06-22 20:03:42.387965 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-06-22 20:03:42.387981 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-06-22 20:03:42.387998 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-06-22 20:03:42.388016 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-06-22 20:03:42.388031 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-06-22 20:03:42.388047 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-06-22 20:03:42.388083 | orchestrator | 2025-06-22 20:03:42.388101 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2025-06-22 20:03:42.388118 | orchestrator | Sunday 22 June 2025 20:01:55 +0000 (0:00:01.129) 0:00:21.143 *********** 2025-06-22 20:03:42.388249 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-06-22 20:03:42.388302 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-06-22 20:03:42.388319 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-06-22 20:03:42.388336 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:03:42.388352 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-06-22 20:03:42.388367 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-06-22 20:03:42.388382 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-06-22 20:03:42.388398 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:03:42.388416 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-06-22 20:03:42.388433 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-06-22 20:03:42.388450 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-06-22 20:03:42.388465 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:03:42.388482 | orchestrator | 2025-06-22 20:03:42.388499 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2025-06-22 20:03:42.388515 | orchestrator | Sunday 22 June 2025 20:01:55 +0000 (0:00:00.349) 0:00:21.493 *********** 2025-06-22 20:03:42.388532 | 
orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 20:03:42.388549 | orchestrator | 2025-06-22 20:03:42.388565 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-06-22 20:03:42.388603 | orchestrator | Sunday 22 June 2025 20:01:56 +0000 (0:00:00.668) 0:00:22.161 *********** 2025-06-22 20:03:42.388614 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:03:42.388624 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:03:42.388641 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:03:42.388651 | orchestrator | 2025-06-22 20:03:42.388674 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-06-22 20:03:42.388684 | orchestrator | Sunday 22 June 2025 20:01:56 +0000 (0:00:00.310) 0:00:22.472 *********** 2025-06-22 20:03:42.388694 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:03:42.388704 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:03:42.388714 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:03:42.388723 | orchestrator | 2025-06-22 20:03:42.388733 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-06-22 20:03:42.388743 | orchestrator | Sunday 22 June 2025 20:01:56 +0000 (0:00:00.308) 0:00:22.780 *********** 2025-06-22 20:03:42.388752 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:03:42.388762 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:03:42.388771 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:03:42.388781 | orchestrator | 2025-06-22 20:03:42.388791 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-06-22 20:03:42.388800 | orchestrator | Sunday 22 June 2025 20:01:57 +0000 (0:00:00.331) 0:00:23.112 *********** 2025-06-22 20:03:42.388810 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:03:42.388820 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:03:42.388829 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:03:42.388838 | orchestrator | 2025-06-22 20:03:42.388848 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-06-22 20:03:42.388858 | orchestrator | Sunday 22 June 2025 20:01:57 +0000 (0:00:00.623) 0:00:23.735 *********** 2025-06-22 20:03:42.388867 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-22 20:03:42.388877 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-22 20:03:42.388887 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-22 20:03:42.388896 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:03:42.388905 | orchestrator | 2025-06-22 20:03:42.388915 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-06-22 20:03:42.388925 | orchestrator | Sunday 22 June 2025 20:01:58 +0000 (0:00:00.371) 0:00:24.107 *********** 2025-06-22 20:03:42.388935 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-22 20:03:42.388944 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-22 20:03:42.388954 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-22 20:03:42.388963 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:03:42.388973 | orchestrator | 2025-06-22 20:03:42.388982 | orchestrator | TASK 
[ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-06-22 20:03:42.388992 | orchestrator | Sunday 22 June 2025 20:01:58 +0000 (0:00:00.423) 0:00:24.531 *********** 2025-06-22 20:03:42.389002 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-06-22 20:03:42.389011 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-06-22 20:03:42.389021 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-06-22 20:03:42.389031 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:03:42.389040 | orchestrator | 2025-06-22 20:03:42.389050 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-06-22 20:03:42.389060 | orchestrator | Sunday 22 June 2025 20:01:58 +0000 (0:00:00.376) 0:00:24.907 *********** 2025-06-22 20:03:42.389069 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:03:42.389079 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:03:42.389089 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:03:42.389098 | orchestrator | 2025-06-22 20:03:42.389115 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-06-22 20:03:42.389150 | orchestrator | Sunday 22 June 2025 20:01:59 +0000 (0:00:00.343) 0:00:25.251 *********** 2025-06-22 20:03:42.389178 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-06-22 20:03:42.389191 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-06-22 20:03:42.389205 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-06-22 20:03:42.389219 | orchestrator | 2025-06-22 20:03:42.389232 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2025-06-22 20:03:42.389245 | orchestrator | Sunday 22 June 2025 20:01:59 +0000 (0:00:00.484) 0:00:25.736 *********** 2025-06-22 20:03:42.389260 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-06-22 20:03:42.389274 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-22 20:03:42.389287 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-22 20:03:42.389300 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-06-22 20:03:42.389314 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-06-22 20:03:42.389326 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-06-22 20:03:42.389340 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-06-22 20:03:42.389354 | orchestrator | 2025-06-22 20:03:42.389367 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2025-06-22 20:03:42.389380 | orchestrator | Sunday 22 June 2025 20:02:00 +0000 (0:00:00.936) 0:00:26.672 *********** 2025-06-22 20:03:42.389394 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-06-22 20:03:42.389409 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-06-22 20:03:42.389423 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-06-22 20:03:42.389437 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-06-22 20:03:42.389450 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-06-22 
20:03:42.389463 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-06-22 20:03:42.389485 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-06-22 20:03:42.389500 | orchestrator | 2025-06-22 20:03:42.389522 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2025-06-22 20:03:42.389537 | orchestrator | Sunday 22 June 2025 20:02:02 +0000 (0:00:01.854) 0:00:28.526 *********** 2025-06-22 20:03:42.389552 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:03:42.389568 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:03:42.389582 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2025-06-22 20:03:42.389595 | orchestrator | 2025-06-22 20:03:42.389610 | orchestrator | TASK [create openstack pool(s)] ************************************************ 2025-06-22 20:03:42.389623 | orchestrator | Sunday 22 June 2025 20:02:02 +0000 (0:00:00.374) 0:00:28.901 *********** 2025-06-22 20:03:42.389638 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-06-22 20:03:42.389653 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-06-22 20:03:42.389668 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-06-22 20:03:42.389684 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-06-22 20:03:42.389692 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-06-22 20:03:42.389700 | orchestrator | 2025-06-22 20:03:42.389714 | orchestrator | TASK [generate keys] *********************************************************** 2025-06-22 20:03:42.389728 | orchestrator | Sunday 22 June 2025 20:02:48 +0000 (0:00:45.476) 0:01:14.378 *********** 2025-06-22 20:03:42.389741 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 20:03:42.389755 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 20:03:42.389769 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 20:03:42.389782 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 20:03:42.389796 | orchestrator | changed: [testbed-node-5 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 20:03:42.389808 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 20:03:42.389822 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2025-06-22 20:03:42.389835 | orchestrator | 2025-06-22 20:03:42.389847 | orchestrator | TASK [get keys from monitors] ************************************************** 2025-06-22 20:03:42.389862 | orchestrator | Sunday 22 June 2025 20:03:12 +0000 (0:00:24.013) 0:01:38.392 *********** 2025-06-22 20:03:42.389876 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 20:03:42.389889 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 20:03:42.389902 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 20:03:42.389916 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 20:03:42.389930 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 20:03:42.389944 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 20:03:42.389958 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-06-22 20:03:42.389971 | orchestrator | 2025-06-22 20:03:42.389984 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2025-06-22 20:03:42.389999 | orchestrator | Sunday 22 June 2025 20:03:24 +0000 (0:00:12.117) 0:01:50.509 *********** 2025-06-22 20:03:42.390013 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 20:03:42.390067 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-06-22 20:03:42.390083 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-06-22 20:03:42.390098 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 20:03:42.390113 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-06-22 20:03:42.390190 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-06-22 20:03:42.390222 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 20:03:42.390238 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-06-22 20:03:42.390253 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-06-22 20:03:42.390269 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 20:03:42.390293 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-06-22 20:03:42.390310 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-06-22 20:03:42.390326 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 20:03:42.390341 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-06-22 20:03:42.390357 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-06-22 20:03:42.390372 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-06-22 20:03:42.390388 | orchestrator | 
changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-06-22 20:03:42.390405 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-06-22 20:03:42.390420 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2025-06-22 20:03:42.390435 | orchestrator | 2025-06-22 20:03:42.390450 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 20:03:42.390464 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2025-06-22 20:03:42.390479 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-06-22 20:03:42.390492 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-06-22 20:03:42.390508 | orchestrator | 2025-06-22 20:03:42.390523 | orchestrator | 2025-06-22 20:03:42.390536 | orchestrator | 2025-06-22 20:03:42.390550 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 20:03:42.390565 | orchestrator | Sunday 22 June 2025 20:03:41 +0000 (0:00:17.081) 0:02:07.591 *********** 2025-06-22 20:03:42.390579 | orchestrator | =============================================================================== 2025-06-22 20:03:42.390594 | orchestrator | create openstack pool(s) ----------------------------------------------- 45.48s 2025-06-22 20:03:42.390610 | orchestrator | generate keys ---------------------------------------------------------- 24.01s 2025-06-22 20:03:42.390625 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.08s 2025-06-22 20:03:42.390639 | orchestrator | get keys from monitors ------------------------------------------------- 12.12s 2025-06-22 20:03:42.390653 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.18s 2025-06-22 20:03:42.390668 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.85s 2025-06-22 20:03:42.390683 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.69s 2025-06-22 20:03:42.390698 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 1.13s 2025-06-22 20:03:42.390711 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.94s 2025-06-22 20:03:42.390725 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.82s 2025-06-22 20:03:42.390740 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.77s 2025-06-22 20:03:42.390756 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.75s 2025-06-22 20:03:42.390770 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.69s 2025-06-22 20:03:42.390784 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.68s 2025-06-22 20:03:42.390798 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.67s 2025-06-22 20:03:42.390811 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.67s 2025-06-22 20:03:42.390824 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.62s 2025-06-22 20:03:42.390837 | orchestrator | ceph-facts : Set_fact monitor_name 
ansible_facts['hostname'] ------------ 0.60s 2025-06-22 20:03:42.390860 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.59s 2025-06-22 20:03:42.390874 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.56s 2025-06-22 20:03:42.390890 | orchestrator | 2025-06-22 20:03:42 | INFO  | Task d1070b7d-00f3-4007-b935-daeb69c70ff8 is in state SUCCESS 2025-06-22 20:03:42.390905 | orchestrator | 2025-06-22 20:03:42 | INFO  | Task 37e6b566-9d3f-4dc6-821f-05d6da068b54 is in state STARTED 2025-06-22 20:03:42.390920 | orchestrator | 2025-06-22 20:03:42 | INFO  | Task 35e7ac7a-81fa-417d-a203-78c2eb4c0a2b is in state STARTED 2025-06-22 20:03:42.390937 | orchestrator | 2025-06-22 20:03:42 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:03:45.439950 | orchestrator | 2025-06-22 20:03:45 | INFO  | Task 6dd39303-6da0-472f-9364-5cdcc2f82597 is in state STARTED 2025-06-22 20:03:45.441582 | orchestrator | 2025-06-22 20:03:45 | INFO  | Task 37e6b566-9d3f-4dc6-821f-05d6da068b54 is in state STARTED 2025-06-22 20:03:45.442949 | orchestrator | 2025-06-22 20:03:45 | INFO  | Task 35e7ac7a-81fa-417d-a203-78c2eb4c0a2b is in state STARTED 2025-06-22 20:03:45.443198 | orchestrator | 2025-06-22 20:03:45 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:03:48.488031 | orchestrator | 2025-06-22 20:03:48 | INFO  | Task 6dd39303-6da0-472f-9364-5cdcc2f82597 is in state STARTED 2025-06-22 20:03:48.490009 | orchestrator | 2025-06-22 20:03:48 | INFO  | Task 37e6b566-9d3f-4dc6-821f-05d6da068b54 is in state STARTED 2025-06-22 20:03:48.491468 | orchestrator | 2025-06-22 20:03:48 | INFO  | Task 35e7ac7a-81fa-417d-a203-78c2eb4c0a2b is in state STARTED 2025-06-22 20:03:48.491491 | orchestrator | 2025-06-22 20:03:48 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:03:51.532657 | orchestrator | 2025-06-22 20:03:51 | INFO  | Task 6dd39303-6da0-472f-9364-5cdcc2f82597 is in state STARTED 2025-06-22 20:03:51.537318 | orchestrator | 2025-06-22 20:03:51 | INFO  | Task 37e6b566-9d3f-4dc6-821f-05d6da068b54 is in state STARTED 2025-06-22 20:03:51.539276 | orchestrator | 2025-06-22 20:03:51 | INFO  | Task 35e7ac7a-81fa-417d-a203-78c2eb4c0a2b is in state STARTED 2025-06-22 20:03:51.539869 | orchestrator | 2025-06-22 20:03:51 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:03:54.599273 | orchestrator | 2025-06-22 20:03:54 | INFO  | Task 6dd39303-6da0-472f-9364-5cdcc2f82597 is in state STARTED 2025-06-22 20:03:54.601094 | orchestrator | 2025-06-22 20:03:54 | INFO  | Task 37e6b566-9d3f-4dc6-821f-05d6da068b54 is in state STARTED 2025-06-22 20:03:54.604718 | orchestrator | 2025-06-22 20:03:54 | INFO  | Task 35e7ac7a-81fa-417d-a203-78c2eb4c0a2b is in state STARTED 2025-06-22 20:03:54.604767 | orchestrator | 2025-06-22 20:03:54 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:03:57.651066 | orchestrator | 2025-06-22 20:03:57 | INFO  | Task 6dd39303-6da0-472f-9364-5cdcc2f82597 is in state STARTED 2025-06-22 20:03:57.654901 | orchestrator | 2025-06-22 20:03:57 | INFO  | Task 37e6b566-9d3f-4dc6-821f-05d6da068b54 is in state STARTED 2025-06-22 20:03:57.657943 | orchestrator | 2025-06-22 20:03:57 | INFO  | Task 35e7ac7a-81fa-417d-a203-78c2eb4c0a2b is in state STARTED 2025-06-22 20:03:57.658003 | orchestrator | 2025-06-22 20:03:57 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:04:00.698377 | orchestrator | 2025-06-22 20:04:00 | INFO  | Task 6dd39303-6da0-472f-9364-5cdcc2f82597 is in state STARTED 
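
Editor's annotation: the long block of "skipping" lines earlier in this play comes from ceph-ansible evaluating the condition osd_auto_discovery | default(False) | bool once per discovered block device; because auto discovery is disabled in this deployment, every device item is skipped and the OSD devices are expected to be listed explicitly instead. A minimal, illustrative sketch of the corresponding inventory variables (variable names follow ceph-ansible conventions; the device paths are assumptions based on the 20 GB sdb/sdc disks shown above that already carry ceph-* LVM volumes):

    osd_auto_discovery: false   # false here, which is why every device item above is skipped
    devices:                    # explicit OSD device list used instead of auto discovery (assumed paths)
      - /dev/sdb
      - /dev/sdc
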
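Editor's annotation: the "create openstack pool(s)" task above loops over pool definitions whose fields are visible in the item dumps (name, application, pg_num, pgp_num, size, min_size, rule_name, pg_autoscale_mode, type). A hedged sketch of one such entry in ceph-ansible-style variables (the variable name openstack_pools is an assumption; the field values mirror the items logged for backups, volumes, images, metrics and vms):

    openstack_pools:
      - name: volumes               # one entry per pool created above
        application: rbd
        pg_num: 32
        pgp_num: 32
        size: 3                     # replica count
        min_size: 0                 # 0 = use the cluster default
        rule_name: replicated_rule
        pg_autoscale_mode: false
        erasure_profile: ""
        expected_num_objects: ""
        type: 1                     # 1 = replicated pool
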
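Editor's annotation: "generate keys", "get keys from monitors" and "copy ceph key(s) if needed" run with the individual loop items not displayed ((item=None)), but the pattern is the ceph-ansible openstack keys mechanism: cephx keys are created on the first monitor, fetched from it, and distributed to the other monitor nodes (testbed-node-0/1/2 above). A minimal sketch of one key definition (the variable name openstack_keys and the capability strings are assumptions; the client names match the keyrings fetched later in this log):

    openstack_keys:
      - name: client.glance         # also client.cinder, client.cinder-backup, client.nova,
                                    # client.gnocchi, client.manila in this deployment
        caps:
          mon: "profile rbd"
          osd: "profile rbd pool=images"   # assumed capabilities; adjust per pool
        mode: "0600"
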
2025-06-22 20:04:00.700312 | orchestrator | 2025-06-22 20:04:00 | INFO  | Task 37e6b566-9d3f-4dc6-821f-05d6da068b54 is in state STARTED 2025-06-22 20:04:00.701618 | orchestrator | 2025-06-22 20:04:00 | INFO  | Task 35e7ac7a-81fa-417d-a203-78c2eb4c0a2b is in state STARTED 2025-06-22 20:04:00.701739 | orchestrator | 2025-06-22 20:04:00 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:04:03.747447 | orchestrator | 2025-06-22 20:04:03 | INFO  | Task 6dd39303-6da0-472f-9364-5cdcc2f82597 is in state STARTED 2025-06-22 20:04:03.748784 | orchestrator | 2025-06-22 20:04:03 | INFO  | Task 37e6b566-9d3f-4dc6-821f-05d6da068b54 is in state STARTED 2025-06-22 20:04:03.751217 | orchestrator | 2025-06-22 20:04:03 | INFO  | Task 35e7ac7a-81fa-417d-a203-78c2eb4c0a2b is in state STARTED 2025-06-22 20:04:03.751326 | orchestrator | 2025-06-22 20:04:03 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:04:06.808632 | orchestrator | 2025-06-22 20:04:06 | INFO  | Task 6dd39303-6da0-472f-9364-5cdcc2f82597 is in state STARTED 2025-06-22 20:04:06.808729 | orchestrator | 2025-06-22 20:04:06 | INFO  | Task 37e6b566-9d3f-4dc6-821f-05d6da068b54 is in state STARTED 2025-06-22 20:04:06.808745 | orchestrator | 2025-06-22 20:04:06 | INFO  | Task 35e7ac7a-81fa-417d-a203-78c2eb4c0a2b is in state STARTED 2025-06-22 20:04:06.808757 | orchestrator | 2025-06-22 20:04:06 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:04:09.843241 | orchestrator | 2025-06-22 20:04:09 | INFO  | Task 6dd39303-6da0-472f-9364-5cdcc2f82597 is in state SUCCESS 2025-06-22 20:04:09.845019 | orchestrator | 2025-06-22 20:04:09 | INFO  | Task 37e6b566-9d3f-4dc6-821f-05d6da068b54 is in state STARTED 2025-06-22 20:04:09.847958 | orchestrator | 2025-06-22 20:04:09 | INFO  | Task 35e7ac7a-81fa-417d-a203-78c2eb4c0a2b is in state SUCCESS 2025-06-22 20:04:09.849965 | orchestrator | 2025-06-22 20:04:09.849990 | orchestrator | 2025-06-22 20:04:09.850003 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2025-06-22 20:04:09.850061 | orchestrator | 2025-06-22 20:04:09.850075 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2025-06-22 20:04:09.850086 | orchestrator | Sunday 22 June 2025 20:03:45 +0000 (0:00:00.159) 0:00:00.159 *********** 2025-06-22 20:04:09.850112 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2025-06-22 20:04:09.850125 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-06-22 20:04:09.850165 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-06-22 20:04:09.850186 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2025-06-22 20:04:09.850204 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-06-22 20:04:09.850220 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2025-06-22 20:04:09.850231 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2025-06-22 20:04:09.850242 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2025-06-22 20:04:09.850253 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => 
(item=ceph.client.manila.keyring) 2025-06-22 20:04:09.850264 | orchestrator | 2025-06-22 20:04:09.850275 | orchestrator | TASK [Create share directory] ************************************************** 2025-06-22 20:04:09.850286 | orchestrator | Sunday 22 June 2025 20:03:49 +0000 (0:00:03.972) 0:00:04.131 *********** 2025-06-22 20:04:09.850298 | orchestrator | changed: [testbed-manager -> localhost] 2025-06-22 20:04:09.850309 | orchestrator | 2025-06-22 20:04:09.850320 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2025-06-22 20:04:09.850331 | orchestrator | Sunday 22 June 2025 20:03:50 +0000 (0:00:00.957) 0:00:05.089 *********** 2025-06-22 20:04:09.850342 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2025-06-22 20:04:09.850353 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-06-22 20:04:09.850384 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-06-22 20:04:09.850395 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2025-06-22 20:04:09.850406 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-06-22 20:04:09.850416 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2025-06-22 20:04:09.850427 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2025-06-22 20:04:09.850438 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2025-06-22 20:04:09.850449 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2025-06-22 20:04:09.850459 | orchestrator | 2025-06-22 20:04:09.850470 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2025-06-22 20:04:09.850481 | orchestrator | Sunday 22 June 2025 20:04:02 +0000 (0:00:12.204) 0:00:17.294 *********** 2025-06-22 20:04:09.850492 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2025-06-22 20:04:09.850503 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-06-22 20:04:09.850514 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-06-22 20:04:09.850525 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2025-06-22 20:04:09.850536 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-06-22 20:04:09.850546 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2025-06-22 20:04:09.850557 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2025-06-22 20:04:09.850568 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2025-06-22 20:04:09.850581 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2025-06-22 20:04:09.850595 | orchestrator | 2025-06-22 20:04:09.850608 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 20:04:09.850621 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 20:04:09.850635 | orchestrator | 2025-06-22 20:04:09.850648 | orchestrator | 2025-06-22 20:04:09.850661 | orchestrator | TASKS RECAP ******************************************************************** 
2025-06-22 20:04:09.850673 | orchestrator | Sunday 22 June 2025 20:04:08 +0000 (0:00:05.845) 0:00:23.139 *********** 2025-06-22 20:04:09.850686 | orchestrator | =============================================================================== 2025-06-22 20:04:09.850698 | orchestrator | Write ceph keys to the share directory --------------------------------- 12.20s 2025-06-22 20:04:09.850711 | orchestrator | Write ceph keys to the configuration directory -------------------------- 5.85s 2025-06-22 20:04:09.850724 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 3.97s 2025-06-22 20:04:09.850736 | orchestrator | Create share directory -------------------------------------------------- 0.96s 2025-06-22 20:04:09.850748 | orchestrator | 2025-06-22 20:04:09.850761 | orchestrator | 2025-06-22 20:04:09.850773 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-22 20:04:09.850785 | orchestrator | 2025-06-22 20:04:09.850808 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-22 20:04:09.850821 | orchestrator | Sunday 22 June 2025 20:02:27 +0000 (0:00:00.229) 0:00:00.229 *********** 2025-06-22 20:04:09.850833 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:04:09.850846 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:04:09.850860 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:04:09.850872 | orchestrator | 2025-06-22 20:04:09.850894 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-22 20:04:09.850914 | orchestrator | Sunday 22 June 2025 20:02:27 +0000 (0:00:00.244) 0:00:00.474 *********** 2025-06-22 20:04:09.850933 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2025-06-22 20:04:09.850952 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2025-06-22 20:04:09.850963 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2025-06-22 20:04:09.850974 | orchestrator | 2025-06-22 20:04:09.850984 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2025-06-22 20:04:09.850995 | orchestrator | 2025-06-22 20:04:09.851006 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-06-22 20:04:09.851017 | orchestrator | Sunday 22 June 2025 20:02:28 +0000 (0:00:00.348) 0:00:00.823 *********** 2025-06-22 20:04:09.851028 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:04:09.851039 | orchestrator | 2025-06-22 20:04:09.851050 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2025-06-22 20:04:09.851060 | orchestrator | Sunday 22 June 2025 20:02:28 +0000 (0:00:00.455) 0:00:01.278 *********** 2025-06-22 20:04:09.851077 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 
'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-22 20:04:09.851114 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-22 20:04:09.851136 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-22 20:04:09.851183 | orchestrator | 2025-06-22 20:04:09.851202 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2025-06-22 20:04:09.851219 | orchestrator | Sunday 22 June 2025 20:02:29 +0000 (0:00:01.282) 0:00:02.561 *********** 2025-06-22 20:04:09.851231 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:04:09.851242 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:04:09.851253 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:04:09.851264 | orchestrator | 2025-06-22 20:04:09.851275 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-06-22 20:04:09.851294 | orchestrator | Sunday 22 June 2025 20:02:30 +0000 (0:00:00.375) 0:00:02.936 *********** 2025-06-22 20:04:09.851305 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2025-06-22 20:04:09.851323 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2025-06-22 20:04:09.851335 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2025-06-22 20:04:09.851346 
| orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2025-06-22 20:04:09.851363 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2025-06-22 20:04:09.851374 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2025-06-22 20:04:09.851385 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2025-06-22 20:04:09.851396 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2025-06-22 20:04:09.851407 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2025-06-22 20:04:09.851418 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2025-06-22 20:04:09.851428 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2025-06-22 20:04:09.851439 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2025-06-22 20:04:09.851450 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2025-06-22 20:04:09.851461 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2025-06-22 20:04:09.851472 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2025-06-22 20:04:09.851483 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2025-06-22 20:04:09.851493 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2025-06-22 20:04:09.851504 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2025-06-22 20:04:09.851515 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2025-06-22 20:04:09.851526 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2025-06-22 20:04:09.851537 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2025-06-22 20:04:09.851547 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2025-06-22 20:04:09.851558 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2025-06-22 20:04:09.851569 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2025-06-22 20:04:09.851581 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2025-06-22 20:04:09.851593 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2025-06-22 20:04:09.851604 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2025-06-22 20:04:09.851615 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2025-06-22 20:04:09.851626 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2025-06-22 20:04:09.851637 | orchestrator | included: 
/ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2025-06-22 20:04:09.851648 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2025-06-22 20:04:09.851665 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2025-06-22 20:04:09.851676 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2025-06-22 20:04:09.851686 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2025-06-22 20:04:09.851697 | orchestrator | 2025-06-22 20:04:09.851708 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-22 20:04:09.851719 | orchestrator | Sunday 22 June 2025 20:02:31 +0000 (0:00:00.705) 0:00:03.642 *********** 2025-06-22 20:04:09.851730 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:04:09.851742 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:04:09.851752 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:04:09.851763 | orchestrator | 2025-06-22 20:04:09.851774 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-22 20:04:09.851785 | orchestrator | Sunday 22 June 2025 20:02:31 +0000 (0:00:00.295) 0:00:03.937 *********** 2025-06-22 20:04:09.851796 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:04:09.851807 | orchestrator | 2025-06-22 20:04:09.851823 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-22 20:04:09.851835 | orchestrator | Sunday 22 June 2025 20:02:31 +0000 (0:00:00.125) 0:00:04.062 *********** 2025-06-22 20:04:09.851846 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:04:09.851857 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:04:09.851867 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:04:09.851878 | orchestrator | 2025-06-22 20:04:09.851893 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-22 20:04:09.851904 | orchestrator | Sunday 22 June 2025 20:02:31 +0000 (0:00:00.353) 0:00:04.416 *********** 2025-06-22 20:04:09.851915 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:04:09.851926 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:04:09.851937 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:04:09.851948 | orchestrator | 2025-06-22 20:04:09.851959 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-22 20:04:09.851971 | orchestrator | Sunday 22 June 2025 20:02:32 +0000 (0:00:00.262) 0:00:04.678 *********** 2025-06-22 20:04:09.851982 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:04:09.851992 | orchestrator | 2025-06-22 20:04:09.852003 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-22 20:04:09.852014 | orchestrator | Sunday 22 June 2025 20:02:32 +0000 (0:00:00.120) 0:00:04.798 *********** 2025-06-22 20:04:09.852025 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:04:09.852036 | orchestrator | skipping: [testbed-node-1] 2025-06-22 
20:04:09.852047 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:04:09.852057 | orchestrator | 2025-06-22 20:04:09.852068 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-22 20:04:09.852080 | orchestrator | Sunday 22 June 2025 20:02:32 +0000 (0:00:00.263) 0:00:05.062 *********** 2025-06-22 20:04:09.852090 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:04:09.852101 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:04:09.852112 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:04:09.852123 | orchestrator | 2025-06-22 20:04:09.852134 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-22 20:04:09.852187 | orchestrator | Sunday 22 June 2025 20:02:32 +0000 (0:00:00.258) 0:00:05.321 *********** 2025-06-22 20:04:09.852199 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:04:09.852209 | orchestrator | 2025-06-22 20:04:09.852220 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-22 20:04:09.852231 | orchestrator | Sunday 22 June 2025 20:02:32 +0000 (0:00:00.253) 0:00:05.575 *********** 2025-06-22 20:04:09.852249 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:04:09.852260 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:04:09.852287 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:04:09.852298 | orchestrator | 2025-06-22 20:04:09.852309 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-22 20:04:09.852320 | orchestrator | Sunday 22 June 2025 20:02:33 +0000 (0:00:00.279) 0:00:05.854 *********** 2025-06-22 20:04:09.852331 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:04:09.852342 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:04:09.852353 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:04:09.852364 | orchestrator | 2025-06-22 20:04:09.852375 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-22 20:04:09.852386 | orchestrator | Sunday 22 June 2025 20:02:33 +0000 (0:00:00.271) 0:00:06.126 *********** 2025-06-22 20:04:09.852397 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:04:09.852408 | orchestrator | 2025-06-22 20:04:09.852419 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-22 20:04:09.852430 | orchestrator | Sunday 22 June 2025 20:02:33 +0000 (0:00:00.145) 0:00:06.271 *********** 2025-06-22 20:04:09.852441 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:04:09.852451 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:04:09.852462 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:04:09.852473 | orchestrator | 2025-06-22 20:04:09.852484 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-22 20:04:09.852495 | orchestrator | Sunday 22 June 2025 20:02:33 +0000 (0:00:00.258) 0:00:06.529 *********** 2025-06-22 20:04:09.852506 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:04:09.852517 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:04:09.852527 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:04:09.852538 | orchestrator | 2025-06-22 20:04:09.852550 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-22 20:04:09.852560 | orchestrator | Sunday 22 June 2025 20:02:34 +0000 (0:00:00.483) 0:00:07.013 *********** 2025-06-22 20:04:09.852571 | orchestrator | skipping: 
[testbed-node-0] 2025-06-22 20:04:09.852582 | orchestrator | 2025-06-22 20:04:09.852593 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-22 20:04:09.852604 | orchestrator | Sunday 22 June 2025 20:02:34 +0000 (0:00:00.114) 0:00:07.127 *********** 2025-06-22 20:04:09.852615 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:04:09.852626 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:04:09.852636 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:04:09.852647 | orchestrator | 2025-06-22 20:04:09.852658 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-22 20:04:09.852669 | orchestrator | Sunday 22 June 2025 20:02:34 +0000 (0:00:00.266) 0:00:07.393 *********** 2025-06-22 20:04:09.852680 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:04:09.852691 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:04:09.852701 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:04:09.852714 | orchestrator | 2025-06-22 20:04:09.852725 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-22 20:04:09.852736 | orchestrator | Sunday 22 June 2025 20:02:35 +0000 (0:00:00.296) 0:00:07.690 *********** 2025-06-22 20:04:09.852746 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:04:09.852757 | orchestrator | 2025-06-22 20:04:09.852768 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-22 20:04:09.852779 | orchestrator | Sunday 22 June 2025 20:02:35 +0000 (0:00:00.115) 0:00:07.805 *********** 2025-06-22 20:04:09.852790 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:04:09.852801 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:04:09.852811 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:04:09.852822 | orchestrator | 2025-06-22 20:04:09.852833 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-22 20:04:09.852844 | orchestrator | Sunday 22 June 2025 20:02:35 +0000 (0:00:00.361) 0:00:08.167 *********** 2025-06-22 20:04:09.852855 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:04:09.852881 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:04:09.852892 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:04:09.852903 | orchestrator | 2025-06-22 20:04:09.852914 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-22 20:04:09.852925 | orchestrator | Sunday 22 June 2025 20:02:35 +0000 (0:00:00.277) 0:00:08.445 *********** 2025-06-22 20:04:09.852936 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:04:09.852947 | orchestrator | 2025-06-22 20:04:09.852963 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-22 20:04:09.852975 | orchestrator | Sunday 22 June 2025 20:02:35 +0000 (0:00:00.118) 0:00:08.563 *********** 2025-06-22 20:04:09.852994 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:04:09.853014 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:04:09.853032 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:04:09.853044 | orchestrator | 2025-06-22 20:04:09.853055 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-22 20:04:09.853066 | orchestrator | Sunday 22 June 2025 20:02:36 +0000 (0:00:00.246) 0:00:08.810 *********** 2025-06-22 20:04:09.853076 | orchestrator | ok: [testbed-node-0] 2025-06-22 
20:04:09.853087 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:04:09.853098 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:04:09.853109 | orchestrator | 2025-06-22 20:04:09.853120 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-22 20:04:09.853131 | orchestrator | Sunday 22 June 2025 20:02:36 +0000 (0:00:00.284) 0:00:09.095 *********** 2025-06-22 20:04:09.853188 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:04:09.853201 | orchestrator | 2025-06-22 20:04:09.853212 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-22 20:04:09.853223 | orchestrator | Sunday 22 June 2025 20:02:36 +0000 (0:00:00.108) 0:00:09.204 *********** 2025-06-22 20:04:09.853234 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:04:09.853245 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:04:09.853256 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:04:09.853266 | orchestrator | 2025-06-22 20:04:09.853277 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-22 20:04:09.853288 | orchestrator | Sunday 22 June 2025 20:02:36 +0000 (0:00:00.368) 0:00:09.572 *********** 2025-06-22 20:04:09.853299 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:04:09.853310 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:04:09.853321 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:04:09.853332 | orchestrator | 2025-06-22 20:04:09.853342 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-22 20:04:09.853353 | orchestrator | Sunday 22 June 2025 20:02:37 +0000 (0:00:00.306) 0:00:09.878 *********** 2025-06-22 20:04:09.853364 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:04:09.853375 | orchestrator | 2025-06-22 20:04:09.853386 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-22 20:04:09.853397 | orchestrator | Sunday 22 June 2025 20:02:37 +0000 (0:00:00.112) 0:00:09.990 *********** 2025-06-22 20:04:09.853408 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:04:09.853419 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:04:09.853430 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:04:09.853440 | orchestrator | 2025-06-22 20:04:09.853452 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-06-22 20:04:09.853462 | orchestrator | Sunday 22 June 2025 20:02:37 +0000 (0:00:00.244) 0:00:10.235 *********** 2025-06-22 20:04:09.853473 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:04:09.853484 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:04:09.853495 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:04:09.853505 | orchestrator | 2025-06-22 20:04:09.853516 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-06-22 20:04:09.853527 | orchestrator | Sunday 22 June 2025 20:02:38 +0000 (0:00:00.388) 0:00:10.623 *********** 2025-06-22 20:04:09.853538 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:04:09.853549 | orchestrator | 2025-06-22 20:04:09.853568 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-06-22 20:04:09.853579 | orchestrator | Sunday 22 June 2025 20:02:38 +0000 (0:00:00.114) 0:00:10.737 *********** 2025-06-22 20:04:09.853590 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:04:09.853601 | orchestrator | 
skipping: [testbed-node-1] 2025-06-22 20:04:09.853612 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:04:09.853623 | orchestrator | 2025-06-22 20:04:09.853633 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2025-06-22 20:04:09.853645 | orchestrator | Sunday 22 June 2025 20:02:38 +0000 (0:00:00.262) 0:00:11.000 *********** 2025-06-22 20:04:09.853656 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:04:09.853666 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:04:09.853677 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:04:09.853688 | orchestrator | 2025-06-22 20:04:09.853699 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2025-06-22 20:04:09.853710 | orchestrator | Sunday 22 June 2025 20:02:39 +0000 (0:00:01.431) 0:00:12.431 *********** 2025-06-22 20:04:09.853720 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-06-22 20:04:09.853731 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-06-22 20:04:09.853742 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-06-22 20:04:09.853753 | orchestrator | 2025-06-22 20:04:09.853764 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2025-06-22 20:04:09.853775 | orchestrator | Sunday 22 June 2025 20:02:41 +0000 (0:00:01.602) 0:00:14.033 *********** 2025-06-22 20:04:09.853786 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-06-22 20:04:09.853797 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-06-22 20:04:09.853808 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-06-22 20:04:09.853819 | orchestrator | 2025-06-22 20:04:09.853830 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2025-06-22 20:04:09.853849 | orchestrator | Sunday 22 June 2025 20:02:43 +0000 (0:00:02.089) 0:00:16.123 *********** 2025-06-22 20:04:09.853860 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-06-22 20:04:09.853871 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-06-22 20:04:09.853892 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-06-22 20:04:09.853903 | orchestrator | 2025-06-22 20:04:09.853914 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2025-06-22 20:04:09.853925 | orchestrator | Sunday 22 June 2025 20:02:45 +0000 (0:00:01.766) 0:00:17.889 *********** 2025-06-22 20:04:09.853936 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:04:09.853947 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:04:09.853958 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:04:09.853969 | orchestrator | 2025-06-22 20:04:09.853980 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2025-06-22 20:04:09.853991 | orchestrator | Sunday 22 June 2025 20:02:45 +0000 (0:00:00.304) 0:00:18.194 *********** 2025-06-22 20:04:09.854002 | orchestrator | skipping: [testbed-node-0] 2025-06-22 
20:04:09.854012 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:04:09.854068 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:04:09.854079 | orchestrator | 2025-06-22 20:04:09.854090 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-06-22 20:04:09.854101 | orchestrator | Sunday 22 June 2025 20:02:45 +0000 (0:00:00.282) 0:00:18.476 *********** 2025-06-22 20:04:09.854112 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:04:09.854130 | orchestrator | 2025-06-22 20:04:09.854163 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2025-06-22 20:04:09.854175 | orchestrator | Sunday 22 June 2025 20:02:46 +0000 (0:00:00.791) 0:00:19.267 *********** 2025-06-22 20:04:09.854188 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-22 20:04:09.854219 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 
'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-22 20:04:09.854239 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 
'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-22 20:04:09.854252 | orchestrator | 2025-06-22 20:04:09.854263 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2025-06-22 20:04:09.854273 | orchestrator | Sunday 22 June 2025 20:02:48 +0000 (0:00:01.671) 0:00:20.939 *********** 2025-06-22 20:04:09.854299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-22 20:04:09.854318 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:04:09.854337 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 
'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-22 20:04:09.854349 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:04:09.854366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': 
{'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-22 20:04:09.854384 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:04:09.854395 | orchestrator | 2025-06-22 20:04:09.854407 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2025-06-22 20:04:09.854418 | orchestrator | Sunday 22 June 2025 20:02:49 +0000 (0:00:00.951) 0:00:21.890 *********** 2025-06-22 20:04:09.854443 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-22 20:04:09.854467 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:04:09.854479 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 
'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-22 20:04:09.854492 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:04:09.854516 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 
'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-06-22 20:04:09.854535 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:04:09.854546 | orchestrator | 2025-06-22 20:04:09.854558 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2025-06-22 20:04:09.854569 | orchestrator | Sunday 22 June 2025 20:02:50 +0000 (0:00:01.078) 0:00:22.969 *********** 2025-06-22 20:04:09.854580 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-22 20:04:09.854606 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 
'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-06-22 20:04:09.854625 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250530', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-06-22 20:04:09.854638 | orchestrator |
2025-06-22 20:04:09.854649 | orchestrator | TASK [horizon : include_tasks] *************************************************
2025-06-22 20:04:09.854660 | orchestrator | Sunday 22 June 2025 20:02:51 +0000 (0:00:01.145) 0:00:24.114 ***********
2025-06-22 20:04:09.854671 | orchestrator | skipping: [testbed-node-0]
2025-06-22 20:04:09.854682 | orchestrator | skipping: [testbed-node-1]
2025-06-22 20:04:09.854692 | orchestrator | skipping: [testbed-node-2]
2025-06-22 20:04:09.854703 | orchestrator |
2025-06-22 20:04:09.854714 | orchestrator | TASK [horizon : include_tasks] *************************************************
2025-06-22 20:04:09.854725 | orchestrator | Sunday 22 June 2025 20:02:51 +0000 (0:00:00.335) 0:00:24.449 ***********
2025-06-22 20:04:09.854742 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-22 20:04:09.854759 | orchestrator |
2025-06-22 20:04:09.854770 | orchestrator | TASK [horizon : Creating Horizon database] *************************************
2025-06-22 20:04:09.854781 | orchestrator | Sunday 22 June 2025 20:02:52 +0000 (0:00:00.685) 0:00:25.135 ***********
2025-06-22 20:04:09.854791 | orchestrator | changed: [testbed-node-0]
2025-06-22 20:04:09.854802 | orchestrator |
2025-06-22 20:04:09.854821 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ********
2025-06-22 20:04:09.854832 | orchestrator | Sunday 22 June 2025 20:02:54 +0000 (0:00:02.328) 0:00:27.463 ***********
2025-06-22 20:04:09.854843 | orchestrator | changed: [testbed-node-0]
2025-06-22 20:04:09.854854 | orchestrator |
2025-06-22 20:04:09.854865 | orchestrator | TASK [horizon : Running Horizon bootstrap container] ***************************
2025-06-22 20:04:09.854876 | orchestrator | Sunday 22 June 2025 20:02:57 +0000 (0:00:02.169) 0:00:29.633 ***********
2025-06-22 20:04:09.854887 | orchestrator | changed: [testbed-node-0]
2025-06-22 20:04:09.854898 | orchestrator |
2025-06-22 20:04:09.854909 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2025-06-22 20:04:09.854920 | orchestrator | Sunday 22 June 2025 20:03:12 +0000 (0:00:15.704) 0:00:45.337 ***********
2025-06-22 20:04:09.854931 | orchestrator |
2025-06-22 20:04:09.854942 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2025-06-22 20:04:09.854952 | orchestrator | Sunday 22 June 2025 20:03:12 +0000 (0:00:00.057) 0:00:45.395 ***********
2025-06-22 20:04:09.854963 | orchestrator |
2025-06-22 20:04:09.854974 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2025-06-22 20:04:09.854985 | orchestrator | Sunday 22 June 2025 20:03:12 +0000 (0:00:00.057) 0:00:45.452 ***********
2025-06-22 20:04:09.854996 | orchestrator |
2025-06-22 20:04:09.855007 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] **************************
2025-06-22 20:04:09.855018 | orchestrator | Sunday 22 June 2025 20:03:12 +0000 (0:00:00.058) 0:00:45.511 ***********
2025-06-22 20:04:09.855029 | orchestrator | changed: [testbed-node-0]
2025-06-22 20:04:09.855040 | orchestrator | changed: [testbed-node-1]
2025-06-22 20:04:09.855051 | orchestrator | changed: [testbed-node-2]
2025-06-22 20:04:09.855061 | orchestrator |
2025-06-22 20:04:09.855072 | orchestrator | PLAY RECAP *********************************************************************
2025-06-22 20:04:09.855083 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2025-06-22 20:04:09.855095 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2025-06-22 20:04:09.855106 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2025-06-22 20:04:09.855117 | orchestrator |
2025-06-22 20:04:09.855128 | orchestrator |
2025-06-22 20:04:09.855160 | orchestrator | TASKS RECAP ********************************************************************
2025-06-22 20:04:09.855181 | orchestrator | Sunday 22 June 2025 20:04:08 +0000 (0:00:55.247) 0:01:40.759 ***********
2025-06-22 20:04:09.855199 | orchestrator | ===============================================================================
2025-06-22 20:04:09.855217 | orchestrator | horizon : Restart horizon container ------------------------------------ 55.25s
2025-06-22 20:04:09.855228 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 15.70s
2025-06-22 20:04:09.855239 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.33s
2025-06-22 20:04:09.855250 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.17s
2025-06-22 20:04:09.855261 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.09s
2025-06-22 20:04:09.855271 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.77s
2025-06-22 20:04:09.855282 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.67s
2025-06-22 20:04:09.855292 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.60s
2025-06-22 20:04:09.855315 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.43s
2025-06-22 20:04:09.855326 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.28s
2025-06-22 20:04:09.855336 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.15s
2025-06-22 20:04:09.855347 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.08s
2025-06-22 20:04:09.855358 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.95s
2025-06-22 20:04:09.855369 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.79s
2025-06-22 20:04:09.855380 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.71s
2025-06-22 20:04:09.855390 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.69s
2025-06-22 20:04:09.855401 | orchestrator | horizon : Update policy file name --------------------------------------- 0.48s
2025-06-22 20:04:09.855412 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.46s
2025-06-22 20:04:09.855422 | orchestrator | horizon : Update policy file name --------------------------------------- 0.39s
2025-06-22 20:04:09.855433 | orchestrator | horizon : Set empty custom policy --------------------------------------- 0.38s
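The PLAY RECAP and TASKS RECAP blocks above summarize the horizon play per host (ok/changed/unreachable/failed/skipped/rescued/ignored) and per task duration. As a minimal illustration only, and not part of the job output, the following Python sketch shows one way such recap lines could be checked once the console prefix (timestamp and node name) has been stripped; the regular expression and function names are assumptions made for this example, not part of OSISM or Zuul.

    import re

    # Illustrative pattern for an Ansible recap line such as
    # "testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0"
    RECAP_LINE = re.compile(r"^(?P<host>\S+)\s*:\s*(?P<counters>(?:[a-z]+=\d+\s*)+)$")

    def parse_recap(lines):
        """Return {host: {counter: value}} for every recap line found."""
        results = {}
        for line in lines:
            match = RECAP_LINE.match(line.strip())
            if not match:
                continue
            counters = {}
            for pair in match.group("counters").split():
                key, value = pair.split("=")
                counters[key] = int(value)
            results[match.group("host")] = counters
        return results

    def run_is_clean(recap):
        """A play is treated as clean when no host reports failed or unreachable tasks."""
        return all(c.get("failed", 0) == 0 and c.get("unreachable", 0) == 0 for c in recap.values())

    sample = [
        "testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0",
        "testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0",
        "testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0",
    ]
    print(parse_recap(sample))
    print("clean run:", run_is_clean(parse_recap(sample)))

In this build every host reports failed=0 and unreachable=0, which is why the deployment continues with the task polling and the keystone play below.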
2025-06-22 20:04:09.855444 | orchestrator | 2025-06-22 20:04:09 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:04:12.893446 | orchestrator | 2025-06-22 20:04:12 | INFO  | Task 7d021911-b951-41a1-a942-7697e622caf3 is in state STARTED 2025-06-22 20:04:12.894914 | orchestrator | 2025-06-22 20:04:12 | INFO  | Task 37e6b566-9d3f-4dc6-821f-05d6da068b54 is in state STARTED 2025-06-22 20:04:12.894985 | orchestrator | 2025-06-22 20:04:12 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:04:15.945687 | orchestrator | 2025-06-22 20:04:15 | INFO  | Task 7d021911-b951-41a1-a942-7697e622caf3 is in state STARTED 2025-06-22 20:04:15.947066 | orchestrator | 2025-06-22 20:04:15 | INFO  | Task 37e6b566-9d3f-4dc6-821f-05d6da068b54 is in state STARTED 2025-06-22 20:04:15.947095 | orchestrator | 2025-06-22 20:04:15 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:04:18.990211 | orchestrator | 2025-06-22 20:04:18 | INFO  | Task 7d021911-b951-41a1-a942-7697e622caf3 is in state STARTED 2025-06-22 20:04:18.991370 | orchestrator | 2025-06-22 20:04:18 | INFO  | Task 37e6b566-9d3f-4dc6-821f-05d6da068b54 is in state STARTED 2025-06-22 20:04:18.991398 | orchestrator | 2025-06-22 20:04:18 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:04:22.033854 | orchestrator | 2025-06-22 20:04:22 | INFO  | Task 7d021911-b951-41a1-a942-7697e622caf3 is in state STARTED 2025-06-22 20:04:22.033946 | orchestrator | 2025-06-22 20:04:22 | INFO  | Task 37e6b566-9d3f-4dc6-821f-05d6da068b54 is in state STARTED 2025-06-22 20:04:22.033960 | orchestrator | 2025-06-22 20:04:22 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:04:25.075591 | orchestrator | 2025-06-22 20:04:25 | INFO  | Task 7d021911-b951-41a1-a942-7697e622caf3 is in state STARTED 2025-06-22 20:04:25.077191 | orchestrator | 2025-06-22 20:04:25 | INFO  | Task 37e6b566-9d3f-4dc6-821f-05d6da068b54 is in state STARTED 2025-06-22 20:04:25.077260 | orchestrator | 2025-06-22 20:04:25 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:04:28.124973 | orchestrator | 2025-06-22 20:04:28 | INFO  | Task 7d021911-b951-41a1-a942-7697e622caf3 is in state STARTED 2025-06-22 20:04:28.127629 | orchestrator | 2025-06-22 20:04:28 | INFO  | Task 37e6b566-9d3f-4dc6-821f-05d6da068b54 is in state STARTED 2025-06-22 20:04:28.127721 | orchestrator | 2025-06-22 20:04:28 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:04:31.177894 | orchestrator | 2025-06-22 20:04:31 | INFO  | Task 7d021911-b951-41a1-a942-7697e622caf3 is in state STARTED 2025-06-22 20:04:31.181291 | orchestrator | 2025-06-22 20:04:31 | INFO  | Task 37e6b566-9d3f-4dc6-821f-05d6da068b54 is in state STARTED 2025-06-22 20:04:31.181328 | orchestrator | 2025-06-22 20:04:31 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:04:34.228253 | orchestrator | 2025-06-22 20:04:34 | INFO  | Task 7d021911-b951-41a1-a942-7697e622caf3 is in state STARTED 2025-06-22 20:04:34.231173 | orchestrator | 2025-06-22 20:04:34 | INFO  | Task 37e6b566-9d3f-4dc6-821f-05d6da068b54 is in state STARTED 2025-06-22 20:04:34.232238 | orchestrator | 2025-06-22 20:04:34 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:04:37.267575 | orchestrator | 2025-06-22 20:04:37 | INFO  | Task 7d021911-b951-41a1-a942-7697e622caf3 is in state STARTED 2025-06-22 20:04:37.268482 | orchestrator | 2025-06-22 20:04:37 | INFO  | Task 37e6b566-9d3f-4dc6-821f-05d6da068b54 is in state STARTED 2025-06-22 20:04:37.268516 | orchestrator | 2025-06-22 20:04:37 | INFO  | Wait 1 second(s) until 
the next check 2025-06-22 20:04:40.321889 | orchestrator | 2025-06-22 20:04:40 | INFO  | Task 7d021911-b951-41a1-a942-7697e622caf3 is in state STARTED 2025-06-22 20:04:40.323424 | orchestrator | 2025-06-22 20:04:40 | INFO  | Task 37e6b566-9d3f-4dc6-821f-05d6da068b54 is in state STARTED 2025-06-22 20:04:40.323463 | orchestrator | 2025-06-22 20:04:40 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:04:43.370278 | orchestrator | 2025-06-22 20:04:43 | INFO  | Task 7d021911-b951-41a1-a942-7697e622caf3 is in state STARTED 2025-06-22 20:04:43.371919 | orchestrator | 2025-06-22 20:04:43 | INFO  | Task 37e6b566-9d3f-4dc6-821f-05d6da068b54 is in state STARTED 2025-06-22 20:04:43.371951 | orchestrator | 2025-06-22 20:04:43 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:04:46.406300 | orchestrator | 2025-06-22 20:04:46 | INFO  | Task 7d021911-b951-41a1-a942-7697e622caf3 is in state STARTED 2025-06-22 20:04:46.410161 | orchestrator | 2025-06-22 20:04:46 | INFO  | Task 37e6b566-9d3f-4dc6-821f-05d6da068b54 is in state STARTED 2025-06-22 20:04:46.410253 | orchestrator | 2025-06-22 20:04:46 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:04:49.456660 | orchestrator | 2025-06-22 20:04:49 | INFO  | Task 7d021911-b951-41a1-a942-7697e622caf3 is in state STARTED 2025-06-22 20:04:49.457953 | orchestrator | 2025-06-22 20:04:49 | INFO  | Task 37e6b566-9d3f-4dc6-821f-05d6da068b54 is in state STARTED 2025-06-22 20:04:49.458003 | orchestrator | 2025-06-22 20:04:49 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:04:52.506715 | orchestrator | 2025-06-22 20:04:52 | INFO  | Task 7d021911-b951-41a1-a942-7697e622caf3 is in state STARTED 2025-06-22 20:04:52.508526 | orchestrator | 2025-06-22 20:04:52 | INFO  | Task 37e6b566-9d3f-4dc6-821f-05d6da068b54 is in state STARTED 2025-06-22 20:04:52.508568 | orchestrator | 2025-06-22 20:04:52 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:04:55.559364 | orchestrator | 2025-06-22 20:04:55 | INFO  | Task 7d021911-b951-41a1-a942-7697e622caf3 is in state STARTED 2025-06-22 20:04:55.560590 | orchestrator | 2025-06-22 20:04:55 | INFO  | Task 37e6b566-9d3f-4dc6-821f-05d6da068b54 is in state STARTED 2025-06-22 20:04:55.560822 | orchestrator | 2025-06-22 20:04:55 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:04:58.609915 | orchestrator | 2025-06-22 20:04:58 | INFO  | Task 7d021911-b951-41a1-a942-7697e622caf3 is in state STARTED 2025-06-22 20:04:58.610126 | orchestrator | 2025-06-22 20:04:58 | INFO  | Task 37e6b566-9d3f-4dc6-821f-05d6da068b54 is in state STARTED 2025-06-22 20:04:58.610148 | orchestrator | 2025-06-22 20:04:58 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:05:01.664206 | orchestrator | 2025-06-22 20:05:01 | INFO  | Task 7d021911-b951-41a1-a942-7697e622caf3 is in state STARTED 2025-06-22 20:05:01.664320 | orchestrator | 2025-06-22 20:05:01 | INFO  | Task 37e6b566-9d3f-4dc6-821f-05d6da068b54 is in state STARTED 2025-06-22 20:05:01.664335 | orchestrator | 2025-06-22 20:05:01 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:05:04.708757 | orchestrator | 2025-06-22 20:05:04 | INFO  | Task e5c0f822-ce6c-444c-a2bf-938e94a4477c is in state STARTED 2025-06-22 20:05:04.708868 | orchestrator | 2025-06-22 20:05:04 | INFO  | Task 7d021911-b951-41a1-a942-7697e622caf3 is in state STARTED 2025-06-22 20:05:04.708884 | orchestrator | 2025-06-22 20:05:04 | INFO  | Task 4a0f0d72-b0bb-4180-a478-51f31e6edf32 is in state STARTED 2025-06-22 20:05:04.711544 | orchestrator | 2025-06-22 
20:05:04.711599 | orchestrator | 2025-06-22 20:05:04 | INFO  | Task 37e6b566-9d3f-4dc6-821f-05d6da068b54 is in state SUCCESS 2025-06-22 20:05:04.713443 | orchestrator | 2025-06-22 20:05:04.713527 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-22 20:05:04.713571 | orchestrator | 2025-06-22 20:05:04.713585 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-22 20:05:04.713598 | orchestrator | Sunday 22 June 2025 20:02:27 +0000 (0:00:00.228) 0:00:00.228 *********** 2025-06-22 20:05:04.713609 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:05:04.713621 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:05:04.713633 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:05:04.713644 | orchestrator | 2025-06-22 20:05:04.713655 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-22 20:05:04.713705 | orchestrator | Sunday 22 June 2025 20:02:27 +0000 (0:00:00.260) 0:00:00.489 *********** 2025-06-22 20:05:04.713717 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-06-22 20:05:04.713728 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-06-22 20:05:04.713739 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-06-22 20:05:04.713750 | orchestrator | 2025-06-22 20:05:04.713762 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2025-06-22 20:05:04.713773 | orchestrator | 2025-06-22 20:05:04.713784 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-06-22 20:05:04.713795 | orchestrator | Sunday 22 June 2025 20:02:28 +0000 (0:00:00.368) 0:00:00.857 *********** 2025-06-22 20:05:04.713806 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:05:04.713818 | orchestrator | 2025-06-22 20:05:04.713830 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2025-06-22 20:05:04.713841 | orchestrator | Sunday 22 June 2025 20:02:28 +0000 (0:00:00.501) 0:00:01.358 *********** 2025-06-22 20:05:04.713858 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-22 20:05:04.713891 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 
'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-22 20:05:04.713973 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-22 20:05:04.714109 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-22 20:05:04.714125 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-22 20:05:04.714138 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-22 20:05:04.714158 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-22 20:05:04.714182 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-22 20:05:04.714195 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-22 20:05:04.714209 | orchestrator | 2025-06-22 20:05:04.714222 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2025-06-22 20:05:04.714245 | orchestrator | Sunday 22 June 2025 20:02:30 +0000 (0:00:01.750) 0:00:03.108 *********** 2025-06-22 20:05:04.714257 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml) 2025-06-22 20:05:04.714269 | orchestrator | 2025-06-22 20:05:04.714280 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2025-06-22 20:05:04.714291 | orchestrator | Sunday 22 June 2025 20:02:31 +0000 (0:00:00.853) 0:00:03.962 *********** 2025-06-22 20:05:04.714302 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:05:04.714314 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:05:04.714325 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:05:04.714335 | orchestrator | 2025-06-22 20:05:04.714346 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2025-06-22 20:05:04.714358 | orchestrator | Sunday 22 June 2025 20:02:31 +0000 (0:00:00.389) 0:00:04.352 *********** 2025-06-22 20:05:04.714369 | 
orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-22 20:05:04.714380 | orchestrator | 2025-06-22 20:05:04.714391 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-06-22 20:05:04.714402 | orchestrator | Sunday 22 June 2025 20:02:32 +0000 (0:00:00.612) 0:00:04.964 *********** 2025-06-22 20:05:04.714414 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:05:04.714424 | orchestrator | 2025-06-22 20:05:04.714435 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2025-06-22 20:05:04.714446 | orchestrator | Sunday 22 June 2025 20:02:32 +0000 (0:00:00.469) 0:00:05.434 *********** 2025-06-22 20:05:04.714458 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-22 20:05:04.714483 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-22 20:05:04.714503 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 
'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-22 20:05:04.714516 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-22 20:05:04.714528 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-22 20:05:04.714546 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-22 20:05:04.714563 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-22 20:05:04.714575 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-22 20:05:04.714586 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-22 20:05:04.714598 | orchestrator | 2025-06-22 20:05:04.714610 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2025-06-22 20:05:04.714621 | orchestrator | Sunday 22 June 2025 20:02:36 +0000 (0:00:03.396) 0:00:08.831 *********** 2025-06-22 20:05:04.714640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-22 20:05:04.714653 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-22 20:05:04.714670 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-22 20:05:04.714683 | orchestrator | 
skipping: [testbed-node-0] 2025-06-22 20:05:04.714700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-22 20:05:04.714713 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-22 20:05:04.714734 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-22 20:05:04.714745 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:05:04.714757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance 
roundrobin']}}}})  2025-06-22 20:05:04.714775 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-22 20:05:04.714800 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-22 20:05:04.714812 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:05:04.714823 | orchestrator | 2025-06-22 20:05:04.714834 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2025-06-22 20:05:04.714846 | orchestrator | Sunday 22 June 2025 20:02:36 +0000 (0:00:00.522) 0:00:09.354 *********** 2025-06-22 20:05:04.714857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-22 20:05:04.714876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-22 20:05:04.714888 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 
'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-22 20:05:04.714906 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:05:04.714918 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-22 20:05:04.714935 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-22 20:05:04.714947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-22 20:05:04.714958 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:05:04.714977 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-06-22 20:05:04.714999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-22 20:05:04.715011 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-06-22 20:05:04.715022 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:05:04.715033 | orchestrator | 2025-06-22 20:05:04.715044 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2025-06-22 20:05:04.715056 | orchestrator | Sunday 22 June 2025 20:02:37 +0000 (0:00:00.672) 0:00:10.026 *********** 2025-06-22 20:05:04.715072 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-22 20:05:04.715101 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-22 20:05:04.715122 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-22 20:05:04.715141 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-22 20:05:04.715153 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-22 20:05:04.715169 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-22 20:05:04.715181 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-22 20:05:04.715192 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-22 20:05:04.715211 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-22 20:05:04.715229 | orchestrator | 2025-06-22 20:05:04.715241 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2025-06-22 20:05:04.715252 | orchestrator | Sunday 22 June 2025 20:02:40 +0000 (0:00:03.110) 0:00:13.137 *********** 2025-06-22 20:05:04.715264 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-22 20:05:04.715276 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-22 20:05:04.715288 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-22 20:05:04.715300 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-22 20:05:04.715325 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 
'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-22 20:05:04.715376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-22 20:05:04.715389 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-22 20:05:04.715405 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-22 20:05:04.715417 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-22 20:05:04.715428 | orchestrator | 2025-06-22 20:05:04.715439 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2025-06-22 20:05:04.715451 | orchestrator | Sunday 22 June 2025 20:02:45 +0000 (0:00:04.953) 0:00:18.090 *********** 2025-06-22 20:05:04.715469 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:05:04.715480 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:05:04.715492 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:05:04.715502 | orchestrator | 2025-06-22 20:05:04.715513 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2025-06-22 20:05:04.715524 | orchestrator | Sunday 22 June 2025 20:02:46 +0000 (0:00:01.305) 0:00:19.396 *********** 2025-06-22 20:05:04.715535 | orchestrator | skipping: 
[testbed-node-0] 2025-06-22 20:05:04.715546 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:05:04.715557 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:05:04.715568 | orchestrator | 2025-06-22 20:05:04.715586 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2025-06-22 20:05:04.715597 | orchestrator | Sunday 22 June 2025 20:02:47 +0000 (0:00:00.641) 0:00:20.037 *********** 2025-06-22 20:05:04.715608 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:05:04.715619 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:05:04.715630 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:05:04.715641 | orchestrator | 2025-06-22 20:05:04.715652 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2025-06-22 20:05:04.715663 | orchestrator | Sunday 22 June 2025 20:02:47 +0000 (0:00:00.481) 0:00:20.519 *********** 2025-06-22 20:05:04.715673 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:05:04.715684 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:05:04.715695 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:05:04.715706 | orchestrator | 2025-06-22 20:05:04.715717 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2025-06-22 20:05:04.715728 | orchestrator | Sunday 22 June 2025 20:02:48 +0000 (0:00:00.303) 0:00:20.823 *********** 2025-06-22 20:05:04.715740 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-22 20:05:04.715752 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-22 20:05:04.715770 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-22 20:05:04.715788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-22 20:05:04.715808 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-22 20:05:04.715820 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-06-22 20:05:04.715832 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-22 20:05:04.715848 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-22 20:05:04.715869 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-22 20:05:04.715881 | orchestrator | 2025-06-22 20:05:04.715900 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-06-22 20:05:04.715918 | orchestrator | Sunday 22 June 2025 20:02:50 +0000 (0:00:02.337) 0:00:23.160 *********** 2025-06-22 20:05:04.715936 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:05:04.715953 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:05:04.715970 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:05:04.715988 | orchestrator | 2025-06-22 20:05:04.716007 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2025-06-22 20:05:04.716025 | orchestrator | Sunday 22 June 2025 20:02:50 +0000 (0:00:00.296) 0:00:23.457 *********** 2025-06-22 20:05:04.716043 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-06-22 20:05:04.716062 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-06-22 20:05:04.716118 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-06-22 20:05:04.716139 | orchestrator | 2025-06-22 20:05:04.716158 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2025-06-22 20:05:04.716177 | orchestrator | Sunday 22 June 2025 20:02:52 +0000 (0:00:01.919) 0:00:25.376 *********** 2025-06-22 20:05:04.716193 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-22 20:05:04.716204 | orchestrator | 2025-06-22 20:05:04.716215 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2025-06-22 20:05:04.716226 | orchestrator | Sunday 22 June 2025 20:02:53 +0000 (0:00:00.891) 0:00:26.267 *********** 2025-06-22 20:05:04.716237 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:05:04.716247 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:05:04.716258 | orchestrator | skipping: [testbed-node-2] 
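Each of the keystone, keystone-ssh and keystone-fernet items dumped above carries the same healthcheck block (interval 30, retries 3, start period 5, timeout 30) with a test such as healthcheck_curl http://192.168.16.10:5000. As an illustration only — assuming healthcheck_curl does nothing beyond confirming that Keystone answers an HTTP request within the timeout — the check amounts to roughly the following; the function name and exit-code wrapper are illustrative and not part of the kolla images:

    # Rough equivalent of the "healthcheck_curl http://192.168.16.10:5000" test
    # shown in the container definitions above: the container counts as healthy
    # as long as Keystone answers the request at all within the timeout.
    import sys
    import urllib.error
    import urllib.request

    def keystone_alive(url="http://192.168.16.10:5000", timeout=30.0):
        try:
            urllib.request.urlopen(url, timeout=timeout)
            return True
        except urllib.error.HTTPError:
            # Keystone's version discovery may answer with 300 Multiple Choices;
            # any HTTP-level answer still means the process is up.
            return True
        except (urllib.error.URLError, OSError):
            return False

    if __name__ == "__main__":
        sys.exit(0 if keystone_alive() else 1)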
2025-06-22 20:05:04.716269 | orchestrator | 2025-06-22 20:05:04.716280 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2025-06-22 20:05:04.716291 | orchestrator | Sunday 22 June 2025 20:02:54 +0000 (0:00:00.544) 0:00:26.812 *********** 2025-06-22 20:05:04.716302 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-06-22 20:05:04.716313 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-06-22 20:05:04.716324 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-22 20:05:04.716335 | orchestrator | 2025-06-22 20:05:04.716346 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2025-06-22 20:05:04.716357 | orchestrator | Sunday 22 June 2025 20:02:55 +0000 (0:00:01.071) 0:00:27.883 *********** 2025-06-22 20:05:04.716368 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:05:04.716379 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:05:04.716390 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:05:04.716400 | orchestrator | 2025-06-22 20:05:04.716411 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2025-06-22 20:05:04.716422 | orchestrator | Sunday 22 June 2025 20:02:55 +0000 (0:00:00.312) 0:00:28.196 *********** 2025-06-22 20:05:04.716433 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-06-22 20:05:04.716444 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-06-22 20:05:04.716455 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-06-22 20:05:04.716477 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-06-22 20:05:04.716488 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-06-22 20:05:04.716499 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-06-22 20:05:04.716511 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-06-22 20:05:04.716522 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-06-22 20:05:04.716533 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-06-22 20:05:04.716544 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-06-22 20:05:04.716554 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-06-22 20:05:04.716572 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-06-22 20:05:04.716583 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-06-22 20:05:04.716594 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-06-22 20:05:04.716605 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-06-22 20:05:04.716616 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-06-22 20:05:04.716627 | orchestrator | changed: [testbed-node-1] => 
(item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-06-22 20:05:04.716638 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-06-22 20:05:04.716649 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-06-22 20:05:04.716660 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-06-22 20:05:04.716671 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-06-22 20:05:04.716682 | orchestrator | 2025-06-22 20:05:04.716693 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2025-06-22 20:05:04.716703 | orchestrator | Sunday 22 June 2025 20:03:04 +0000 (0:00:08.859) 0:00:37.056 *********** 2025-06-22 20:05:04.716714 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-06-22 20:05:04.716725 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-06-22 20:05:04.716736 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-06-22 20:05:04.716747 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-06-22 20:05:04.716758 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-06-22 20:05:04.716776 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-06-22 20:05:04.716787 | orchestrator | 2025-06-22 20:05:04.716802 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2025-06-22 20:05:04.716820 | orchestrator | Sunday 22 June 2025 20:03:07 +0000 (0:00:02.563) 0:00:39.619 *********** 2025-06-22 20:05:04.716840 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-22 20:05:04.716871 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-22 20:05:04.716899 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-06-22 20:05:04.716922 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-22 20:05:04.716953 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-22 20:05:04.716972 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-06-22 20:05:04.716998 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-22 20:05:04.717010 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-22 20:05:04.717027 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-06-22 20:05:04.717038 | orchestrator | 2025-06-22 20:05:04.717050 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-06-22 20:05:04.717061 | orchestrator | Sunday 22 June 2025 20:03:09 +0000 (0:00:02.237) 0:00:41.856 *********** 2025-06-22 20:05:04.717072 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:05:04.717149 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:05:04.717161 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:05:04.717172 | orchestrator | 2025-06-22 20:05:04.717183 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2025-06-22 20:05:04.717194 | orchestrator | Sunday 22 June 2025 20:03:09 +0000 (0:00:00.279) 0:00:42.135 *********** 2025-06-22 20:05:04.717205 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:05:04.717216 | orchestrator | 2025-06-22 20:05:04.717227 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2025-06-22 20:05:04.717238 | orchestrator | Sunday 22 June 2025 20:03:11 +0000 (0:00:02.201) 0:00:44.337 *********** 2025-06-22 20:05:04.717249 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:05:04.717260 | orchestrator | 2025-06-22 20:05:04.717271 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2025-06-22 20:05:04.717282 | orchestrator | Sunday 22 June 2025 20:03:14 +0000 (0:00:02.393) 0:00:46.730 *********** 2025-06-22 20:05:04.717293 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:05:04.717304 | orchestrator | ok: [testbed-node-0] 
2025-06-22 20:05:04.717315 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:05:04.717334 | orchestrator | 2025-06-22 20:05:04.717345 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2025-06-22 20:05:04.717356 | orchestrator | Sunday 22 June 2025 20:03:14 +0000 (0:00:00.841) 0:00:47.572 *********** 2025-06-22 20:05:04.717367 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:05:04.717385 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:05:04.717397 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:05:04.717408 | orchestrator | 2025-06-22 20:05:04.717419 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2025-06-22 20:05:04.717430 | orchestrator | Sunday 22 June 2025 20:03:15 +0000 (0:00:00.280) 0:00:47.852 *********** 2025-06-22 20:05:04.717440 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:05:04.717451 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:05:04.717462 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:05:04.717473 | orchestrator | 2025-06-22 20:05:04.717484 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2025-06-22 20:05:04.717495 | orchestrator | Sunday 22 June 2025 20:03:15 +0000 (0:00:00.292) 0:00:48.144 *********** 2025-06-22 20:05:04.717506 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:05:04.717517 | orchestrator | 2025-06-22 20:05:04.717528 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2025-06-22 20:05:04.717539 | orchestrator | Sunday 22 June 2025 20:03:29 +0000 (0:00:13.739) 0:01:01.884 *********** 2025-06-22 20:05:04.717550 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:05:04.717561 | orchestrator | 2025-06-22 20:05:04.717572 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-06-22 20:05:04.717583 | orchestrator | Sunday 22 June 2025 20:03:39 +0000 (0:00:10.351) 0:01:12.235 *********** 2025-06-22 20:05:04.717594 | orchestrator | 2025-06-22 20:05:04.717604 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-06-22 20:05:04.717614 | orchestrator | Sunday 22 June 2025 20:03:39 +0000 (0:00:00.249) 0:01:12.485 *********** 2025-06-22 20:05:04.717623 | orchestrator | 2025-06-22 20:05:04.717633 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-06-22 20:05:04.717643 | orchestrator | Sunday 22 June 2025 20:03:39 +0000 (0:00:00.063) 0:01:12.549 *********** 2025-06-22 20:05:04.717652 | orchestrator | 2025-06-22 20:05:04.717662 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2025-06-22 20:05:04.717672 | orchestrator | Sunday 22 June 2025 20:03:40 +0000 (0:00:00.058) 0:01:12.607 *********** 2025-06-22 20:05:04.717682 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:05:04.717691 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:05:04.717701 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:05:04.717711 | orchestrator | 2025-06-22 20:05:04.717720 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2025-06-22 20:05:04.717730 | orchestrator | Sunday 22 June 2025 20:03:59 +0000 (0:00:19.050) 0:01:31.658 *********** 2025-06-22 20:05:04.717740 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:05:04.717750 | orchestrator | changed: [testbed-node-1] 2025-06-22 
20:05:04.717759 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:05:04.717769 | orchestrator | 2025-06-22 20:05:04.717779 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2025-06-22 20:05:04.717789 | orchestrator | Sunday 22 June 2025 20:04:06 +0000 (0:00:07.695) 0:01:39.354 *********** 2025-06-22 20:05:04.717799 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:05:04.717809 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:05:04.717818 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:05:04.717828 | orchestrator | 2025-06-22 20:05:04.717837 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-06-22 20:05:04.717847 | orchestrator | Sunday 22 June 2025 20:04:14 +0000 (0:00:07.853) 0:01:47.208 *********** 2025-06-22 20:05:04.717857 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:05:04.717867 | orchestrator | 2025-06-22 20:05:04.717877 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2025-06-22 20:05:04.717898 | orchestrator | Sunday 22 June 2025 20:04:15 +0000 (0:00:00.787) 0:01:47.995 *********** 2025-06-22 20:05:04.717908 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:05:04.717918 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:05:04.717927 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:05:04.717937 | orchestrator | 2025-06-22 20:05:04.717947 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2025-06-22 20:05:04.717956 | orchestrator | Sunday 22 June 2025 20:04:16 +0000 (0:00:00.712) 0:01:48.707 *********** 2025-06-22 20:05:04.717966 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:05:04.717976 | orchestrator | 2025-06-22 20:05:04.717985 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2025-06-22 20:05:04.717995 | orchestrator | Sunday 22 June 2025 20:04:17 +0000 (0:00:01.693) 0:01:50.401 *********** 2025-06-22 20:05:04.718005 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2025-06-22 20:05:04.718066 | orchestrator | 2025-06-22 20:05:04.718101 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2025-06-22 20:05:04.718112 | orchestrator | Sunday 22 June 2025 20:04:28 +0000 (0:00:10.929) 0:02:01.330 *********** 2025-06-22 20:05:04.718121 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2025-06-22 20:05:04.718131 | orchestrator | 2025-06-22 20:05:04.718141 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2025-06-22 20:05:04.718151 | orchestrator | Sunday 22 June 2025 20:04:50 +0000 (0:00:22.031) 0:02:23.362 *********** 2025-06-22 20:05:04.718161 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2025-06-22 20:05:04.718171 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2025-06-22 20:05:04.718181 | orchestrator | 2025-06-22 20:05:04.718191 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2025-06-22 20:05:04.718201 | orchestrator | Sunday 22 June 2025 20:04:57 +0000 (0:00:06.512) 0:02:29.874 *********** 2025-06-22 20:05:04.718211 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:05:04.718220 | orchestrator | 
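The two endpoint entries just registered (identity service, RegionOne, internal https://api-int.testbed.osism.xyz:5000 and public https://api.testbed.osism.xyz:5000) can be verified afterwards with a few lines of openstacksdk. This is a verification sketch, not part of the job itself; it assumes a clouds.yaml profile named "testbed" with admin credentials:

    # List the identity service and its endpoints that the service-ks-register
    # tasks above should have created.
    import openstack

    conn = openstack.connect(cloud="testbed")

    for service in conn.identity.services():
        # expected after the tasks above: type "identity", name "keystone"
        print(service.type, service.name)

    for endpoint in conn.identity.endpoints():
        # expected: internal -> https://api-int.testbed.osism.xyz:5000
        #           public   -> https://api.testbed.osism.xyz:5000
        print(endpoint.region_id, endpoint.interface, endpoint.url)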
2025-06-22 20:05:04.718231 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2025-06-22 20:05:04.718240 | orchestrator | Sunday 22 June 2025 20:04:57 +0000 (0:00:00.305) 0:02:30.180 *********** 2025-06-22 20:05:04.718250 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:05:04.718260 | orchestrator | 2025-06-22 20:05:04.718269 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2025-06-22 20:05:04.718287 | orchestrator | Sunday 22 June 2025 20:04:57 +0000 (0:00:00.128) 0:02:30.309 *********** 2025-06-22 20:05:04.718298 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:05:04.718307 | orchestrator | 2025-06-22 20:05:04.718317 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2025-06-22 20:05:04.718327 | orchestrator | Sunday 22 June 2025 20:04:57 +0000 (0:00:00.169) 0:02:30.479 *********** 2025-06-22 20:05:04.718336 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:05:04.718346 | orchestrator | 2025-06-22 20:05:04.718356 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2025-06-22 20:05:04.718366 | orchestrator | Sunday 22 June 2025 20:04:58 +0000 (0:00:00.326) 0:02:30.805 *********** 2025-06-22 20:05:04.718375 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:05:04.718385 | orchestrator | 2025-06-22 20:05:04.718395 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-06-22 20:05:04.718404 | orchestrator | Sunday 22 June 2025 20:05:01 +0000 (0:00:03.397) 0:02:34.203 *********** 2025-06-22 20:05:04.718414 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:05:04.718424 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:05:04.718433 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:05:04.718443 | orchestrator | 2025-06-22 20:05:04.718453 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 20:05:04.718463 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-06-22 20:05:04.718481 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-06-22 20:05:04.718492 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-06-22 20:05:04.718502 | orchestrator | 2025-06-22 20:05:04.718512 | orchestrator | 2025-06-22 20:05:04.718522 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 20:05:04.718531 | orchestrator | Sunday 22 June 2025 20:05:02 +0000 (0:00:00.595) 0:02:34.798 *********** 2025-06-22 20:05:04.718541 | orchestrator | =============================================================================== 2025-06-22 20:05:04.718551 | orchestrator | service-ks-register : keystone | Creating services --------------------- 22.03s 2025-06-22 20:05:04.718560 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 19.05s 2025-06-22 20:05:04.718570 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 13.74s 2025-06-22 20:05:04.718580 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 10.93s 2025-06-22 20:05:04.718589 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 10.35s 2025-06-22 20:05:04.718599 | 
orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 8.86s 2025-06-22 20:05:04.718608 | orchestrator | keystone : Restart keystone container ----------------------------------- 7.85s 2025-06-22 20:05:04.718618 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 7.70s 2025-06-22 20:05:04.718628 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 6.51s 2025-06-22 20:05:04.718637 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 4.95s 2025-06-22 20:05:04.718651 | orchestrator | keystone : Creating default user role ----------------------------------- 3.40s 2025-06-22 20:05:04.718662 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.40s 2025-06-22 20:05:04.718671 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.11s 2025-06-22 20:05:04.718681 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.56s 2025-06-22 20:05:04.718690 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.39s 2025-06-22 20:05:04.718700 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.34s 2025-06-22 20:05:04.718710 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.24s 2025-06-22 20:05:04.718719 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.20s 2025-06-22 20:05:04.718729 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.92s 2025-06-22 20:05:04.718738 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.75s 2025-06-22 20:05:04.718748 | orchestrator | 2025-06-22 20:05:04 | INFO  | Task 1fe1d1a9-d01a-4de9-87ea-ee48d37d85d0 is in state STARTED 2025-06-22 20:05:04.718758 | orchestrator | 2025-06-22 20:05:04 | INFO  | Task 020077a1-37de-45b7-92d8-ea80594119de is in state STARTED 2025-06-22 20:05:04.718768 | orchestrator | 2025-06-22 20:05:04 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:05:07.753606 | orchestrator | 2025-06-22 20:05:07 | INFO  | Task e5c0f822-ce6c-444c-a2bf-938e94a4477c is in state STARTED 2025-06-22 20:05:07.753727 | orchestrator | 2025-06-22 20:05:07 | INFO  | Task 7d021911-b951-41a1-a942-7697e622caf3 is in state STARTED 2025-06-22 20:05:07.753744 | orchestrator | 2025-06-22 20:05:07 | INFO  | Task 4a0f0d72-b0bb-4180-a478-51f31e6edf32 is in state STARTED 2025-06-22 20:05:07.753756 | orchestrator | 2025-06-22 20:05:07 | INFO  | Task 1fe1d1a9-d01a-4de9-87ea-ee48d37d85d0 is in state STARTED 2025-06-22 20:05:07.753801 | orchestrator | 2025-06-22 20:05:07 | INFO  | Task 020077a1-37de-45b7-92d8-ea80594119de is in state STARTED 2025-06-22 20:05:07.753813 | orchestrator | 2025-06-22 20:05:07 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:05:10.786735 | orchestrator | 2025-06-22 20:05:10 | INFO  | Task e5c0f822-ce6c-444c-a2bf-938e94a4477c is in state STARTED 2025-06-22 20:05:10.789128 | orchestrator | 2025-06-22 20:05:10 | INFO  | Task 7d021911-b951-41a1-a942-7697e622caf3 is in state SUCCESS 2025-06-22 20:05:10.791008 | orchestrator | 2025-06-22 20:05:10 | INFO  | Task 4a0f0d72-b0bb-4180-a478-51f31e6edf32 is in state STARTED 2025-06-22 20:05:10.792582 | orchestrator | 2025-06-22 20:05:10 | INFO  | Task 1fe1d1a9-d01a-4de9-87ea-ee48d37d85d0 is in state STARTED 
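From here on the orchestrator output is dominated by a wait loop: the manager polls a handful of task IDs, reports each one's state, and sleeps one second before the next check until a task reaches SUCCESS. A rough sketch of that loop is shown below; the client object and its state() method are hypothetical stand-ins, not the actual osism manager API:

    # Shape of the polling loop whose output fills the following log lines:
    # report the state of every pending task, drop finished ones, and wait one
    # second between checks.
    import time

    def wait_for_tasks(client, task_ids, interval=1.0):
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = client.state(task_id)  # e.g. STARTED, SUCCESS
                print(f"Task {task_id} is in state {state}")
                if state in ("SUCCESS", "FAILURE"):
                    pending.discard(task_id)
            if pending:
                print(f"Wait {int(interval)} second(s) until the next check")
                time.sleep(interval)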
2025-06-22 20:05:10.794145 | orchestrator | 2025-06-22 20:05:10 | INFO  | Task 1725b711-fb49-400f-a5d8-e8e93530b2e9 is in state STARTED 2025-06-22 20:05:10.795422 | orchestrator | 2025-06-22 20:05:10 | INFO  | Task 020077a1-37de-45b7-92d8-ea80594119de is in state STARTED 2025-06-22 20:05:10.795620 | orchestrator | 2025-06-22 20:05:10 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:05:13.840610 | orchestrator | 2025-06-22 20:05:13 | INFO  | Task e5c0f822-ce6c-444c-a2bf-938e94a4477c is in state STARTED 2025-06-22 20:05:13.841464 | orchestrator | 2025-06-22 20:05:13 | INFO  | Task 4a0f0d72-b0bb-4180-a478-51f31e6edf32 is in state STARTED 2025-06-22 20:05:13.843615 | orchestrator | 2025-06-22 20:05:13 | INFO  | Task 1fe1d1a9-d01a-4de9-87ea-ee48d37d85d0 is in state STARTED 2025-06-22 20:05:13.845793 | orchestrator | 2025-06-22 20:05:13 | INFO  | Task 1725b711-fb49-400f-a5d8-e8e93530b2e9 is in state STARTED 2025-06-22 20:05:13.848169 | orchestrator | 2025-06-22 20:05:13 | INFO  | Task 020077a1-37de-45b7-92d8-ea80594119de is in state STARTED 2025-06-22 20:05:13.848399 | orchestrator | 2025-06-22 20:05:13 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:05:16.896438 | orchestrator | 2025-06-22 20:05:16 | INFO  | Task e5c0f822-ce6c-444c-a2bf-938e94a4477c is in state STARTED 2025-06-22 20:05:16.898611 | orchestrator | 2025-06-22 20:05:16 | INFO  | Task 4a0f0d72-b0bb-4180-a478-51f31e6edf32 is in state STARTED 2025-06-22 20:05:16.900191 | orchestrator | 2025-06-22 20:05:16 | INFO  | Task 1fe1d1a9-d01a-4de9-87ea-ee48d37d85d0 is in state STARTED 2025-06-22 20:05:16.902090 | orchestrator | 2025-06-22 20:05:16 | INFO  | Task 1725b711-fb49-400f-a5d8-e8e93530b2e9 is in state STARTED 2025-06-22 20:05:16.904920 | orchestrator | 2025-06-22 20:05:16 | INFO  | Task 020077a1-37de-45b7-92d8-ea80594119de is in state STARTED 2025-06-22 20:05:16.904948 | orchestrator | 2025-06-22 20:05:16 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:05:19.947878 | orchestrator | 2025-06-22 20:05:19 | INFO  | Task e5c0f822-ce6c-444c-a2bf-938e94a4477c is in state STARTED 2025-06-22 20:05:19.948970 | orchestrator | 2025-06-22 20:05:19 | INFO  | Task 4a0f0d72-b0bb-4180-a478-51f31e6edf32 is in state STARTED 2025-06-22 20:05:19.950707 | orchestrator | 2025-06-22 20:05:19 | INFO  | Task 1fe1d1a9-d01a-4de9-87ea-ee48d37d85d0 is in state STARTED 2025-06-22 20:05:19.952127 | orchestrator | 2025-06-22 20:05:19 | INFO  | Task 1725b711-fb49-400f-a5d8-e8e93530b2e9 is in state STARTED 2025-06-22 20:05:19.955261 | orchestrator | 2025-06-22 20:05:19 | INFO  | Task 020077a1-37de-45b7-92d8-ea80594119de is in state STARTED 2025-06-22 20:05:19.955284 | orchestrator | 2025-06-22 20:05:19 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:05:23.002911 | orchestrator | 2025-06-22 20:05:22 | INFO  | Task e5c0f822-ce6c-444c-a2bf-938e94a4477c is in state STARTED 2025-06-22 20:05:23.004880 | orchestrator | 2025-06-22 20:05:23 | INFO  | Task 4a0f0d72-b0bb-4180-a478-51f31e6edf32 is in state STARTED 2025-06-22 20:05:23.007871 | orchestrator | 2025-06-22 20:05:23 | INFO  | Task 1fe1d1a9-d01a-4de9-87ea-ee48d37d85d0 is in state STARTED 2025-06-22 20:05:23.009619 | orchestrator | 2025-06-22 20:05:23 | INFO  | Task 1725b711-fb49-400f-a5d8-e8e93530b2e9 is in state STARTED 2025-06-22 20:05:23.010866 | orchestrator | 2025-06-22 20:05:23 | INFO  | Task 020077a1-37de-45b7-92d8-ea80594119de is in state STARTED 2025-06-22 20:05:23.010889 | orchestrator | 2025-06-22 20:05:23 | INFO  | Wait 1 second(s) until the next check 
2025-06-22 20:05:26.061800 | orchestrator | 2025-06-22 20:05:26 | INFO  | Task e5c0f822-ce6c-444c-a2bf-938e94a4477c is in state STARTED 2025-06-22 20:05:26.064393 | orchestrator | 2025-06-22 20:05:26 | INFO  | Task 4a0f0d72-b0bb-4180-a478-51f31e6edf32 is in state STARTED 2025-06-22 20:05:26.068682 | orchestrator | 2025-06-22 20:05:26 | INFO  | Task 1fe1d1a9-d01a-4de9-87ea-ee48d37d85d0 is in state STARTED 2025-06-22 20:05:26.071206 | orchestrator | 2025-06-22 20:05:26 | INFO  | Task 1725b711-fb49-400f-a5d8-e8e93530b2e9 is in state STARTED 2025-06-22 20:05:26.073004 | orchestrator | 2025-06-22 20:05:26 | INFO  | Task 020077a1-37de-45b7-92d8-ea80594119de is in state STARTED 2025-06-22 20:05:26.073461 | orchestrator | 2025-06-22 20:05:26 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:05:29.121968 | orchestrator | 2025-06-22 20:05:29 | INFO  | Task e5c0f822-ce6c-444c-a2bf-938e94a4477c is in state STARTED 2025-06-22 20:05:29.123983 | orchestrator | 2025-06-22 20:05:29 | INFO  | Task 4a0f0d72-b0bb-4180-a478-51f31e6edf32 is in state STARTED 2025-06-22 20:05:29.128532 | orchestrator | 2025-06-22 20:05:29 | INFO  | Task 1fe1d1a9-d01a-4de9-87ea-ee48d37d85d0 is in state STARTED 2025-06-22 20:05:29.130707 | orchestrator | 2025-06-22 20:05:29 | INFO  | Task 1725b711-fb49-400f-a5d8-e8e93530b2e9 is in state STARTED 2025-06-22 20:05:29.132882 | orchestrator | 2025-06-22 20:05:29 | INFO  | Task 020077a1-37de-45b7-92d8-ea80594119de is in state STARTED 2025-06-22 20:05:29.132937 | orchestrator | 2025-06-22 20:05:29 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:05:32.179496 | orchestrator | 2025-06-22 20:05:32 | INFO  | Task e5c0f822-ce6c-444c-a2bf-938e94a4477c is in state STARTED 2025-06-22 20:05:32.180657 | orchestrator | 2025-06-22 20:05:32 | INFO  | Task 4a0f0d72-b0bb-4180-a478-51f31e6edf32 is in state STARTED 2025-06-22 20:05:32.182119 | orchestrator | 2025-06-22 20:05:32 | INFO  | Task 1fe1d1a9-d01a-4de9-87ea-ee48d37d85d0 is in state STARTED 2025-06-22 20:05:32.183483 | orchestrator | 2025-06-22 20:05:32 | INFO  | Task 1725b711-fb49-400f-a5d8-e8e93530b2e9 is in state STARTED 2025-06-22 20:05:32.186409 | orchestrator | 2025-06-22 20:05:32 | INFO  | Task 020077a1-37de-45b7-92d8-ea80594119de is in state STARTED 2025-06-22 20:05:32.186434 | orchestrator | 2025-06-22 20:05:32 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:05:35.229472 | orchestrator | 2025-06-22 20:05:35 | INFO  | Task e5c0f822-ce6c-444c-a2bf-938e94a4477c is in state STARTED 2025-06-22 20:05:35.231576 | orchestrator | 2025-06-22 20:05:35 | INFO  | Task 4a0f0d72-b0bb-4180-a478-51f31e6edf32 is in state STARTED 2025-06-22 20:05:35.233402 | orchestrator | 2025-06-22 20:05:35 | INFO  | Task 1fe1d1a9-d01a-4de9-87ea-ee48d37d85d0 is in state STARTED 2025-06-22 20:05:35.235235 | orchestrator | 2025-06-22 20:05:35 | INFO  | Task 1725b711-fb49-400f-a5d8-e8e93530b2e9 is in state STARTED 2025-06-22 20:05:35.236615 | orchestrator | 2025-06-22 20:05:35 | INFO  | Task 020077a1-37de-45b7-92d8-ea80594119de is in state STARTED 2025-06-22 20:05:35.236645 | orchestrator | 2025-06-22 20:05:35 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:05:38.265992 | orchestrator | 2025-06-22 20:05:38 | INFO  | Task e5c0f822-ce6c-444c-a2bf-938e94a4477c is in state STARTED 2025-06-22 20:05:38.266911 | orchestrator | 2025-06-22 20:05:38 | INFO  | Task 4a0f0d72-b0bb-4180-a478-51f31e6edf32 is in state STARTED 2025-06-22 20:05:38.268060 | orchestrator | 2025-06-22 20:05:38 | INFO  | Task 
1fe1d1a9-d01a-4de9-87ea-ee48d37d85d0 is in state STARTED 2025-06-22 20:05:38.268716 | orchestrator | 2025-06-22 20:05:38 | INFO  | Task 1725b711-fb49-400f-a5d8-e8e93530b2e9 is in state STARTED 2025-06-22 20:05:38.269653 | orchestrator | 2025-06-22 20:05:38 | INFO  | Task 020077a1-37de-45b7-92d8-ea80594119de is in state STARTED 2025-06-22 20:05:38.269681 | orchestrator | 2025-06-22 20:05:38 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:05:41.322463 | orchestrator | 2025-06-22 20:05:41 | INFO  | Task e5c0f822-ce6c-444c-a2bf-938e94a4477c is in state STARTED 2025-06-22 20:05:41.323785 | orchestrator | 2025-06-22 20:05:41 | INFO  | Task 4a0f0d72-b0bb-4180-a478-51f31e6edf32 is in state STARTED 2025-06-22 20:05:41.326956 | orchestrator | 2025-06-22 20:05:41 | INFO  | Task 1fe1d1a9-d01a-4de9-87ea-ee48d37d85d0 is in state STARTED 2025-06-22 20:05:41.329635 | orchestrator | 2025-06-22 20:05:41 | INFO  | Task 1725b711-fb49-400f-a5d8-e8e93530b2e9 is in state STARTED 2025-06-22 20:05:41.332357 | orchestrator | 2025-06-22 20:05:41 | INFO  | Task 020077a1-37de-45b7-92d8-ea80594119de is in state STARTED 2025-06-22 20:05:41.332808 | orchestrator | 2025-06-22 20:05:41 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:05:44.373395 | orchestrator | 2025-06-22 20:05:44 | INFO  | Task e5c0f822-ce6c-444c-a2bf-938e94a4477c is in state STARTED 2025-06-22 20:05:44.374417 | orchestrator | 2025-06-22 20:05:44 | INFO  | Task 4a0f0d72-b0bb-4180-a478-51f31e6edf32 is in state STARTED 2025-06-22 20:05:44.376026 | orchestrator | 2025-06-22 20:05:44 | INFO  | Task 1fe1d1a9-d01a-4de9-87ea-ee48d37d85d0 is in state STARTED 2025-06-22 20:05:44.376986 | orchestrator | 2025-06-22 20:05:44 | INFO  | Task 1725b711-fb49-400f-a5d8-e8e93530b2e9 is in state STARTED 2025-06-22 20:05:44.378183 | orchestrator | 2025-06-22 20:05:44 | INFO  | Task 020077a1-37de-45b7-92d8-ea80594119de is in state STARTED 2025-06-22 20:05:44.378205 | orchestrator | 2025-06-22 20:05:44 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:05:47.401841 | orchestrator | 2025-06-22 20:05:47 | INFO  | Task e5c0f822-ce6c-444c-a2bf-938e94a4477c is in state STARTED 2025-06-22 20:05:47.402402 | orchestrator | 2025-06-22 20:05:47 | INFO  | Task 4a0f0d72-b0bb-4180-a478-51f31e6edf32 is in state STARTED 2025-06-22 20:05:47.403275 | orchestrator | 2025-06-22 20:05:47 | INFO  | Task 1fe1d1a9-d01a-4de9-87ea-ee48d37d85d0 is in state STARTED 2025-06-22 20:05:47.403946 | orchestrator | 2025-06-22 20:05:47 | INFO  | Task 1725b711-fb49-400f-a5d8-e8e93530b2e9 is in state STARTED 2025-06-22 20:05:47.404604 | orchestrator | 2025-06-22 20:05:47 | INFO  | Task 020077a1-37de-45b7-92d8-ea80594119de is in state STARTED 2025-06-22 20:05:47.404624 | orchestrator | 2025-06-22 20:05:47 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:05:50.429663 | orchestrator | 2025-06-22 20:05:50 | INFO  | Task e5c0f822-ce6c-444c-a2bf-938e94a4477c is in state STARTED 2025-06-22 20:05:50.429736 | orchestrator | 2025-06-22 20:05:50 | INFO  | Task 4a0f0d72-b0bb-4180-a478-51f31e6edf32 is in state STARTED 2025-06-22 20:05:50.431545 | orchestrator | 2025-06-22 20:05:50 | INFO  | Task 1fe1d1a9-d01a-4de9-87ea-ee48d37d85d0 is in state STARTED 2025-06-22 20:05:50.431582 | orchestrator | 2025-06-22 20:05:50 | INFO  | Task 1725b711-fb49-400f-a5d8-e8e93530b2e9 is in state STARTED 2025-06-22 20:05:50.432152 | orchestrator | 2025-06-22 20:05:50 | INFO  | Task 020077a1-37de-45b7-92d8-ea80594119de is in state STARTED 2025-06-22 20:05:50.432409 | orchestrator | 2025-06-22 
20:05:50 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:05:53.468809 | orchestrator | 2025-06-22 20:05:53 | INFO  | Task e5c0f822-ce6c-444c-a2bf-938e94a4477c is in state STARTED 2025-06-22 20:05:53.470982 | orchestrator | 2025-06-22 20:05:53 | INFO  | Task 4a0f0d72-b0bb-4180-a478-51f31e6edf32 is in state STARTED 2025-06-22 20:05:53.471846 | orchestrator | 2025-06-22 20:05:53 | INFO  | Task 1fe1d1a9-d01a-4de9-87ea-ee48d37d85d0 is in state STARTED 2025-06-22 20:05:53.472775 | orchestrator | 2025-06-22 20:05:53 | INFO  | Task 1725b711-fb49-400f-a5d8-e8e93530b2e9 is in state STARTED 2025-06-22 20:05:53.473637 | orchestrator | 2025-06-22 20:05:53 | INFO  | Task 020077a1-37de-45b7-92d8-ea80594119de is in state STARTED 2025-06-22 20:05:53.473679 | orchestrator | 2025-06-22 20:05:53 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:05:56.511238 | orchestrator | 2025-06-22 20:05:56 | INFO  | Task e5c0f822-ce6c-444c-a2bf-938e94a4477c is in state STARTED 2025-06-22 20:05:56.511340 | orchestrator | 2025-06-22 20:05:56 | INFO  | Task 4a0f0d72-b0bb-4180-a478-51f31e6edf32 is in state STARTED 2025-06-22 20:05:56.512312 | orchestrator | 2025-06-22 20:05:56 | INFO  | Task 1fe1d1a9-d01a-4de9-87ea-ee48d37d85d0 is in state STARTED 2025-06-22 20:05:56.513412 | orchestrator | 2025-06-22 20:05:56 | INFO  | Task 1725b711-fb49-400f-a5d8-e8e93530b2e9 is in state STARTED 2025-06-22 20:05:56.514478 | orchestrator | 2025-06-22 20:05:56 | INFO  | Task 020077a1-37de-45b7-92d8-ea80594119de is in state STARTED 2025-06-22 20:05:56.514505 | orchestrator | 2025-06-22 20:05:56 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:05:59.562732 | orchestrator | 2025-06-22 20:05:59 | INFO  | Task e5c0f822-ce6c-444c-a2bf-938e94a4477c is in state STARTED 2025-06-22 20:05:59.564251 | orchestrator | 2025-06-22 20:05:59 | INFO  | Task 4a0f0d72-b0bb-4180-a478-51f31e6edf32 is in state STARTED 2025-06-22 20:05:59.566092 | orchestrator | 2025-06-22 20:05:59 | INFO  | Task 1fe1d1a9-d01a-4de9-87ea-ee48d37d85d0 is in state STARTED 2025-06-22 20:05:59.567471 | orchestrator | 2025-06-22 20:05:59 | INFO  | Task 1725b711-fb49-400f-a5d8-e8e93530b2e9 is in state STARTED 2025-06-22 20:05:59.569082 | orchestrator | 2025-06-22 20:05:59 | INFO  | Task 020077a1-37de-45b7-92d8-ea80594119de is in state STARTED 2025-06-22 20:05:59.569717 | orchestrator | 2025-06-22 20:05:59 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:06:02.602945 | orchestrator | 2025-06-22 20:06:02 | INFO  | Task e5c0f822-ce6c-444c-a2bf-938e94a4477c is in state STARTED 2025-06-22 20:06:02.603210 | orchestrator | 2025-06-22 20:06:02 | INFO  | Task 4a0f0d72-b0bb-4180-a478-51f31e6edf32 is in state STARTED 2025-06-22 20:06:02.603968 | orchestrator | 2025-06-22 20:06:02 | INFO  | Task 1fe1d1a9-d01a-4de9-87ea-ee48d37d85d0 is in state STARTED 2025-06-22 20:06:02.604801 | orchestrator | 2025-06-22 20:06:02 | INFO  | Task 1725b711-fb49-400f-a5d8-e8e93530b2e9 is in state STARTED 2025-06-22 20:06:02.605416 | orchestrator | 2025-06-22 20:06:02 | INFO  | Task 020077a1-37de-45b7-92d8-ea80594119de is in state STARTED 2025-06-22 20:06:02.605444 | orchestrator | 2025-06-22 20:06:02 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:06:05.629329 | orchestrator | 2025-06-22 20:06:05 | INFO  | Task e5c0f822-ce6c-444c-a2bf-938e94a4477c is in state STARTED 2025-06-22 20:06:05.629797 | orchestrator | 2025-06-22 20:06:05 | INFO  | Task 4a0f0d72-b0bb-4180-a478-51f31e6edf32 is in state STARTED 2025-06-22 20:06:05.630753 | orchestrator | 2025-06-22 
20:06:05 | INFO  | Task 1fe1d1a9-d01a-4de9-87ea-ee48d37d85d0 is in state STARTED 2025-06-22 20:06:05.631295 | orchestrator | 2025-06-22 20:06:05 | INFO  | Task 1725b711-fb49-400f-a5d8-e8e93530b2e9 is in state STARTED 2025-06-22 20:06:05.632128 | orchestrator | 2025-06-22 20:06:05 | INFO  | Task 020077a1-37de-45b7-92d8-ea80594119de is in state STARTED 2025-06-22 20:06:05.634163 | orchestrator | 2025-06-22 20:06:05 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:06:08.675484 | orchestrator | 2025-06-22 20:06:08 | INFO  | Task e5c0f822-ce6c-444c-a2bf-938e94a4477c is in state STARTED 2025-06-22 20:06:08.675776 | orchestrator | 2025-06-22 20:06:08 | INFO  | Task 4a0f0d72-b0bb-4180-a478-51f31e6edf32 is in state STARTED 2025-06-22 20:06:08.676598 | orchestrator | 2025-06-22 20:06:08 | INFO  | Task 1fe1d1a9-d01a-4de9-87ea-ee48d37d85d0 is in state STARTED 2025-06-22 20:06:08.677209 | orchestrator | 2025-06-22 20:06:08 | INFO  | Task 1725b711-fb49-400f-a5d8-e8e93530b2e9 is in state STARTED 2025-06-22 20:06:08.678226 | orchestrator | 2025-06-22 20:06:08 | INFO  | Task 020077a1-37de-45b7-92d8-ea80594119de is in state STARTED 2025-06-22 20:06:08.678316 | orchestrator | 2025-06-22 20:06:08 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:06:11.711200 | orchestrator | 2025-06-22 20:06:11 | INFO  | Task e5c0f822-ce6c-444c-a2bf-938e94a4477c is in state STARTED 2025-06-22 20:06:11.711651 | orchestrator | 2025-06-22 20:06:11 | INFO  | Task 4a0f0d72-b0bb-4180-a478-51f31e6edf32 is in state STARTED 2025-06-22 20:06:11.712611 | orchestrator | 2025-06-22 20:06:11 | INFO  | Task 1fe1d1a9-d01a-4de9-87ea-ee48d37d85d0 is in state STARTED 2025-06-22 20:06:11.713355 | orchestrator | 2025-06-22 20:06:11 | INFO  | Task 1725b711-fb49-400f-a5d8-e8e93530b2e9 is in state STARTED 2025-06-22 20:06:11.714359 | orchestrator | 2025-06-22 20:06:11 | INFO  | Task 020077a1-37de-45b7-92d8-ea80594119de is in state STARTED 2025-06-22 20:06:11.714390 | orchestrator | 2025-06-22 20:06:11 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:06:14.749853 | orchestrator | 2025-06-22 20:06:14 | INFO  | Task e5c0f822-ce6c-444c-a2bf-938e94a4477c is in state STARTED 2025-06-22 20:06:14.756474 | orchestrator | 2025-06-22 20:06:14 | INFO  | Task 4a0f0d72-b0bb-4180-a478-51f31e6edf32 is in state STARTED 2025-06-22 20:06:14.757210 | orchestrator | 2025-06-22 20:06:14 | INFO  | Task 1fe1d1a9-d01a-4de9-87ea-ee48d37d85d0 is in state STARTED 2025-06-22 20:06:14.758420 | orchestrator | 2025-06-22 20:06:14 | INFO  | Task 1725b711-fb49-400f-a5d8-e8e93530b2e9 is in state STARTED 2025-06-22 20:06:14.760368 | orchestrator | 2025-06-22 20:06:14 | INFO  | Task 020077a1-37de-45b7-92d8-ea80594119de is in state STARTED 2025-06-22 20:06:14.760426 | orchestrator | 2025-06-22 20:06:14 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:06:17.796577 | orchestrator | 2025-06-22 20:06:17 | INFO  | Task e5c0f822-ce6c-444c-a2bf-938e94a4477c is in state STARTED 2025-06-22 20:06:17.796786 | orchestrator | 2025-06-22 20:06:17 | INFO  | Task 4a0f0d72-b0bb-4180-a478-51f31e6edf32 is in state STARTED 2025-06-22 20:06:17.797245 | orchestrator | 2025-06-22 20:06:17 | INFO  | Task 1fe1d1a9-d01a-4de9-87ea-ee48d37d85d0 is in state STARTED 2025-06-22 20:06:17.797759 | orchestrator | 2025-06-22 20:06:17 | INFO  | Task 1725b711-fb49-400f-a5d8-e8e93530b2e9 is in state STARTED 2025-06-22 20:06:17.798288 | orchestrator | 2025-06-22 20:06:17 | INFO  | Task 020077a1-37de-45b7-92d8-ea80594119de is in state STARTED 2025-06-22 20:06:17.798314 | 
orchestrator | 2025-06-22 20:06:17 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:06:20.830637 | orchestrator | 2025-06-22 20:06:20 | INFO  | Task e5c0f822-ce6c-444c-a2bf-938e94a4477c is in state STARTED 2025-06-22 20:06:20.830944 | orchestrator | 2025-06-22 20:06:20 | INFO  | Task 4a0f0d72-b0bb-4180-a478-51f31e6edf32 is in state STARTED 2025-06-22 20:06:20.831685 | orchestrator | 2025-06-22 20:06:20 | INFO  | Task 1fe1d1a9-d01a-4de9-87ea-ee48d37d85d0 is in state STARTED 2025-06-22 20:06:20.832617 | orchestrator | 2025-06-22 20:06:20 | INFO  | Task 1725b711-fb49-400f-a5d8-e8e93530b2e9 is in state STARTED 2025-06-22 20:06:20.833309 | orchestrator | 2025-06-22 20:06:20 | INFO  | Task 020077a1-37de-45b7-92d8-ea80594119de is in state STARTED 2025-06-22 20:06:20.833428 | orchestrator | 2025-06-22 20:06:20 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:06:23.861642 | orchestrator | 2025-06-22 20:06:23 | INFO  | Task e5c0f822-ce6c-444c-a2bf-938e94a4477c is in state STARTED 2025-06-22 20:06:23.862135 | orchestrator | 2025-06-22 20:06:23 | INFO  | Task 4a0f0d72-b0bb-4180-a478-51f31e6edf32 is in state STARTED 2025-06-22 20:06:23.862803 | orchestrator | 2025-06-22 20:06:23 | INFO  | Task 1fe1d1a9-d01a-4de9-87ea-ee48d37d85d0 is in state STARTED 2025-06-22 20:06:23.863556 | orchestrator | 2025-06-22 20:06:23 | INFO  | Task 1725b711-fb49-400f-a5d8-e8e93530b2e9 is in state STARTED 2025-06-22 20:06:23.864392 | orchestrator | 2025-06-22 20:06:23 | INFO  | Task 020077a1-37de-45b7-92d8-ea80594119de is in state STARTED 2025-06-22 20:06:23.864477 | orchestrator | 2025-06-22 20:06:23 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:06:26.890427 | orchestrator | 2025-06-22 20:06:26 | INFO  | Task e5c0f822-ce6c-444c-a2bf-938e94a4477c is in state STARTED 2025-06-22 20:06:26.890766 | orchestrator | 2025-06-22 20:06:26 | INFO  | Task 4a0f0d72-b0bb-4180-a478-51f31e6edf32 is in state STARTED 2025-06-22 20:06:26.891467 | orchestrator | 2025-06-22 20:06:26 | INFO  | Task 1fe1d1a9-d01a-4de9-87ea-ee48d37d85d0 is in state STARTED 2025-06-22 20:06:26.892311 | orchestrator | 2025-06-22 20:06:26 | INFO  | Task 1725b711-fb49-400f-a5d8-e8e93530b2e9 is in state STARTED 2025-06-22 20:06:26.892829 | orchestrator | 2025-06-22 20:06:26 | INFO  | Task 020077a1-37de-45b7-92d8-ea80594119de is in state STARTED 2025-06-22 20:06:26.892925 | orchestrator | 2025-06-22 20:06:26 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:06:29.921754 | orchestrator | 2025-06-22 20:06:29 | INFO  | Task e5c0f822-ce6c-444c-a2bf-938e94a4477c is in state STARTED 2025-06-22 20:06:29.922581 | orchestrator | 2025-06-22 20:06:29 | INFO  | Task 4a0f0d72-b0bb-4180-a478-51f31e6edf32 is in state STARTED 2025-06-22 20:06:29.923607 | orchestrator | 2025-06-22 20:06:29 | INFO  | Task 1fe1d1a9-d01a-4de9-87ea-ee48d37d85d0 is in state STARTED 2025-06-22 20:06:29.924281 | orchestrator | 2025-06-22 20:06:29 | INFO  | Task 1725b711-fb49-400f-a5d8-e8e93530b2e9 is in state STARTED 2025-06-22 20:06:29.925322 | orchestrator | 2025-06-22 20:06:29 | INFO  | Task 020077a1-37de-45b7-92d8-ea80594119de is in state STARTED 2025-06-22 20:06:29.925346 | orchestrator | 2025-06-22 20:06:29 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:06:32.955717 | orchestrator | 2025-06-22 20:06:32 | INFO  | Task e5c0f822-ce6c-444c-a2bf-938e94a4477c is in state STARTED 2025-06-22 20:06:32.956523 | orchestrator | 2025-06-22 20:06:32 | INFO  | Task cf53afaf-4f09-4b31-b2a3-9e2430ebb015 is in state STARTED 2025-06-22 20:06:32.957349 | 
orchestrator | 2025-06-22 20:06:32 | INFO  | Task 4a0f0d72-b0bb-4180-a478-51f31e6edf32 is in state STARTED 2025-06-22 20:06:32.958183 | orchestrator | 2025-06-22 20:06:32 | INFO  | Task 1fe1d1a9-d01a-4de9-87ea-ee48d37d85d0 is in state STARTED 2025-06-22 20:06:32.961308 | orchestrator | 2025-06-22 20:06:32 | INFO  | Task 1725b711-fb49-400f-a5d8-e8e93530b2e9 is in state STARTED 2025-06-22 20:06:32.963269 | orchestrator | 2025-06-22 20:06:32 | INFO  | Task 020077a1-37de-45b7-92d8-ea80594119de is in state STARTED 2025-06-22 20:06:32.963300 | orchestrator | 2025-06-22 20:06:32 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:06:35.990326 | orchestrator | 2025-06-22 20:06:35 | INFO  | Task e5c0f822-ce6c-444c-a2bf-938e94a4477c is in state STARTED 2025-06-22 20:06:35.990744 | orchestrator | 2025-06-22 20:06:35 | INFO  | Task cf53afaf-4f09-4b31-b2a3-9e2430ebb015 is in state STARTED 2025-06-22 20:06:35.991480 | orchestrator | 2025-06-22 20:06:35 | INFO  | Task 4a0f0d72-b0bb-4180-a478-51f31e6edf32 is in state STARTED 2025-06-22 20:06:35.992253 | orchestrator | 2025-06-22 20:06:35 | INFO  | Task 1fe1d1a9-d01a-4de9-87ea-ee48d37d85d0 is in state STARTED 2025-06-22 20:06:35.993170 | orchestrator | 2025-06-22 20:06:35 | INFO  | Task 1725b711-fb49-400f-a5d8-e8e93530b2e9 is in state STARTED 2025-06-22 20:06:35.998700 | orchestrator | 2025-06-22 20:06:35 | INFO  | Task 020077a1-37de-45b7-92d8-ea80594119de is in state STARTED 2025-06-22 20:06:35.998766 | orchestrator | 2025-06-22 20:06:35 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:06:39.024355 | orchestrator | 2025-06-22 20:06:39 | INFO  | Task e5c0f822-ce6c-444c-a2bf-938e94a4477c is in state STARTED 2025-06-22 20:06:39.025562 | orchestrator | 2025-06-22 20:06:39 | INFO  | Task cf53afaf-4f09-4b31-b2a3-9e2430ebb015 is in state STARTED 2025-06-22 20:06:39.026088 | orchestrator | 2025-06-22 20:06:39 | INFO  | Task 4a0f0d72-b0bb-4180-a478-51f31e6edf32 is in state STARTED 2025-06-22 20:06:39.026823 | orchestrator | 2025-06-22 20:06:39 | INFO  | Task 1fe1d1a9-d01a-4de9-87ea-ee48d37d85d0 is in state STARTED 2025-06-22 20:06:39.029401 | orchestrator | 2025-06-22 20:06:39 | INFO  | Task 1725b711-fb49-400f-a5d8-e8e93530b2e9 is in state STARTED 2025-06-22 20:06:39.029983 | orchestrator | 2025-06-22 20:06:39 | INFO  | Task 020077a1-37de-45b7-92d8-ea80594119de is in state STARTED 2025-06-22 20:06:39.030122 | orchestrator | 2025-06-22 20:06:39 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:06:42.064740 | orchestrator | 2025-06-22 20:06:42 | INFO  | Task e5c0f822-ce6c-444c-a2bf-938e94a4477c is in state STARTED 2025-06-22 20:06:42.065125 | orchestrator | 2025-06-22 20:06:42 | INFO  | Task cf53afaf-4f09-4b31-b2a3-9e2430ebb015 is in state STARTED 2025-06-22 20:06:42.067216 | orchestrator | 2025-06-22 20:06:42 | INFO  | Task 4a0f0d72-b0bb-4180-a478-51f31e6edf32 is in state STARTED 2025-06-22 20:06:42.067799 | orchestrator | 2025-06-22 20:06:42 | INFO  | Task 1fe1d1a9-d01a-4de9-87ea-ee48d37d85d0 is in state STARTED 2025-06-22 20:06:42.069791 | orchestrator | 2025-06-22 20:06:42 | INFO  | Task 1725b711-fb49-400f-a5d8-e8e93530b2e9 is in state STARTED 2025-06-22 20:06:42.070519 | orchestrator | 2025-06-22 20:06:42 | INFO  | Task 020077a1-37de-45b7-92d8-ea80594119de is in state STARTED 2025-06-22 20:06:42.070631 | orchestrator | 2025-06-22 20:06:42 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:06:45.099168 | orchestrator | 2025-06-22 20:06:45 | INFO  | Task e5c0f822-ce6c-444c-a2bf-938e94a4477c is in state STARTED 
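[Editor's note] The repeated "Task <uuid> is in state STARTED" / "Wait 1 second(s) until the next check" lines above are the osism CLI polling its background deployment tasks until each one reports a terminal state. The snippet below is only a minimal, illustrative Python sketch of such a wait loop; it assumes the task IDs are Celery task IDs and that a Celery `app` object is available (the names `app` and `wait_for_tasks` are placeholders, not the actual OSISM implementation).

    # Illustrative wait loop, assuming Celery task IDs and an existing
    # Celery `app` instance; names are placeholders, not OSISM code.
    import time
    from celery.result import AsyncResult

    def wait_for_tasks(app, task_ids, interval=1):
        """Poll each task ID until no task is left in a non-terminal state."""
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = AsyncResult(task_id, app=app).state
                print(f"Task {task_id} is in state {state}")
                if state in ("SUCCESS", "FAILURE"):
                    pending.discard(task_id)
            if pending:
                print(f"Wait {interval} second(s) until the next check")
                time.sleep(interval)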
2025-06-22 20:06:45.100264 | orchestrator | 2025-06-22 20:06:45 | INFO  | Task cf53afaf-4f09-4b31-b2a3-9e2430ebb015 is in state STARTED 2025-06-22 20:06:45.101390 | orchestrator | 2025-06-22 20:06:45 | INFO  | Task 4a0f0d72-b0bb-4180-a478-51f31e6edf32 is in state STARTED 2025-06-22 20:06:45.108858 | orchestrator | 2025-06-22 20:06:45 | INFO  | Task 1fe1d1a9-d01a-4de9-87ea-ee48d37d85d0 is in state STARTED 2025-06-22 20:06:45.111291 | orchestrator | 2025-06-22 20:06:45 | INFO  | Task 1725b711-fb49-400f-a5d8-e8e93530b2e9 is in state STARTED 2025-06-22 20:06:45.111359 | orchestrator | 2025-06-22 20:06:45 | INFO  | Task 020077a1-37de-45b7-92d8-ea80594119de is in state STARTED 2025-06-22 20:06:45.111373 | orchestrator | 2025-06-22 20:06:45 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:06:48.145095 | orchestrator | 2025-06-22 20:06:48 | INFO  | Task e5c0f822-ce6c-444c-a2bf-938e94a4477c is in state STARTED 2025-06-22 20:06:48.145651 | orchestrator | 2025-06-22 20:06:48 | INFO  | Task cf53afaf-4f09-4b31-b2a3-9e2430ebb015 is in state STARTED 2025-06-22 20:06:48.146144 | orchestrator | 2025-06-22 20:06:48 | INFO  | Task 4a0f0d72-b0bb-4180-a478-51f31e6edf32 is in state STARTED 2025-06-22 20:06:48.146862 | orchestrator | 2025-06-22 20:06:48 | INFO  | Task 1fe1d1a9-d01a-4de9-87ea-ee48d37d85d0 is in state STARTED 2025-06-22 20:06:48.147352 | orchestrator | 2025-06-22 20:06:48 | INFO  | Task 1725b711-fb49-400f-a5d8-e8e93530b2e9 is in state SUCCESS 2025-06-22 20:06:48.147655 | orchestrator | 2025-06-22 20:06:48.147680 | orchestrator | 2025-06-22 20:06:48.147692 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2025-06-22 20:06:48.147704 | orchestrator | 2025-06-22 20:06:48.147716 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2025-06-22 20:06:48.147727 | orchestrator | Sunday 22 June 2025 20:04:12 +0000 (0:00:00.214) 0:00:00.214 *********** 2025-06-22 20:06:48.147739 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2025-06-22 20:06:48.147751 | orchestrator | 2025-06-22 20:06:48.147762 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2025-06-22 20:06:48.147774 | orchestrator | Sunday 22 June 2025 20:04:12 +0000 (0:00:00.214) 0:00:00.428 *********** 2025-06-22 20:06:48.147786 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2025-06-22 20:06:48.147797 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2025-06-22 20:06:48.147808 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2025-06-22 20:06:48.147851 | orchestrator | 2025-06-22 20:06:48.147863 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2025-06-22 20:06:48.147874 | orchestrator | Sunday 22 June 2025 20:04:14 +0000 (0:00:01.124) 0:00:01.553 *********** 2025-06-22 20:06:48.147886 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2025-06-22 20:06:48.147897 | orchestrator | 2025-06-22 20:06:48.147908 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2025-06-22 20:06:48.147919 | orchestrator | Sunday 22 June 2025 20:04:15 +0000 (0:00:01.142) 0:00:02.695 *********** 2025-06-22 20:06:48.147931 | orchestrator | changed: [testbed-manager] 2025-06-22 
20:06:48.147966 | orchestrator | 2025-06-22 20:06:48.147978 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2025-06-22 20:06:48.147990 | orchestrator | Sunday 22 June 2025 20:04:16 +0000 (0:00:00.994) 0:00:03.690 *********** 2025-06-22 20:06:48.148001 | orchestrator | changed: [testbed-manager] 2025-06-22 20:06:48.148012 | orchestrator | 2025-06-22 20:06:48.148023 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2025-06-22 20:06:48.148065 | orchestrator | Sunday 22 June 2025 20:04:17 +0000 (0:00:00.870) 0:00:04.561 *********** 2025-06-22 20:06:48.148077 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 2025-06-22 20:06:48.148088 | orchestrator | ok: [testbed-manager] 2025-06-22 20:06:48.148099 | orchestrator | 2025-06-22 20:06:48.148111 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2025-06-22 20:06:48.148122 | orchestrator | Sunday 22 June 2025 20:04:58 +0000 (0:00:41.194) 0:00:45.755 *********** 2025-06-22 20:06:48.148133 | orchestrator | changed: [testbed-manager] => (item=ceph) 2025-06-22 20:06:48.148144 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2025-06-22 20:06:48.148191 | orchestrator | changed: [testbed-manager] => (item=rados) 2025-06-22 20:06:48.148203 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2025-06-22 20:06:48.148213 | orchestrator | changed: [testbed-manager] => (item=rbd) 2025-06-22 20:06:48.148224 | orchestrator | 2025-06-22 20:06:48.148235 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2025-06-22 20:06:48.148247 | orchestrator | Sunday 22 June 2025 20:05:02 +0000 (0:00:04.104) 0:00:49.860 *********** 2025-06-22 20:06:48.148257 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2025-06-22 20:06:48.148268 | orchestrator | 2025-06-22 20:06:48.148280 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2025-06-22 20:06:48.148293 | orchestrator | Sunday 22 June 2025 20:05:02 +0000 (0:00:00.482) 0:00:50.342 *********** 2025-06-22 20:06:48.148305 | orchestrator | skipping: [testbed-manager] 2025-06-22 20:06:48.148318 | orchestrator | 2025-06-22 20:06:48.148331 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2025-06-22 20:06:48.148343 | orchestrator | Sunday 22 June 2025 20:05:03 +0000 (0:00:00.118) 0:00:50.461 *********** 2025-06-22 20:06:48.148355 | orchestrator | skipping: [testbed-manager] 2025-06-22 20:06:48.148368 | orchestrator | 2025-06-22 20:06:48.148380 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] ******* 2025-06-22 20:06:48.148393 | orchestrator | Sunday 22 June 2025 20:05:03 +0000 (0:00:00.290) 0:00:50.752 *********** 2025-06-22 20:06:48.148405 | orchestrator | changed: [testbed-manager] 2025-06-22 20:06:48.148418 | orchestrator | 2025-06-22 20:06:48.148430 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2025-06-22 20:06:48.148443 | orchestrator | Sunday 22 June 2025 20:05:05 +0000 (0:00:02.041) 0:00:52.793 *********** 2025-06-22 20:06:48.148455 | orchestrator | changed: [testbed-manager] 2025-06-22 20:06:48.148467 | orchestrator | 2025-06-22 20:06:48.148480 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2025-06-22 
20:06:48.148504 | orchestrator | Sunday 22 June 2025 20:05:06 +0000 (0:00:00.931) 0:00:53.725 *********** 2025-06-22 20:06:48.148517 | orchestrator | changed: [testbed-manager] 2025-06-22 20:06:48.148529 | orchestrator | 2025-06-22 20:06:48.148543 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2025-06-22 20:06:48.148555 | orchestrator | Sunday 22 June 2025 20:05:06 +0000 (0:00:00.642) 0:00:54.368 *********** 2025-06-22 20:06:48.148568 | orchestrator | ok: [testbed-manager] => (item=ceph) 2025-06-22 20:06:48.148580 | orchestrator | ok: [testbed-manager] => (item=rados) 2025-06-22 20:06:48.148593 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2025-06-22 20:06:48.148605 | orchestrator | ok: [testbed-manager] => (item=rbd) 2025-06-22 20:06:48.148617 | orchestrator | 2025-06-22 20:06:48.148631 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 20:06:48.148649 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 20:06:48.148668 | orchestrator | 2025-06-22 20:06:48.148680 | orchestrator | 2025-06-22 20:06:48.148703 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 20:06:48.148714 | orchestrator | Sunday 22 June 2025 20:05:08 +0000 (0:00:01.423) 0:00:55.791 *********** 2025-06-22 20:06:48.148726 | orchestrator | =============================================================================== 2025-06-22 20:06:48.148736 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 41.19s 2025-06-22 20:06:48.148747 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.10s 2025-06-22 20:06:48.148758 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 2.04s 2025-06-22 20:06:48.148769 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.42s 2025-06-22 20:06:48.148779 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.14s 2025-06-22 20:06:48.148790 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.12s 2025-06-22 20:06:48.148809 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.99s 2025-06-22 20:06:48.148820 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.93s 2025-06-22 20:06:48.148831 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.87s 2025-06-22 20:06:48.148842 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.64s 2025-06-22 20:06:48.148853 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.48s 2025-06-22 20:06:48.148863 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.29s 2025-06-22 20:06:48.148874 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.21s 2025-06-22 20:06:48.148885 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.12s 2025-06-22 20:06:48.148896 | orchestrator | 2025-06-22 20:06:48.148906 | orchestrator | 2025-06-22 20:06:48.148917 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2025-06-22 20:06:48.148928 | orchestrator | 
2025-06-22 20:06:48.148939 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2025-06-22 20:06:48.148950 | orchestrator | Sunday 22 June 2025 20:05:11 +0000 (0:00:00.289) 0:00:00.289 *********** 2025-06-22 20:06:48.148961 | orchestrator | changed: [testbed-manager] 2025-06-22 20:06:48.148971 | orchestrator | 2025-06-22 20:06:48.148982 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2025-06-22 20:06:48.148993 | orchestrator | Sunday 22 June 2025 20:05:13 +0000 (0:00:01.988) 0:00:02.277 *********** 2025-06-22 20:06:48.149004 | orchestrator | changed: [testbed-manager] 2025-06-22 20:06:48.149014 | orchestrator | 2025-06-22 20:06:48.149025 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2025-06-22 20:06:48.149058 | orchestrator | Sunday 22 June 2025 20:05:14 +0000 (0:00:00.912) 0:00:03.190 *********** 2025-06-22 20:06:48.149070 | orchestrator | changed: [testbed-manager] 2025-06-22 20:06:48.149081 | orchestrator | 2025-06-22 20:06:48.149092 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2025-06-22 20:06:48.149103 | orchestrator | Sunday 22 June 2025 20:05:15 +0000 (0:00:00.969) 0:00:04.160 *********** 2025-06-22 20:06:48.149113 | orchestrator | changed: [testbed-manager] 2025-06-22 20:06:48.149124 | orchestrator | 2025-06-22 20:06:48.149135 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2025-06-22 20:06:48.149146 | orchestrator | Sunday 22 June 2025 20:05:16 +0000 (0:00:01.133) 0:00:05.293 *********** 2025-06-22 20:06:48.149157 | orchestrator | changed: [testbed-manager] 2025-06-22 20:06:48.149168 | orchestrator | 2025-06-22 20:06:48.149179 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2025-06-22 20:06:48.149190 | orchestrator | Sunday 22 June 2025 20:05:17 +0000 (0:00:01.013) 0:00:06.306 *********** 2025-06-22 20:06:48.149201 | orchestrator | changed: [testbed-manager] 2025-06-22 20:06:48.149212 | orchestrator | 2025-06-22 20:06:48.149223 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2025-06-22 20:06:48.149234 | orchestrator | Sunday 22 June 2025 20:05:18 +0000 (0:00:01.022) 0:00:07.329 *********** 2025-06-22 20:06:48.149245 | orchestrator | changed: [testbed-manager] 2025-06-22 20:06:48.149256 | orchestrator | 2025-06-22 20:06:48.149267 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2025-06-22 20:06:48.149278 | orchestrator | Sunday 22 June 2025 20:05:20 +0000 (0:00:02.051) 0:00:09.381 *********** 2025-06-22 20:06:48.149289 | orchestrator | changed: [testbed-manager] 2025-06-22 20:06:48.149300 | orchestrator | 2025-06-22 20:06:48.149311 | orchestrator | TASK [Create admin user] ******************************************************* 2025-06-22 20:06:48.149322 | orchestrator | Sunday 22 June 2025 20:05:22 +0000 (0:00:01.133) 0:00:10.514 *********** 2025-06-22 20:06:48.149333 | orchestrator | changed: [testbed-manager] 2025-06-22 20:06:48.149343 | orchestrator | 2025-06-22 20:06:48.149354 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2025-06-22 20:06:48.149371 | orchestrator | Sunday 22 June 2025 20:06:20 +0000 (0:00:58.492) 0:01:09.007 *********** 2025-06-22 20:06:48.149389 | orchestrator | skipping: [testbed-manager] 2025-06-22 20:06:48.149399 
| orchestrator | 2025-06-22 20:06:48.149411 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-06-22 20:06:48.149422 | orchestrator | 2025-06-22 20:06:48.149433 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-06-22 20:06:48.149444 | orchestrator | Sunday 22 June 2025 20:06:20 +0000 (0:00:00.140) 0:01:09.148 *********** 2025-06-22 20:06:48.149455 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:06:48.149465 | orchestrator | 2025-06-22 20:06:48.149476 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-06-22 20:06:48.149488 | orchestrator | 2025-06-22 20:06:48.149498 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-06-22 20:06:48.149509 | orchestrator | Sunday 22 June 2025 20:06:32 +0000 (0:00:11.604) 0:01:20.753 *********** 2025-06-22 20:06:48.149520 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:06:48.149531 | orchestrator | 2025-06-22 20:06:48.149543 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2025-06-22 20:06:48.149554 | orchestrator | 2025-06-22 20:06:48.149571 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2025-06-22 20:06:48.149583 | orchestrator | Sunday 22 June 2025 20:06:33 +0000 (0:00:01.285) 0:01:22.038 *********** 2025-06-22 20:06:48.149593 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:06:48.149604 | orchestrator | 2025-06-22 20:06:48.149615 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 20:06:48.149626 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-06-22 20:06:48.149637 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 20:06:48.149648 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 20:06:48.149659 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 20:06:48.149670 | orchestrator | 2025-06-22 20:06:48.149681 | orchestrator | 2025-06-22 20:06:48.149692 | orchestrator | 2025-06-22 20:06:48.149703 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 20:06:48.149714 | orchestrator | Sunday 22 June 2025 20:06:44 +0000 (0:00:11.215) 0:01:33.253 *********** 2025-06-22 20:06:48.149725 | orchestrator | =============================================================================== 2025-06-22 20:06:48.149736 | orchestrator | Create admin user ------------------------------------------------------ 58.49s 2025-06-22 20:06:48.149747 | orchestrator | Restart ceph manager service ------------------------------------------- 24.11s 2025-06-22 20:06:48.149758 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.05s 2025-06-22 20:06:48.149768 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.99s 2025-06-22 20:06:48.149779 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.13s 2025-06-22 20:06:48.149790 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.13s 2025-06-22 20:06:48.149801 | orchestrator | Set 
mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.02s 2025-06-22 20:06:48.149811 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.01s 2025-06-22 20:06:48.149822 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 0.97s 2025-06-22 20:06:48.149833 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 0.91s 2025-06-22 20:06:48.149844 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.14s 2025-06-22 20:06:48.149944 | orchestrator | 2025-06-22 20:06:48 | INFO  | Task 020077a1-37de-45b7-92d8-ea80594119de is in state STARTED 2025-06-22 20:06:48.149966 | orchestrator | 2025-06-22 20:06:48 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:06:51.187690 | orchestrator | 2025-06-22 20:06:51 | INFO  | Task e5c0f822-ce6c-444c-a2bf-938e94a4477c is in state STARTED 2025-06-22 20:06:51.187774 | orchestrator | 2025-06-22 20:06:51 | INFO  | Task cf53afaf-4f09-4b31-b2a3-9e2430ebb015 is in state SUCCESS 2025-06-22 20:06:51.188731 | orchestrator | 2025-06-22 20:06:51 | INFO  | Task 4a0f0d72-b0bb-4180-a478-51f31e6edf32 is in state STARTED 2025-06-22 20:06:51.189265 | orchestrator | 2025-06-22 20:06:51 | INFO  | Task 1fe1d1a9-d01a-4de9-87ea-ee48d37d85d0 is in state STARTED 2025-06-22 20:06:51.189868 | orchestrator | 2025-06-22 20:06:51 | INFO  | Task 020077a1-37de-45b7-92d8-ea80594119de is in state STARTED 2025-06-22 20:06:51.189892 | orchestrator | 2025-06-22 20:06:51 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:06:54.208009 | orchestrator | 2025-06-22 20:06:54 | INFO  | Task e5c0f822-ce6c-444c-a2bf-938e94a4477c is in state STARTED 2025-06-22 20:06:54.208172 | orchestrator | 2025-06-22 20:06:54 | INFO  | Task 4a0f0d72-b0bb-4180-a478-51f31e6edf32 is in state STARTED 2025-06-22 20:06:54.208654 | orchestrator | 2025-06-22 20:06:54 | INFO  | Task 1fe1d1a9-d01a-4de9-87ea-ee48d37d85d0 is in state STARTED 2025-06-22 20:06:54.214227 | orchestrator | 2025-06-22 20:06:54 | INFO  | Task 020077a1-37de-45b7-92d8-ea80594119de is in state STARTED 2025-06-22 20:06:54.214291 | orchestrator | 2025-06-22 20:06:54 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:06:57.249262 | orchestrator | 2025-06-22 20:06:57 | INFO  | Task e5c0f822-ce6c-444c-a2bf-938e94a4477c is in state STARTED 2025-06-22 20:06:57.250196 | orchestrator | 2025-06-22 20:06:57 | INFO  | Task 4a0f0d72-b0bb-4180-a478-51f31e6edf32 is in state STARTED 2025-06-22 20:06:57.250227 | orchestrator | 2025-06-22 20:06:57 | INFO  | Task 1fe1d1a9-d01a-4de9-87ea-ee48d37d85d0 is in state STARTED 2025-06-22 20:06:57.250831 | orchestrator | 2025-06-22 20:06:57 | INFO  | Task 020077a1-37de-45b7-92d8-ea80594119de is in state STARTED 2025-06-22 20:06:57.250901 | orchestrator | 2025-06-22 20:06:57 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:07:00.285086 | orchestrator | 2025-06-22 20:07:00 | INFO  | Task e5c0f822-ce6c-444c-a2bf-938e94a4477c is in state STARTED 2025-06-22 20:07:00.286226 | orchestrator | 2025-06-22 20:07:00 | INFO  | Task 4a0f0d72-b0bb-4180-a478-51f31e6edf32 is in state STARTED 2025-06-22 20:07:00.288449 | orchestrator | 2025-06-22 20:07:00 | INFO  | Task 1fe1d1a9-d01a-4de9-87ea-ee48d37d85d0 is in state STARTED 2025-06-22 20:07:00.288936 | orchestrator | 2025-06-22 20:07:00 | INFO  | Task 020077a1-37de-45b7-92d8-ea80594119de is in state STARTED 2025-06-22 20:07:00.288963 | orchestrator | 2025-06-22 20:07:00 | INFO  | Wait 1 
second(s) until the next check 2025-06-22 20:07:03.322620 | orchestrator | 2025-06-22 20:07:03 | INFO  | Task e5c0f822-ce6c-444c-a2bf-938e94a4477c is in state STARTED 2025-06-22 20:07:03.322706 | orchestrator | 2025-06-22 20:07:03 | INFO  | Task 4a0f0d72-b0bb-4180-a478-51f31e6edf32 is in state STARTED 2025-06-22 20:07:03.324510 | orchestrator | 2025-06-22 20:07:03 | INFO  | Task 1fe1d1a9-d01a-4de9-87ea-ee48d37d85d0 is in state STARTED 2025-06-22 20:07:03.325068 | orchestrator | 2025-06-22 20:07:03 | INFO  | Task 020077a1-37de-45b7-92d8-ea80594119de is in state STARTED 2025-06-22 20:07:03.325167 | orchestrator | 2025-06-22 20:07:03 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:07:06.352476 | orchestrator | 2025-06-22 20:07:06 | INFO  | Task e5c0f822-ce6c-444c-a2bf-938e94a4477c is in state STARTED 2025-06-22 20:07:06.352862 | orchestrator | 2025-06-22 20:07:06 | INFO  | Task 4a0f0d72-b0bb-4180-a478-51f31e6edf32 is in state STARTED 2025-06-22 20:07:06.353545 | orchestrator | 2025-06-22 20:07:06 | INFO  | Task 1fe1d1a9-d01a-4de9-87ea-ee48d37d85d0 is in state STARTED 2025-06-22 20:07:06.354242 | orchestrator | 2025-06-22 20:07:06 | INFO  | Task 020077a1-37de-45b7-92d8-ea80594119de is in state STARTED 2025-06-22 20:07:06.354310 | orchestrator | 2025-06-22 20:07:06 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:07:09.389617 | orchestrator | 2025-06-22 20:07:09 | INFO  | Task e5c0f822-ce6c-444c-a2bf-938e94a4477c is in state STARTED 2025-06-22 20:07:09.390277 | orchestrator | 2025-06-22 20:07:09 | INFO  | Task 4a0f0d72-b0bb-4180-a478-51f31e6edf32 is in state STARTED 2025-06-22 20:07:09.393582 | orchestrator | 2025-06-22 20:07:09 | INFO  | Task 1fe1d1a9-d01a-4de9-87ea-ee48d37d85d0 is in state STARTED 2025-06-22 20:07:09.396176 | orchestrator | 2025-06-22 20:07:09 | INFO  | Task 020077a1-37de-45b7-92d8-ea80594119de is in state STARTED 2025-06-22 20:07:09.396833 | orchestrator | 2025-06-22 20:07:09 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:07:12.438425 | orchestrator | 2025-06-22 20:07:12 | INFO  | Task e5c0f822-ce6c-444c-a2bf-938e94a4477c is in state STARTED 2025-06-22 20:07:12.439436 | orchestrator | 2025-06-22 20:07:12 | INFO  | Task 4a0f0d72-b0bb-4180-a478-51f31e6edf32 is in state STARTED 2025-06-22 20:07:12.440453 | orchestrator | 2025-06-22 20:07:12 | INFO  | Task 1fe1d1a9-d01a-4de9-87ea-ee48d37d85d0 is in state STARTED 2025-06-22 20:07:12.441730 | orchestrator | 2025-06-22 20:07:12 | INFO  | Task 020077a1-37de-45b7-92d8-ea80594119de is in state STARTED 2025-06-22 20:07:12.441858 | orchestrator | 2025-06-22 20:07:12 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:07:15.483938 | orchestrator | 2025-06-22 20:07:15 | INFO  | Task e5c0f822-ce6c-444c-a2bf-938e94a4477c is in state STARTED 2025-06-22 20:07:15.485803 | orchestrator | 2025-06-22 20:07:15 | INFO  | Task 4a0f0d72-b0bb-4180-a478-51f31e6edf32 is in state STARTED 2025-06-22 20:07:15.487365 | orchestrator | 2025-06-22 20:07:15 | INFO  | Task 1fe1d1a9-d01a-4de9-87ea-ee48d37d85d0 is in state STARTED 2025-06-22 20:07:15.489213 | orchestrator | 2025-06-22 20:07:15 | INFO  | Task 020077a1-37de-45b7-92d8-ea80594119de is in state SUCCESS 2025-06-22 20:07:15.490480 | orchestrator | 2025-06-22 20:07:15.490515 | orchestrator | None 2025-06-22 20:07:15.490526 | orchestrator | 2025-06-22 20:07:15.490551 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-22 20:07:15.490562 | orchestrator | 2025-06-22 20:07:15.490572 | orchestrator | TASK 
[Group hosts based on Kolla action] *************************************** 2025-06-22 20:07:15.490583 | orchestrator | Sunday 22 June 2025 20:05:08 +0000 (0:00:00.282) 0:00:00.282 *********** 2025-06-22 20:07:15.490592 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:07:15.490603 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:07:15.490613 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:07:15.490622 | orchestrator | 2025-06-22 20:07:15.490632 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-22 20:07:15.490641 | orchestrator | Sunday 22 June 2025 20:05:08 +0000 (0:00:00.418) 0:00:00.700 *********** 2025-06-22 20:07:15.490650 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2025-06-22 20:07:15.490688 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2025-06-22 20:07:15.490697 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2025-06-22 20:07:15.490706 | orchestrator | 2025-06-22 20:07:15.490715 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2025-06-22 20:07:15.490724 | orchestrator | 2025-06-22 20:07:15.490776 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-06-22 20:07:15.490786 | orchestrator | Sunday 22 June 2025 20:05:09 +0000 (0:00:00.559) 0:00:01.260 *********** 2025-06-22 20:07:15.490817 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:07:15.490827 | orchestrator | 2025-06-22 20:07:15.490836 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2025-06-22 20:07:15.490845 | orchestrator | Sunday 22 June 2025 20:05:10 +0000 (0:00:00.735) 0:00:01.996 *********** 2025-06-22 20:07:15.490855 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2025-06-22 20:07:15.490863 | orchestrator | 2025-06-22 20:07:15.490872 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2025-06-22 20:07:15.490881 | orchestrator | Sunday 22 June 2025 20:05:13 +0000 (0:00:03.554) 0:00:05.551 *********** 2025-06-22 20:07:15.490890 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2025-06-22 20:07:15.490899 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2025-06-22 20:07:15.490908 | orchestrator | 2025-06-22 20:07:15.490916 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2025-06-22 20:07:15.490925 | orchestrator | Sunday 22 June 2025 20:05:20 +0000 (0:00:07.226) 0:00:12.778 *********** 2025-06-22 20:07:15.490934 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-22 20:07:15.490943 | orchestrator | 2025-06-22 20:07:15.490952 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2025-06-22 20:07:15.490961 | orchestrator | Sunday 22 June 2025 20:05:24 +0000 (0:00:03.432) 0:00:16.210 *********** 2025-06-22 20:07:15.490969 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-22 20:07:15.490978 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2025-06-22 20:07:15.490987 | orchestrator | 2025-06-22 20:07:15.490996 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2025-06-22 
20:07:15.491005 | orchestrator | Sunday 22 June 2025 20:05:28 +0000 (0:00:03.999) 0:00:20.210 *********** 2025-06-22 20:07:15.491014 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-22 20:07:15.491022 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2025-06-22 20:07:15.491031 | orchestrator | changed: [testbed-node-0] => (item=creator) 2025-06-22 20:07:15.491063 | orchestrator | changed: [testbed-node-0] => (item=observer) 2025-06-22 20:07:15.491072 | orchestrator | changed: [testbed-node-0] => (item=audit) 2025-06-22 20:07:15.491081 | orchestrator | 2025-06-22 20:07:15.491090 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2025-06-22 20:07:15.491100 | orchestrator | Sunday 22 June 2025 20:05:45 +0000 (0:00:16.866) 0:00:37.076 *********** 2025-06-22 20:07:15.491110 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2025-06-22 20:07:15.491120 | orchestrator | 2025-06-22 20:07:15.491130 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2025-06-22 20:07:15.491140 | orchestrator | Sunday 22 June 2025 20:05:49 +0000 (0:00:04.408) 0:00:41.484 *********** 2025-06-22 20:07:15.491154 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-22 20:07:15.491192 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-22 20:07:15.491203 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-22 20:07:15.491214 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-22 20:07:15.491225 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-22 20:07:15.491234 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-22 20:07:15.491255 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:07:15.491272 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:07:15.491281 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:07:15.491290 | orchestrator | 2025-06-22 20:07:15.491299 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2025-06-22 20:07:15.491308 | orchestrator | Sunday 22 June 2025 20:05:51 +0000 (0:00:01.969) 0:00:43.453 *********** 2025-06-22 20:07:15.491317 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2025-06-22 20:07:15.491326 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2025-06-22 20:07:15.491335 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2025-06-22 20:07:15.491343 | orchestrator | 2025-06-22 20:07:15.491352 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2025-06-22 20:07:15.491361 | orchestrator | Sunday 22 June 2025 20:05:52 +0000 (0:00:01.261) 0:00:44.715 *********** 2025-06-22 20:07:15.491370 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:07:15.491379 | orchestrator | 2025-06-22 20:07:15.491387 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2025-06-22 20:07:15.491396 | orchestrator | Sunday 22 June 2025 20:05:52 +0000 (0:00:00.128) 0:00:44.843 *********** 2025-06-22 20:07:15.491405 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:07:15.491414 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:07:15.491422 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:07:15.491431 | orchestrator | 2025-06-22 20:07:15.491440 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-06-22 20:07:15.491448 | orchestrator | Sunday 22 June 2025 20:05:53 +0000 (0:00:00.504) 0:00:45.348 *********** 2025-06-22 20:07:15.491457 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:07:15.491466 | orchestrator | 2025-06-22 20:07:15.491475 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2025-06-22 20:07:15.491483 | orchestrator | Sunday 22 June 2025 20:05:54 +0000 (0:00:00.522) 0:00:45.870 *********** 2025-06-22 20:07:15.491493 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-22 20:07:15.491518 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-22 20:07:15.491528 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-22 20:07:15.491537 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-22 20:07:15.491547 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 
'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-22 20:07:15.491556 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-22 20:07:15.491576 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:07:15.491589 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:07:15.491599 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:07:15.491608 | orchestrator | 2025-06-22 20:07:15.491617 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2025-06-22 20:07:15.491626 | orchestrator | Sunday 22 June 2025 20:05:57 +0000 (0:00:03.929) 0:00:49.801 *********** 2025-06-22 20:07:15.491635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 
'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-22 20:07:15.491644 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-22 20:07:15.491660 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-22 20:07:15.491669 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:07:15.491688 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-22 20:07:15.491698 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-22 20:07:15.491707 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-22 20:07:15.491716 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:07:15.491726 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-22 20:07:15.491740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-22 20:07:15.491749 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-22 20:07:15.491758 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:07:15.491767 | orchestrator | 2025-06-22 20:07:15.491789 | orchestrator | 
TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2025-06-22 20:07:15.491799 | orchestrator | Sunday 22 June 2025 20:05:59 +0000 (0:00:01.690) 0:00:51.492 *********** 2025-06-22 20:07:15.491808 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-22 20:07:15.491817 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-22 20:07:15.491826 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-22 20:07:15.491844 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:07:15.491853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 
'tls_backend': 'no'}}}})  2025-06-22 20:07:15.491863 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-22 20:07:15.491881 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-22 20:07:15.491891 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:07:15.491900 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-22 20:07:15.491910 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-22 20:07:15.491924 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-22 20:07:15.491933 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:07:15.491941 | orchestrator | 2025-06-22 20:07:15.491950 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2025-06-22 20:07:15.491959 | orchestrator | Sunday 22 June 2025 20:06:01 +0000 (0:00:01.564) 0:00:53.056 *********** 2025-06-22 20:07:15.491968 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-22 20:07:15.492224 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-22 20:07:15.492241 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-22 20:07:15.492251 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-22 20:07:15.492267 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-22 20:07:15.492277 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-22 20:07:15.492296 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:07:15.492306 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:07:15.492315 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:07:15.492325 | orchestrator | 2025-06-22 20:07:15.492334 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2025-06-22 20:07:15.492343 | orchestrator | Sunday 22 June 2025 20:06:05 +0000 (0:00:04.230) 0:00:57.287 *********** 2025-06-22 20:07:15.492356 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:07:15.492365 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:07:15.492374 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:07:15.492383 | orchestrator | 2025-06-22 20:07:15.492392 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2025-06-22 20:07:15.492400 | orchestrator | Sunday 22 June 2025 20:06:07 +0000 (0:00:02.384) 0:00:59.671 *********** 2025-06-22 20:07:15.492409 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-22 20:07:15.492418 | orchestrator | 2025-06-22 20:07:15.492427 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2025-06-22 20:07:15.492435 | orchestrator | Sunday 22 June 2025 20:06:09 +0000 (0:00:01.992) 0:01:01.664 *********** 2025-06-22 20:07:15.492444 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:07:15.492453 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:07:15.492462 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:07:15.492471 | orchestrator | 2025-06-22 20:07:15.492482 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2025-06-22 20:07:15.492493 | orchestrator | Sunday 22 June 2025 20:06:10 +0000 (0:00:00.834) 0:01:02.499 *********** 2025-06-22 20:07:15.492504 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-22 20:07:15.492526 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-22 20:07:15.492539 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-22 20:07:15.492558 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-22 20:07:15.492569 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-22 20:07:15.492581 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': 
'30'}}}) 2025-06-22 20:07:15.492592 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:07:15.492614 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:07:15.492627 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:07:15.492644 | orchestrator | 2025-06-22 20:07:15.492655 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2025-06-22 20:07:15.492667 | orchestrator | Sunday 22 June 2025 20:06:20 +0000 (0:00:09.912) 0:01:12.411 *********** 2025-06-22 20:07:15.492678 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-22 20:07:15.492690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-22 20:07:15.492702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-22 20:07:15.492713 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-22 20:07:15.492734 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:07:15.492747 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-22 20:07:15.492765 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-22 20:07:15.492776 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:07:15.492788 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-06-22 20:07:15.492800 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-06-22 20:07:15.492811 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-06-22 20:07:15.492822 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:07:15.492835 | orchestrator | 2025-06-22 20:07:15.492848 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2025-06-22 20:07:15.492860 | orchestrator | Sunday 22 June 2025 20:06:22 +0000 (0:00:01.581) 0:01:13.992 *********** 2025-06-22 20:07:15.492885 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 
'no'}}}}) 2025-06-22 20:07:15.492910 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-22 20:07:15.492924 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-06-22 20:07:15.492937 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-22 20:07:15.492950 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-22 20:07:15.492974 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 
'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-06-22 20:07:15.492994 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:07:15.493007 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:07:15.493020 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:07:15.493072 | orchestrator | 2025-06-22 20:07:15.493088 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-06-22 20:07:15.493100 | orchestrator | Sunday 22 June 2025 20:06:25 +0000 (0:00:03.502) 0:01:17.495 *********** 2025-06-22 20:07:15.493113 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:07:15.493124 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:07:15.493135 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:07:15.493146 | orchestrator | 2025-06-22 20:07:15.493157 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2025-06-22 20:07:15.493168 | orchestrator | Sunday 22 June 2025 20:06:25 +0000 (0:00:00.355) 0:01:17.850 *********** 2025-06-22 20:07:15.493179 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:07:15.493190 | orchestrator | 2025-06-22 20:07:15.493201 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2025-06-22 20:07:15.493212 | orchestrator | Sunday 22 June 2025 20:06:28 +0000 (0:00:02.136) 0:01:19.986 *********** 
2025-06-22 20:07:15.493223 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:07:15.493234 | orchestrator | 2025-06-22 20:07:15.493245 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2025-06-22 20:07:15.493256 | orchestrator | Sunday 22 June 2025 20:06:30 +0000 (0:00:02.259) 0:01:22.246 *********** 2025-06-22 20:07:15.493267 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:07:15.493277 | orchestrator | 2025-06-22 20:07:15.493288 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-06-22 20:07:15.493299 | orchestrator | Sunday 22 June 2025 20:06:43 +0000 (0:00:12.678) 0:01:34.925 *********** 2025-06-22 20:07:15.493310 | orchestrator | 2025-06-22 20:07:15.493321 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-06-22 20:07:15.493332 | orchestrator | Sunday 22 June 2025 20:06:43 +0000 (0:00:00.062) 0:01:34.988 *********** 2025-06-22 20:07:15.493350 | orchestrator | 2025-06-22 20:07:15.493361 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-06-22 20:07:15.493372 | orchestrator | Sunday 22 June 2025 20:06:43 +0000 (0:00:00.058) 0:01:35.046 *********** 2025-06-22 20:07:15.493383 | orchestrator | 2025-06-22 20:07:15.493394 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2025-06-22 20:07:15.493404 | orchestrator | Sunday 22 June 2025 20:06:43 +0000 (0:00:00.097) 0:01:35.144 *********** 2025-06-22 20:07:15.493415 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:07:15.493426 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:07:15.493437 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:07:15.493448 | orchestrator | 2025-06-22 20:07:15.493459 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2025-06-22 20:07:15.493470 | orchestrator | Sunday 22 June 2025 20:06:53 +0000 (0:00:09.969) 0:01:45.113 *********** 2025-06-22 20:07:15.493481 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:07:15.493492 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:07:15.493509 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:07:15.493520 | orchestrator | 2025-06-22 20:07:15.493536 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2025-06-22 20:07:15.493547 | orchestrator | Sunday 22 June 2025 20:07:02 +0000 (0:00:09.722) 0:01:54.835 *********** 2025-06-22 20:07:15.493558 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:07:15.493569 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:07:15.493580 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:07:15.493590 | orchestrator | 2025-06-22 20:07:15.493601 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 20:07:15.493613 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-06-22 20:07:15.493625 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-22 20:07:15.493636 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-22 20:07:15.493647 | orchestrator | 2025-06-22 20:07:15.493658 | orchestrator | 2025-06-22 20:07:15.493669 | orchestrator | TASKS RECAP 
******************************************************************** 2025-06-22 20:07:15.493680 | orchestrator | Sunday 22 June 2025 20:07:14 +0000 (0:00:11.594) 0:02:06.430 *********** 2025-06-22 20:07:15.493691 | orchestrator | =============================================================================== 2025-06-22 20:07:15.493702 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 16.87s 2025-06-22 20:07:15.493713 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 12.68s 2025-06-22 20:07:15.493724 | orchestrator | barbican : Restart barbican-worker container --------------------------- 11.59s 2025-06-22 20:07:15.493734 | orchestrator | barbican : Restart barbican-api container ------------------------------- 9.97s 2025-06-22 20:07:15.493745 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 9.91s 2025-06-22 20:07:15.493756 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 9.72s 2025-06-22 20:07:15.493767 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 7.23s 2025-06-22 20:07:15.493778 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.41s 2025-06-22 20:07:15.493788 | orchestrator | barbican : Copying over config.json files for services ------------------ 4.23s 2025-06-22 20:07:15.493799 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.00s 2025-06-22 20:07:15.493810 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 3.93s 2025-06-22 20:07:15.493821 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.56s 2025-06-22 20:07:15.493832 | orchestrator | barbican : Check barbican containers ------------------------------------ 3.50s 2025-06-22 20:07:15.493849 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.43s 2025-06-22 20:07:15.493860 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.38s 2025-06-22 20:07:15.493871 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.26s 2025-06-22 20:07:15.493882 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.14s 2025-06-22 20:07:15.493893 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 1.99s 2025-06-22 20:07:15.493903 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 1.97s 2025-06-22 20:07:15.493915 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS certificate --- 1.69s 2025-06-22 20:07:15.493960 | orchestrator | 2025-06-22 20:07:15 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:07:18.524110 | orchestrator | 2025-06-22 20:07:18 | INFO  | Task e5c0f822-ce6c-444c-a2bf-938e94a4477c is in state STARTED 2025-06-22 20:07:18.524333 | orchestrator | 2025-06-22 20:07:18 | INFO  | Task 4a0f0d72-b0bb-4180-a478-51f31e6edf32 is in state STARTED 2025-06-22 20:07:18.525347 | orchestrator | 2025-06-22 20:07:18 | INFO  | Task 30f90179-d050-4e0b-8490-889c04ba6800 is in state STARTED 2025-06-22 20:07:18.527281 | orchestrator | 2025-06-22 20:07:18 | INFO  | Task 1fe1d1a9-d01a-4de9-87ea-ee48d37d85d0 is in state STARTED 2025-06-22 20:07:18.527312 | orchestrator | 2025-06-22 20:07:18 | INFO  | Wait 1 second(s) until the 
next check 2025-06-22 20:07:21.557579 | orchestrator | 2025-06-22 20:07:21 | INFO  | Task e5c0f822-ce6c-444c-a2bf-938e94a4477c is in state STARTED 2025-06-22 20:07:21.558249 | orchestrator | 2025-06-22 20:07:21 | INFO  | Task 4a0f0d72-b0bb-4180-a478-51f31e6edf32 is in state STARTED 2025-06-22 20:07:21.559096 | orchestrator | 2025-06-22 20:07:21 | INFO  | Task 30f90179-d050-4e0b-8490-889c04ba6800 is in state STARTED 2025-06-22 20:07:21.559861 | orchestrator | 2025-06-22 20:07:21 | INFO  | Task 1fe1d1a9-d01a-4de9-87ea-ee48d37d85d0 is in state STARTED 2025-06-22 20:07:21.560102 | orchestrator | 2025-06-22 20:07:21 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:07:24.592558 | orchestrator | 2025-06-22 20:07:24 | INFO  | Task e5c0f822-ce6c-444c-a2bf-938e94a4477c is in state STARTED 2025-06-22 20:07:24.592634 | orchestrator | 2025-06-22 20:07:24 | INFO  | Task 4a0f0d72-b0bb-4180-a478-51f31e6edf32 is in state STARTED 2025-06-22 20:07:24.592644 | orchestrator | 2025-06-22 20:07:24 | INFO  | Task 30f90179-d050-4e0b-8490-889c04ba6800 is in state STARTED 2025-06-22 20:07:24.592651 | orchestrator | 2025-06-22 20:07:24 | INFO  | Task 1fe1d1a9-d01a-4de9-87ea-ee48d37d85d0 is in state STARTED 2025-06-22 20:07:24.592657 | orchestrator | 2025-06-22 20:07:24 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:07:27.632288 | orchestrator | 2025-06-22 20:07:27 | INFO  | Task e5c0f822-ce6c-444c-a2bf-938e94a4477c is in state STARTED 2025-06-22 20:07:27.633488 | orchestrator | 2025-06-22 20:07:27 | INFO  | Task 4a0f0d72-b0bb-4180-a478-51f31e6edf32 is in state STARTED 2025-06-22 20:07:27.634870 | orchestrator | 2025-06-22 20:07:27 | INFO  | Task 30f90179-d050-4e0b-8490-889c04ba6800 is in state STARTED 2025-06-22 20:07:27.637019 | orchestrator | 2025-06-22 20:07:27 | INFO  | Task 1fe1d1a9-d01a-4de9-87ea-ee48d37d85d0 is in state STARTED 2025-06-22 20:07:27.637070 | orchestrator | 2025-06-22 20:07:27 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:07:30.675281 | orchestrator | 2025-06-22 20:07:30 | INFO  | Task e5c0f822-ce6c-444c-a2bf-938e94a4477c is in state STARTED 2025-06-22 20:07:30.675368 | orchestrator | 2025-06-22 20:07:30 | INFO  | Task 4a0f0d72-b0bb-4180-a478-51f31e6edf32 is in state STARTED 2025-06-22 20:07:30.676083 | orchestrator | 2025-06-22 20:07:30 | INFO  | Task 30f90179-d050-4e0b-8490-889c04ba6800 is in state STARTED 2025-06-22 20:07:30.678786 | orchestrator | 2025-06-22 20:07:30 | INFO  | Task 1fe1d1a9-d01a-4de9-87ea-ee48d37d85d0 is in state STARTED 2025-06-22 20:07:30.678810 | orchestrator | 2025-06-22 20:07:30 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:07:33.711250 | orchestrator | 2025-06-22 20:07:33 | INFO  | Task e5c0f822-ce6c-444c-a2bf-938e94a4477c is in state STARTED 2025-06-22 20:07:33.711548 | orchestrator | 2025-06-22 20:07:33 | INFO  | Task 4a0f0d72-b0bb-4180-a478-51f31e6edf32 is in state STARTED 2025-06-22 20:07:33.712165 | orchestrator | 2025-06-22 20:07:33 | INFO  | Task 30f90179-d050-4e0b-8490-889c04ba6800 is in state STARTED 2025-06-22 20:07:33.712936 | orchestrator | 2025-06-22 20:07:33 | INFO  | Task 1fe1d1a9-d01a-4de9-87ea-ee48d37d85d0 is in state STARTED 2025-06-22 20:07:33.712958 | orchestrator | 2025-06-22 20:07:33 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:07:36.757077 | orchestrator | 2025-06-22 20:07:36 | INFO  | Task e5c0f822-ce6c-444c-a2bf-938e94a4477c is in state STARTED 2025-06-22 20:07:36.759956 | orchestrator | 2025-06-22 20:07:36 | INFO  | Task 4a0f0d72-b0bb-4180-a478-51f31e6edf32 is in state 
STARTED 2025-06-22 20:07:36.762266 | orchestrator | 2025-06-22 20:07:36 | INFO  | Task 30f90179-d050-4e0b-8490-889c04ba6800 is in state STARTED 2025-06-22 20:07:36.763900 | orchestrator | 2025-06-22 20:07:36 | INFO  | Task 1fe1d1a9-d01a-4de9-87ea-ee48d37d85d0 is in state STARTED 2025-06-22 20:07:36.763924 | orchestrator | 2025-06-22 20:07:36 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:07:39.811866 | orchestrator | 2025-06-22 20:07:39 | INFO  | Task e5c0f822-ce6c-444c-a2bf-938e94a4477c is in state STARTED 2025-06-22 20:07:39.813507 | orchestrator | 2025-06-22 20:07:39 | INFO  | Task 4a0f0d72-b0bb-4180-a478-51f31e6edf32 is in state STARTED 2025-06-22 20:07:39.816613 | orchestrator | 2025-06-22 20:07:39 | INFO  | Task 30f90179-d050-4e0b-8490-889c04ba6800 is in state STARTED 2025-06-22 20:07:39.818236 | orchestrator | 2025-06-22 20:07:39 | INFO  | Task 1fe1d1a9-d01a-4de9-87ea-ee48d37d85d0 is in state STARTED 2025-06-22 20:07:39.818260 | orchestrator | 2025-06-22 20:07:39 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:07:42.867369 | orchestrator | 2025-06-22 20:07:42 | INFO  | Task e5c0f822-ce6c-444c-a2bf-938e94a4477c is in state STARTED 2025-06-22 20:07:42.870979 | orchestrator | 2025-06-22 20:07:42 | INFO  | Task 4a0f0d72-b0bb-4180-a478-51f31e6edf32 is in state STARTED 2025-06-22 20:07:42.872847 | orchestrator | 2025-06-22 20:07:42 | INFO  | Task 30f90179-d050-4e0b-8490-889c04ba6800 is in state STARTED 2025-06-22 20:07:42.874468 | orchestrator | 2025-06-22 20:07:42 | INFO  | Task 1fe1d1a9-d01a-4de9-87ea-ee48d37d85d0 is in state STARTED 2025-06-22 20:07:42.874499 | orchestrator | 2025-06-22 20:07:42 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:07:45.906194 | orchestrator | 2025-06-22 20:07:45 | INFO  | Task e5c0f822-ce6c-444c-a2bf-938e94a4477c is in state STARTED 2025-06-22 20:07:45.906311 | orchestrator | 2025-06-22 20:07:45 | INFO  | Task 4a0f0d72-b0bb-4180-a478-51f31e6edf32 is in state STARTED 2025-06-22 20:07:45.907818 | orchestrator | 2025-06-22 20:07:45 | INFO  | Task 30f90179-d050-4e0b-8490-889c04ba6800 is in state STARTED 2025-06-22 20:07:45.908428 | orchestrator | 2025-06-22 20:07:45 | INFO  | Task 1fe1d1a9-d01a-4de9-87ea-ee48d37d85d0 is in state STARTED 2025-06-22 20:07:45.908444 | orchestrator | 2025-06-22 20:07:45 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:07:48.953733 | orchestrator | 2025-06-22 20:07:48 | INFO  | Task e5c0f822-ce6c-444c-a2bf-938e94a4477c is in state STARTED 2025-06-22 20:07:48.956270 | orchestrator | 2025-06-22 20:07:48 | INFO  | Task 4a0f0d72-b0bb-4180-a478-51f31e6edf32 is in state STARTED 2025-06-22 20:07:48.956302 | orchestrator | 2025-06-22 20:07:48 | INFO  | Task 30f90179-d050-4e0b-8490-889c04ba6800 is in state STARTED 2025-06-22 20:07:48.956314 | orchestrator | 2025-06-22 20:07:48 | INFO  | Task 1fe1d1a9-d01a-4de9-87ea-ee48d37d85d0 is in state STARTED 2025-06-22 20:07:48.956326 | orchestrator | 2025-06-22 20:07:48 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:07:51.987214 | orchestrator | 2025-06-22 20:07:51 | INFO  | Task e5c0f822-ce6c-444c-a2bf-938e94a4477c is in state STARTED 2025-06-22 20:07:51.989236 | orchestrator | 2025-06-22 20:07:51 | INFO  | Task 4a0f0d72-b0bb-4180-a478-51f31e6edf32 is in state STARTED 2025-06-22 20:07:51.991124 | orchestrator | 2025-06-22 20:07:51 | INFO  | Task 30f90179-d050-4e0b-8490-889c04ba6800 is in state STARTED 2025-06-22 20:07:51.992599 | orchestrator | 2025-06-22 20:07:51 | INFO  | Task 1fe1d1a9-d01a-4de9-87ea-ee48d37d85d0 is in state 
STARTED 2025-06-22 20:07:51.992704 | orchestrator | 2025-06-22 20:07:51 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:07:55.033508 | orchestrator | 2025-06-22 20:07:55 | INFO  | Task e5c0f822-ce6c-444c-a2bf-938e94a4477c is in state STARTED 2025-06-22 20:07:55.033856 | orchestrator | 2025-06-22 20:07:55 | INFO  | Task 4a0f0d72-b0bb-4180-a478-51f31e6edf32 is in state STARTED 2025-06-22 20:07:55.034837 | orchestrator | 2025-06-22 20:07:55 | INFO  | Task 30f90179-d050-4e0b-8490-889c04ba6800 is in state STARTED 2025-06-22 20:07:55.035421 | orchestrator | 2025-06-22 20:07:55 | INFO  | Task 1fe1d1a9-d01a-4de9-87ea-ee48d37d85d0 is in state STARTED 2025-06-22 20:07:55.035528 | orchestrator | 2025-06-22 20:07:55 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:07:58.089693 | orchestrator | 2025-06-22 20:07:58 | INFO  | Task e5c0f822-ce6c-444c-a2bf-938e94a4477c is in state STARTED 2025-06-22 20:07:58.091300 | orchestrator | 2025-06-22 20:07:58 | INFO  | Task 4a0f0d72-b0bb-4180-a478-51f31e6edf32 is in state STARTED 2025-06-22 20:07:58.093815 | orchestrator | 2025-06-22 20:07:58 | INFO  | Task 30f90179-d050-4e0b-8490-889c04ba6800 is in state STARTED 2025-06-22 20:07:58.095621 | orchestrator | 2025-06-22 20:07:58 | INFO  | Task 1fe1d1a9-d01a-4de9-87ea-ee48d37d85d0 is in state STARTED 2025-06-22 20:07:58.095648 | orchestrator | 2025-06-22 20:07:58 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:08:01.135675 | orchestrator | 2025-06-22 20:08:01 | INFO  | Task e5c0f822-ce6c-444c-a2bf-938e94a4477c is in state STARTED 2025-06-22 20:08:01.135868 | orchestrator | 2025-06-22 20:08:01 | INFO  | Task 4a0f0d72-b0bb-4180-a478-51f31e6edf32 is in state STARTED 2025-06-22 20:08:01.139016 | orchestrator | 2025-06-22 20:08:01 | INFO  | Task 30f90179-d050-4e0b-8490-889c04ba6800 is in state STARTED 2025-06-22 20:08:01.140967 | orchestrator | 2025-06-22 20:08:01 | INFO  | Task 1fe1d1a9-d01a-4de9-87ea-ee48d37d85d0 is in state STARTED 2025-06-22 20:08:01.141357 | orchestrator | 2025-06-22 20:08:01 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:08:04.184699 | orchestrator | 2025-06-22 20:08:04 | INFO  | Task e5c0f822-ce6c-444c-a2bf-938e94a4477c is in state STARTED 2025-06-22 20:08:04.186381 | orchestrator | 2025-06-22 20:08:04 | INFO  | Task 4a0f0d72-b0bb-4180-a478-51f31e6edf32 is in state STARTED 2025-06-22 20:08:04.187736 | orchestrator | 2025-06-22 20:08:04 | INFO  | Task 30f90179-d050-4e0b-8490-889c04ba6800 is in state STARTED 2025-06-22 20:08:04.189186 | orchestrator | 2025-06-22 20:08:04 | INFO  | Task 1fe1d1a9-d01a-4de9-87ea-ee48d37d85d0 is in state STARTED 2025-06-22 20:08:04.189242 | orchestrator | 2025-06-22 20:08:04 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:08:07.233910 | orchestrator | 2025-06-22 20:08:07 | INFO  | Task e5c0f822-ce6c-444c-a2bf-938e94a4477c is in state STARTED 2025-06-22 20:08:07.235610 | orchestrator | 2025-06-22 20:08:07 | INFO  | Task 4a0f0d72-b0bb-4180-a478-51f31e6edf32 is in state STARTED 2025-06-22 20:08:07.238217 | orchestrator | 2025-06-22 20:08:07 | INFO  | Task 30f90179-d050-4e0b-8490-889c04ba6800 is in state STARTED 2025-06-22 20:08:07.239595 | orchestrator | 2025-06-22 20:08:07 | INFO  | Task 1fe1d1a9-d01a-4de9-87ea-ee48d37d85d0 is in state STARTED 2025-06-22 20:08:07.239621 | orchestrator | 2025-06-22 20:08:07 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:08:10.291444 | orchestrator | 2025-06-22 20:08:10 | INFO  | Task e5c0f822-ce6c-444c-a2bf-938e94a4477c is in state SUCCESS 2025-06-22 
20:08:10.293796 | orchestrator | 2025-06-22 20:08:10.293900 | orchestrator | 2025-06-22 20:08:10.293920 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-22 20:08:10.293934 | orchestrator | 2025-06-22 20:08:10.293945 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-22 20:08:10.293957 | orchestrator | Sunday 22 June 2025 20:05:08 +0000 (0:00:00.532) 0:00:00.532 *********** 2025-06-22 20:08:10.293969 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:08:10.293980 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:08:10.293991 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:08:10.294002 | orchestrator | 2025-06-22 20:08:10.294013 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-22 20:08:10.294283 | orchestrator | Sunday 22 June 2025 20:05:08 +0000 (0:00:00.459) 0:00:00.991 *********** 2025-06-22 20:08:10.294296 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2025-06-22 20:08:10.294308 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2025-06-22 20:08:10.294319 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2025-06-22 20:08:10.294330 | orchestrator | 2025-06-22 20:08:10.294341 | orchestrator | PLAY [Apply role designate] **************************************************** 2025-06-22 20:08:10.294352 | orchestrator | 2025-06-22 20:08:10.294363 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-06-22 20:08:10.294375 | orchestrator | Sunday 22 June 2025 20:05:09 +0000 (0:00:00.588) 0:00:01.580 *********** 2025-06-22 20:08:10.294386 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:08:10.294397 | orchestrator | 2025-06-22 20:08:10.294422 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2025-06-22 20:08:10.294434 | orchestrator | Sunday 22 June 2025 20:05:09 +0000 (0:00:00.661) 0:00:02.241 *********** 2025-06-22 20:08:10.294445 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2025-06-22 20:08:10.294520 | orchestrator | 2025-06-22 20:08:10.294533 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2025-06-22 20:08:10.294545 | orchestrator | Sunday 22 June 2025 20:05:13 +0000 (0:00:03.882) 0:00:06.124 *********** 2025-06-22 20:08:10.294555 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2025-06-22 20:08:10.294567 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2025-06-22 20:08:10.294578 | orchestrator | 2025-06-22 20:08:10.294589 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2025-06-22 20:08:10.294601 | orchestrator | Sunday 22 June 2025 20:05:20 +0000 (0:00:07.001) 0:00:13.126 *********** 2025-06-22 20:08:10.294612 | orchestrator | changed: [testbed-node-0] => (item=service) 2025-06-22 20:08:10.294623 | orchestrator | 2025-06-22 20:08:10.294634 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2025-06-22 20:08:10.294646 | orchestrator | Sunday 22 June 2025 20:05:24 +0000 (0:00:03.424) 0:00:16.550 *********** 2025-06-22 20:08:10.294679 | orchestrator | [WARNING]: Module did not set no_log 
for update_password 2025-06-22 20:08:10.294691 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2025-06-22 20:08:10.294702 | orchestrator | 2025-06-22 20:08:10.294714 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2025-06-22 20:08:10.294724 | orchestrator | Sunday 22 June 2025 20:05:28 +0000 (0:00:03.917) 0:00:20.467 *********** 2025-06-22 20:08:10.294736 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-22 20:08:10.294747 | orchestrator | 2025-06-22 20:08:10.294758 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2025-06-22 20:08:10.294769 | orchestrator | Sunday 22 June 2025 20:05:31 +0000 (0:00:03.316) 0:00:23.784 *********** 2025-06-22 20:08:10.294780 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2025-06-22 20:08:10.294791 | orchestrator | 2025-06-22 20:08:10.294802 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2025-06-22 20:08:10.294813 | orchestrator | Sunday 22 June 2025 20:05:35 +0000 (0:00:03.918) 0:00:27.702 *********** 2025-06-22 20:08:10.294827 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-22 20:08:10.294872 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-22 20:08:10.294885 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-22 20:08:10.294898 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-22 20:08:10.294918 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-22 20:08:10.294930 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-22 20:08:10.294946 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:10.294969 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:10.294982 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:10.294994 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:10.295011 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:10.295095 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:10.295108 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:10.295133 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 
'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:10.295145 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:10.295157 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:10.295180 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:10.295192 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:10.295203 | orchestrator | 2025-06-22 20:08:10.295215 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2025-06-22 20:08:10.295226 | orchestrator | Sunday 22 June 2025 20:05:38 +0000 (0:00:02.899) 0:00:30.602 *********** 2025-06-22 20:08:10.295237 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:08:10.295248 | orchestrator | 2025-06-22 20:08:10.295259 | orchestrator | TASK [designate : Set designate policy file] 
*********************************** 2025-06-22 20:08:10.295271 | orchestrator | Sunday 22 June 2025 20:05:38 +0000 (0:00:00.143) 0:00:30.746 *********** 2025-06-22 20:08:10.295392 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:08:10.295406 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:08:10.295417 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:08:10.295428 | orchestrator | 2025-06-22 20:08:10.295463 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-06-22 20:08:10.295475 | orchestrator | Sunday 22 June 2025 20:05:38 +0000 (0:00:00.266) 0:00:31.012 *********** 2025-06-22 20:08:10.295486 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:08:10.295498 | orchestrator | 2025-06-22 20:08:10.295509 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2025-06-22 20:08:10.295520 | orchestrator | Sunday 22 June 2025 20:05:39 +0000 (0:00:00.564) 0:00:31.576 *********** 2025-06-22 20:08:10.295544 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-22 20:08:10.295557 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-22 20:08:10.295576 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': 
{'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-22 20:08:10.295588 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-22 20:08:10.295600 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-22 20:08:10.295616 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-22 20:08:10.295634 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:10.295652 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:10.295663 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:10.295675 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:10.295686 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:10.295698 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:10.295714 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:10.295732 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:10.295750 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:10.295761 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:10.295773 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:10.295784 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:10.295795 | orchestrator | 2025-06-22 20:08:10.295807 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2025-06-22 20:08:10.295818 | orchestrator | Sunday 22 June 2025 20:05:45 +0000 (0:00:06.327) 0:00:37.904 *********** 2025-06-22 20:08:10.295834 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-22 20:08:10.295864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-22 20:08:10.295877 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-22 20:08:10.295889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-22 20:08:10.295900 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-22 20:08:10.295912 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-22 20:08:10.295923 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:08:10.295939 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-22 20:08:10.295964 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-22 20:08:10.295976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-22 20:08:10.295987 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-22 20:08:10.296006 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 
'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-22 20:08:10.296027 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-22 20:08:10.296067 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:08:10.296092 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-22 20:08:10.296132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-22 20:08:10.296152 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-22 20:08:10.296171 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-22 20:08:10.296190 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-22 20:08:10.296209 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-22 20:08:10.296229 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:08:10.296249 | orchestrator | 2025-06-22 20:08:10.296267 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2025-06-22 20:08:10.296286 | orchestrator | Sunday 22 June 2025 20:05:46 +0000 (0:00:01.340) 0:00:39.245 *********** 2025-06-22 20:08:10.296304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-22 20:08:10.296332 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-22 20:08:10.296344 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-22 20:08:10.296356 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-22 20:08:10.296367 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-22 20:08:10.296379 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-22 20:08:10.296390 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:08:10.296406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-22 20:08:10.296431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-22 20:08:10.296443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-22 20:08:10.296454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-22 20:08:10.296466 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-22 20:08:10.296477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-22 20:08:10.296489 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:08:10.296510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-22 20:08:10.296529 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-22 20:08:10.296541 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-22 20:08:10.296553 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-22 20:08:10.296564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-22 20:08:10.296575 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-22 20:08:10.296593 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:08:10.296604 | orchestrator | 2025-06-22 20:08:10.296615 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2025-06-22 20:08:10.296627 | orchestrator | Sunday 22 June 2025 20:05:47 +0000 (0:00:01.044) 0:00:40.289 *********** 2025-06-22 20:08:10.296643 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-22 20:08:10.296705 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-22 20:08:10.296721 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-22 20:08:10.296733 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-22 20:08:10.296745 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-22 20:08:10.296763 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-22 20:08:10.296788 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:10.296800 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:10.296812 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:10.296824 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:10.296835 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:10.296857 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:10.296873 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:10.296893 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:10.296905 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:10.296916 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:10.296928 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:10.296948 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:10.296988 | orchestrator | 2025-06-22 20:08:10.297010 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2025-06-22 20:08:10.297028 | orchestrator | Sunday 22 June 2025 20:05:54 +0000 (0:00:06.696) 0:00:46.985 *********** 2025-06-22 20:08:10.297140 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-22 20:08:10.297998 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-22 20:08:10.298110 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-22 20:08:10.298131 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-22 20:08:10.298184 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 
'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-22 20:08:10.298198 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-22 20:08:10.298214 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:10.298234 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:10.298245 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:10.298255 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:10.298271 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:10.298282 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:10.298292 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:10.298307 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:10.298317 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:10.298328 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:10.298364 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:10.298380 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:10.298391 | orchestrator | 2025-06-22 20:08:10.298401 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2025-06-22 20:08:10.298412 | orchestrator | Sunday 22 June 2025 20:06:17 +0000 (0:00:22.555) 0:01:09.541 *********** 2025-06-22 20:08:10.298422 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-06-22 20:08:10.298432 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-06-22 20:08:10.298442 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-06-22 20:08:10.298451 | orchestrator | 2025-06-22 20:08:10.298462 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2025-06-22 20:08:10.298471 | orchestrator | Sunday 22 June 2025 20:06:22 +0000 (0:00:05.602) 0:01:15.144 *********** 2025-06-22 20:08:10.298481 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-06-22 20:08:10.298491 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-06-22 20:08:10.298500 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-06-22 20:08:10.298510 | orchestrator | 2025-06-22 20:08:10.298520 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2025-06-22 20:08:10.298530 | orchestrator | Sunday 22 June 2025 20:06:26 +0000 (0:00:04.017) 0:01:19.161 *********** 2025-06-22 20:08:10.298550 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-22 20:08:10.298562 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-22 20:08:10.298581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-22 20:08:10.298618 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-22 20:08:10.298631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-22 20:08:10.298647 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-22 20:08:10.298665 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-22 20:08:10.298677 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-22 20:08:10.298696 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-22 20:08:10.298708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-22 20:08:10.298720 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-22 20:08:10.298735 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-22 20:08:10.298753 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-22 20:08:10.298765 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-22 20:08:10.298782 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-22 20:08:10.298794 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:10.298806 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:10.298818 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:10.298829 | orchestrator | 2025-06-22 20:08:10.298840 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2025-06-22 20:08:10.298852 | orchestrator | Sunday 22 June 2025 20:06:29 +0000 (0:00:02.784) 0:01:21.945 *********** 2025-06-22 20:08:10.298873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-22 20:08:10.298885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': 
{'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-22 20:08:10.298902 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-22 20:08:10.298914 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-22 20:08:10.298926 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-22 20:08:10.298942 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-22 20:08:10.298960 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-22 20:08:10.298976 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-22 20:08:10.298986 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-22 20:08:10.298997 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-22 20:08:10.299007 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-22 20:08:10.299017 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-22 20:08:10.299036 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-22 20:08:10.299065 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-22 20:08:10.299082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-22 20:08:10.299092 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:10.299103 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:10.299113 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 
'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:10.299123 | orchestrator | 2025-06-22 20:08:10.299133 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-06-22 20:08:10.299143 | orchestrator | Sunday 22 June 2025 20:06:32 +0000 (0:00:02.976) 0:01:24.922 *********** 2025-06-22 20:08:10.299153 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:08:10.299163 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:08:10.299173 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:08:10.299184 | orchestrator | 2025-06-22 20:08:10.299201 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2025-06-22 20:08:10.299211 | orchestrator | Sunday 22 June 2025 20:06:32 +0000 (0:00:00.371) 0:01:25.294 *********** 2025-06-22 20:08:10.299232 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-22 20:08:10.299249 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-22 20:08:10.299260 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-22 20:08:10.299270 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 
'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-22 20:08:10.299281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-22 20:08:10.299291 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-22 20:08:10.299301 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:08:10.299321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-22 20:08:10.299337 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-22 20:08:10.299347 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-22 20:08:10.299358 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-22 20:08:10.299368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-06-22 20:08:10.299379 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-22 20:08:10.299408 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-06-22 20:08:10.299458 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-22 20:08:10.299468 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:08:10.299479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-06-22 20:08:10.299489 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-06-22 20:08:10.299499 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-06-22 20:08:10.299509 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-06-22 20:08:10.299526 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:08:10.299536 | orchestrator | 2025-06-22 20:08:10.299545 | orchestrator | TASK [designate : Check designate containers] ********************************** 2025-06-22 20:08:10.299556 | orchestrator | Sunday 22 June 2025 20:06:34 +0000 (0:00:01.215) 0:01:26.509 
*********** 2025-06-22 20:08:10.299576 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-22 20:08:10.299587 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-22 20:08:10.299597 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-06-22 20:08:10.299607 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-22 20:08:10.299618 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-22 20:08:10.299643 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-06-22 20:08:10.299660 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:10.299671 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:10.299681 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:10.299692 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:10.299702 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:10.299718 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:10.299737 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:10.299748 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:10.299758 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 
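Note: the 'healthcheck' blocks in the designate container definitions above become container healthchecks; 'healthcheck_port <service> 5672' and 'healthcheck_curl <url>' are kolla helper scripts baked into the images, and their exact implementation is not shown in this log. Purely as a hypothetical illustration (not the actual kolla scripts), an HTTP probe of the kind used for designate-api ('healthcheck_curl http://192.168.16.10:9001') could be sketched like this:

# Illustrative only: a minimal HTTP health probe similar in spirit to the
# 'healthcheck_curl' test shown in the loop items above. The real kolla
# healthcheck scripts inside the images are not reproduced here.
import sys
import urllib.request

def http_probe(url: str, timeout: float = 30.0) -> bool:
    """Return True if the endpoint answers with an HTTP status below 400."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status < 400
    except Exception:
        return False

if __name__ == "__main__":
    url = sys.argv[1] if len(sys.argv) > 1 else "http://192.168.16.10:9001"
    # Exit code 0 = healthy, 1 = unhealthy, matching container healthcheck semantics.
    sys.exit(0 if http_probe(url) else 1)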
2025-06-22 20:08:10.299768 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:10.299778 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:10.299788 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-06-22 20:08:10.299806 | orchestrator | 2025-06-22 20:08:10.299816 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-06-22 20:08:10.299826 | orchestrator | Sunday 22 June 2025 20:06:39 +0000 (0:00:05.796) 0:01:32.306 *********** 2025-06-22 20:08:10.299836 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:08:10.299846 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:08:10.299856 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:08:10.299865 | orchestrator | 2025-06-22 20:08:10.299875 | orchestrator | TASK [designate : Creating Designate databases] ******************************** 2025-06-22 20:08:10.299885 | orchestrator | Sunday 22 June 2025 20:06:40 +0000 (0:00:00.450) 0:01:32.756 *********** 2025-06-22 20:08:10.299899 | orchestrator | changed: [testbed-node-0] => (item=designate) 2025-06-22 20:08:10.299909 | orchestrator | 2025-06-22 20:08:10.299919 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2025-06-22 20:08:10.299929 | orchestrator | Sunday 22 June 2025 20:06:42 +0000 (0:00:02.592) 0:01:35.349 *********** 2025-06-22 20:08:10.299939 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-22 20:08:10.299949 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2025-06-22 20:08:10.299958 | orchestrator | 2025-06-22 20:08:10.299968 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2025-06-22 20:08:10.299983 | orchestrator | Sunday 22 June 2025 20:06:45 +0000 (0:00:02.575) 0:01:37.924 
*********** 2025-06-22 20:08:10.299993 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:08:10.300003 | orchestrator | 2025-06-22 20:08:10.300013 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-06-22 20:08:10.300023 | orchestrator | Sunday 22 June 2025 20:07:02 +0000 (0:00:17.152) 0:01:55.077 *********** 2025-06-22 20:08:10.300032 | orchestrator | 2025-06-22 20:08:10.300059 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-06-22 20:08:10.300069 | orchestrator | Sunday 22 June 2025 20:07:02 +0000 (0:00:00.137) 0:01:55.215 *********** 2025-06-22 20:08:10.300079 | orchestrator | 2025-06-22 20:08:10.300089 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2025-06-22 20:08:10.300099 | orchestrator | Sunday 22 June 2025 20:07:02 +0000 (0:00:00.163) 0:01:55.379 *********** 2025-06-22 20:08:10.300109 | orchestrator | 2025-06-22 20:08:10.300119 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2025-06-22 20:08:10.300129 | orchestrator | Sunday 22 June 2025 20:07:03 +0000 (0:00:00.203) 0:01:55.582 *********** 2025-06-22 20:08:10.300146 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:08:10.300156 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:08:10.300166 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:08:10.300176 | orchestrator | 2025-06-22 20:08:10.300186 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2025-06-22 20:08:10.300196 | orchestrator | Sunday 22 June 2025 20:07:16 +0000 (0:00:13.607) 0:02:09.190 *********** 2025-06-22 20:08:10.300206 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:08:10.300215 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:08:10.300225 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:08:10.300235 | orchestrator | 2025-06-22 20:08:10.300244 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2025-06-22 20:08:10.300254 | orchestrator | Sunday 22 June 2025 20:07:23 +0000 (0:00:06.980) 0:02:16.170 *********** 2025-06-22 20:08:10.300264 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:08:10.300280 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:08:10.300290 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:08:10.300299 | orchestrator | 2025-06-22 20:08:10.300309 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2025-06-22 20:08:10.300319 | orchestrator | Sunday 22 June 2025 20:07:33 +0000 (0:00:09.892) 0:02:26.063 *********** 2025-06-22 20:08:10.300329 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:08:10.300338 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:08:10.300348 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:08:10.300357 | orchestrator | 2025-06-22 20:08:10.300367 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2025-06-22 20:08:10.300377 | orchestrator | Sunday 22 June 2025 20:07:44 +0000 (0:00:10.871) 0:02:36.934 *********** 2025-06-22 20:08:10.300387 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:08:10.300397 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:08:10.300406 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:08:10.300416 | orchestrator | 2025-06-22 20:08:10.300426 | orchestrator | RUNNING HANDLER [designate : Restart 
designate-worker container] *************** 2025-06-22 20:08:10.300436 | orchestrator | Sunday 22 June 2025 20:07:49 +0000 (0:00:04.924) 0:02:41.858 *********** 2025-06-22 20:08:10.300446 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:08:10.300455 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:08:10.300465 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:08:10.300475 | orchestrator | 2025-06-22 20:08:10.300484 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2025-06-22 20:08:10.300494 | orchestrator | Sunday 22 June 2025 20:08:00 +0000 (0:00:10.601) 0:02:52.460 *********** 2025-06-22 20:08:10.300506 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:08:10.300521 | orchestrator | 2025-06-22 20:08:10.300531 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 20:08:10.300542 | orchestrator | testbed-node-0 : ok=29  changed=24  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-06-22 20:08:10.300552 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-22 20:08:10.300562 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-22 20:08:10.300572 | orchestrator | 2025-06-22 20:08:10.300582 | orchestrator | 2025-06-22 20:08:10.300592 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 20:08:10.300602 | orchestrator | Sunday 22 June 2025 20:08:07 +0000 (0:00:07.472) 0:02:59.933 *********** 2025-06-22 20:08:10.300612 | orchestrator | =============================================================================== 2025-06-22 20:08:10.300621 | orchestrator | designate : Copying over designate.conf -------------------------------- 22.56s 2025-06-22 20:08:10.300631 | orchestrator | designate : Running Designate bootstrap container ---------------------- 17.15s 2025-06-22 20:08:10.300641 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 13.61s 2025-06-22 20:08:10.300651 | orchestrator | designate : Restart designate-producer container ----------------------- 10.87s 2025-06-22 20:08:10.300660 | orchestrator | designate : Restart designate-worker container ------------------------- 10.60s 2025-06-22 20:08:10.300674 | orchestrator | designate : Restart designate-central container ------------------------- 9.89s 2025-06-22 20:08:10.300684 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.47s 2025-06-22 20:08:10.300694 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 7.00s 2025-06-22 20:08:10.300703 | orchestrator | designate : Restart designate-api container ----------------------------- 6.98s 2025-06-22 20:08:10.300713 | orchestrator | designate : Copying over config.json files for services ----------------- 6.70s 2025-06-22 20:08:10.300844 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.33s 2025-06-22 20:08:10.300865 | orchestrator | designate : Check designate containers ---------------------------------- 5.80s 2025-06-22 20:08:10.300875 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 5.60s 2025-06-22 20:08:10.300885 | orchestrator | designate : Restart designate-mdns container ---------------------------- 4.92s 2025-06-22 20:08:10.300895 | orchestrator | designate : Copying 
over named.conf ------------------------------------- 4.02s 2025-06-22 20:08:10.300905 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 3.92s 2025-06-22 20:08:10.300914 | orchestrator | service-ks-register : designate | Creating users ------------------------ 3.92s 2025-06-22 20:08:10.300924 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.88s 2025-06-22 20:08:10.300934 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.42s 2025-06-22 20:08:10.300943 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.32s 2025-06-22 20:08:10.300953 | orchestrator | 2025-06-22 20:08:10 | INFO  | Task e32660a2-c720-4e13-8ac4-b8cb4d631c69 is in state STARTED 2025-06-22 20:08:10.300963 | orchestrator | 2025-06-22 20:08:10 | INFO  | Task 4a0f0d72-b0bb-4180-a478-51f31e6edf32 is in state STARTED 2025-06-22 20:08:10.300973 | orchestrator | 2025-06-22 20:08:10 | INFO  | Task 30f90179-d050-4e0b-8490-889c04ba6800 is in state STARTED 2025-06-22 20:08:10.300983 | orchestrator | 2025-06-22 20:08:10 | INFO  | Task 1fe1d1a9-d01a-4de9-87ea-ee48d37d85d0 is in state STARTED 2025-06-22 20:08:10.300993 | orchestrator | 2025-06-22 20:08:10 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:08:13.348462 | orchestrator | 2025-06-22 20:08:13 | INFO  | Task e32660a2-c720-4e13-8ac4-b8cb4d631c69 is in state STARTED 2025-06-22 20:08:13.350008 | orchestrator | 2025-06-22 20:08:13 | INFO  | Task 4a0f0d72-b0bb-4180-a478-51f31e6edf32 is in state STARTED 2025-06-22 20:08:13.352266 | orchestrator | 2025-06-22 20:08:13 | INFO  | Task 30f90179-d050-4e0b-8490-889c04ba6800 is in state STARTED 2025-06-22 20:08:13.354558 | orchestrator | 2025-06-22 20:08:13 | INFO  | Task 1fe1d1a9-d01a-4de9-87ea-ee48d37d85d0 is in state STARTED 2025-06-22 20:08:13.354978 | orchestrator | 2025-06-22 20:08:13 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:08:16.396678 | orchestrator | 2025-06-22 20:08:16 | INFO  | Task e32660a2-c720-4e13-8ac4-b8cb4d631c69 is in state STARTED 2025-06-22 20:08:16.398648 | orchestrator | 2025-06-22 20:08:16 | INFO  | Task 4a0f0d72-b0bb-4180-a478-51f31e6edf32 is in state STARTED 2025-06-22 20:08:16.401306 | orchestrator | 2025-06-22 20:08:16 | INFO  | Task 30f90179-d050-4e0b-8490-889c04ba6800 is in state STARTED 2025-06-22 20:08:16.403698 | orchestrator | 2025-06-22 20:08:16 | INFO  | Task 1fe1d1a9-d01a-4de9-87ea-ee48d37d85d0 is in state STARTED 2025-06-22 20:08:16.404103 | orchestrator | 2025-06-22 20:08:16 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:08:19.442592 | orchestrator | 2025-06-22 20:08:19 | INFO  | Task e32660a2-c720-4e13-8ac4-b8cb4d631c69 is in state STARTED 2025-06-22 20:08:19.444477 | orchestrator | 2025-06-22 20:08:19 | INFO  | Task 4a0f0d72-b0bb-4180-a478-51f31e6edf32 is in state STARTED 2025-06-22 20:08:19.446552 | orchestrator | 2025-06-22 20:08:19 | INFO  | Task 30f90179-d050-4e0b-8490-889c04ba6800 is in state STARTED 2025-06-22 20:08:19.448330 | orchestrator | 2025-06-22 20:08:19 | INFO  | Task 1fe1d1a9-d01a-4de9-87ea-ee48d37d85d0 is in state STARTED 2025-06-22 20:08:19.448532 | orchestrator | 2025-06-22 20:08:19 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:08:22.499308 | orchestrator | 2025-06-22 20:08:22 | INFO  | Task e32660a2-c720-4e13-8ac4-b8cb4d631c69 is in state STARTED 2025-06-22 20:08:22.499485 | orchestrator | 2025-06-22 20:08:22 | INFO  | Task 
4a0f0d72-b0bb-4180-a478-51f31e6edf32 is in state STARTED 2025-06-22 20:08:22.500378 | orchestrator | 2025-06-22 20:08:22 | INFO  | Task 30f90179-d050-4e0b-8490-889c04ba6800 is in state STARTED 2025-06-22 20:08:22.501251 | orchestrator | 2025-06-22 20:08:22 | INFO  | Task 1fe1d1a9-d01a-4de9-87ea-ee48d37d85d0 is in state STARTED 2025-06-22 20:08:22.501275 | orchestrator | 2025-06-22 20:08:22 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:08:25.545347 | orchestrator | 2025-06-22 20:08:25 | INFO  | Task e32660a2-c720-4e13-8ac4-b8cb4d631c69 is in state STARTED 2025-06-22 20:08:25.545578 | orchestrator | 2025-06-22 20:08:25 | INFO  | Task 4a0f0d72-b0bb-4180-a478-51f31e6edf32 is in state STARTED 2025-06-22 20:08:25.547287 | orchestrator | 2025-06-22 20:08:25 | INFO  | Task 30f90179-d050-4e0b-8490-889c04ba6800 is in state STARTED 2025-06-22 20:08:25.548668 | orchestrator | 2025-06-22 20:08:25 | INFO  | Task 1fe1d1a9-d01a-4de9-87ea-ee48d37d85d0 is in state STARTED 2025-06-22 20:08:25.548705 | orchestrator | 2025-06-22 20:08:25 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:08:28.588969 | orchestrator | 2025-06-22 20:08:28 | INFO  | Task e32660a2-c720-4e13-8ac4-b8cb4d631c69 is in state STARTED 2025-06-22 20:08:28.589689 | orchestrator | 2025-06-22 20:08:28 | INFO  | Task 646366b8-91b8-42df-a017-d5aada029597 is in state STARTED 2025-06-22 20:08:28.592083 | orchestrator | 2025-06-22 20:08:28 | INFO  | Task 4a0f0d72-b0bb-4180-a478-51f31e6edf32 is in state STARTED 2025-06-22 20:08:28.593210 | orchestrator | 2025-06-22 20:08:28 | INFO  | Task 30f90179-d050-4e0b-8490-889c04ba6800 is in state SUCCESS 2025-06-22 20:08:28.596116 | orchestrator | 2025-06-22 20:08:28.596163 | orchestrator | 2025-06-22 20:08:28.596176 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-22 20:08:28.596190 | orchestrator | 2025-06-22 20:08:28.596201 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-22 20:08:28.596213 | orchestrator | Sunday 22 June 2025 20:07:22 +0000 (0:00:00.233) 0:00:00.233 *********** 2025-06-22 20:08:28.596224 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:08:28.596236 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:08:28.596247 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:08:28.596258 | orchestrator | 2025-06-22 20:08:28.596269 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-22 20:08:28.596280 | orchestrator | Sunday 22 June 2025 20:07:22 +0000 (0:00:00.397) 0:00:00.630 *********** 2025-06-22 20:08:28.596293 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2025-06-22 20:08:28.596304 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2025-06-22 20:08:28.596316 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2025-06-22 20:08:28.596369 | orchestrator | 2025-06-22 20:08:28.596381 | orchestrator | PLAY [Apply role placement] **************************************************** 2025-06-22 20:08:28.596392 | orchestrator | 2025-06-22 20:08:28.596403 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-06-22 20:08:28.596415 | orchestrator | Sunday 22 June 2025 20:07:23 +0000 (0:00:00.824) 0:00:01.454 *********** 2025-06-22 20:08:28.596426 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:08:28.596466 | 
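Note: the following service-ks-register tasks register the placement service and its internal/public endpoints in Keystone. The play does this through kolla-ansible's service-ks-register role and its Ansible modules; the snippet below is only a hedged openstacksdk sketch of the equivalent calls, with the endpoint URLs taken from the log and the cloud/region names assumed.

# Illustrative only: roughly what "Creating services" / "Creating endpoints" amount to.
import openstack

conn = openstack.connect(cloud="testbed")  # cloud name is hypothetical

service = conn.identity.create_service(name="placement", type="placement")
for interface, url in (
    ("internal", "https://api-int.testbed.osism.xyz:8780"),
    ("public", "https://api.testbed.osism.xyz:8780"),
):
    conn.identity.create_endpoint(
        service_id=service.id,
        interface=interface,
        url=url,
        region_id="RegionOne",  # assumed; the region is not shown in this log
    )

# The same role then handles the service project, the 'placement' user and the
# admin role grant, as seen in the subsequent tasks.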
orchestrator | 2025-06-22 20:08:28.596480 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2025-06-22 20:08:28.596491 | orchestrator | Sunday 22 June 2025 20:07:24 +0000 (0:00:00.987) 0:00:02.441 *********** 2025-06-22 20:08:28.596502 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2025-06-22 20:08:28.596513 | orchestrator | 2025-06-22 20:08:28.596524 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2025-06-22 20:08:28.596536 | orchestrator | Sunday 22 June 2025 20:07:28 +0000 (0:00:03.962) 0:00:06.403 *********** 2025-06-22 20:08:28.596547 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2025-06-22 20:08:28.596589 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2025-06-22 20:08:28.596601 | orchestrator | 2025-06-22 20:08:28.596613 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2025-06-22 20:08:28.596625 | orchestrator | Sunday 22 June 2025 20:07:35 +0000 (0:00:06.762) 0:00:13.166 *********** 2025-06-22 20:08:28.596636 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-22 20:08:28.596647 | orchestrator | 2025-06-22 20:08:28.596660 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2025-06-22 20:08:28.596673 | orchestrator | Sunday 22 June 2025 20:07:38 +0000 (0:00:03.130) 0:00:16.296 *********** 2025-06-22 20:08:28.596685 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-22 20:08:28.596697 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2025-06-22 20:08:28.596722 | orchestrator | 2025-06-22 20:08:28.596735 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2025-06-22 20:08:28.596748 | orchestrator | Sunday 22 June 2025 20:07:42 +0000 (0:00:04.426) 0:00:20.723 *********** 2025-06-22 20:08:28.596760 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-22 20:08:28.596772 | orchestrator | 2025-06-22 20:08:28.596784 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2025-06-22 20:08:28.596797 | orchestrator | Sunday 22 June 2025 20:07:45 +0000 (0:00:03.197) 0:00:23.920 *********** 2025-06-22 20:08:28.596810 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2025-06-22 20:08:28.596823 | orchestrator | 2025-06-22 20:08:28.596836 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-06-22 20:08:28.596848 | orchestrator | Sunday 22 June 2025 20:07:49 +0000 (0:00:03.980) 0:00:27.900 *********** 2025-06-22 20:08:28.596861 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:08:28.596873 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:08:28.596886 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:08:28.596898 | orchestrator | 2025-06-22 20:08:28.596910 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2025-06-22 20:08:28.596923 | orchestrator | Sunday 22 June 2025 20:07:50 +0000 (0:00:00.296) 0:00:28.197 *********** 2025-06-22 20:08:28.596954 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-22 20:08:28.596989 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-22 20:08:28.597015 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-22 20:08:28.597026 | orchestrator | 2025-06-22 20:08:28.597059 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2025-06-22 20:08:28.597071 | orchestrator | Sunday 22 June 2025 20:07:51 +0000 (0:00:01.080) 0:00:29.277 *********** 2025-06-22 20:08:28.597082 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:08:28.597093 | orchestrator | 2025-06-22 20:08:28.597105 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2025-06-22 20:08:28.597116 | orchestrator | Sunday 22 June 2025 20:07:51 +0000 (0:00:00.102) 0:00:29.380 *********** 2025-06-22 20:08:28.597127 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:08:28.597138 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:08:28.597149 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:08:28.597160 | 
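Note: the placement-api definition looped over above is easier to read when reduced to its key fields. A small throwaway sketch that walks one of the item dicts exactly as printed in this log (values abbreviated to the fields of interest):

# Illustrative helper: condense a service-definition item from the loop output
# above into a one-line summary (image, healthcheck test, haproxy listeners).
item = {
    "key": "placement-api",
    "value": {
        "container_name": "placement_api",
        "image": "registry.osism.tech/kolla/release/placement-api:12.0.1.20250530",
        "enabled": True,
        "healthcheck": {"test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:8780"]},
        "haproxy": {
            "placement_api": {"external": False, "port": "8780"},
            "placement_api_external": {"external": True, "port": "8780",
                                       "external_fqdn": "api.testbed.osism.xyz"},
        },
    },
}

def summarize(entry: dict) -> str:
    value = entry["value"]
    listeners = ", ".join(
        f"{name}({'ext' if cfg['external'] else 'int'}:{cfg['port']})"
        for name, cfg in value.get("haproxy", {}).items()
    )
    return (f"{entry['key']}: image={value['image'].rsplit('/', 1)[-1]} "
            f"healthcheck={value['healthcheck']['test'][-1]} haproxy=[{listeners}]")

print(summarize(item))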
orchestrator | 2025-06-22 20:08:28.597171 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-06-22 20:08:28.597182 | orchestrator | Sunday 22 June 2025 20:07:51 +0000 (0:00:00.414) 0:00:29.795 *********** 2025-06-22 20:08:28.597193 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:08:28.597205 | orchestrator | 2025-06-22 20:08:28.597215 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2025-06-22 20:08:28.597226 | orchestrator | Sunday 22 June 2025 20:07:52 +0000 (0:00:00.450) 0:00:30.245 *********** 2025-06-22 20:08:28.597244 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-22 20:08:28.597267 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-22 20:08:28.597286 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-22 
20:08:28.597298 | orchestrator | 2025-06-22 20:08:28.597309 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2025-06-22 20:08:28.597320 | orchestrator | Sunday 22 June 2025 20:07:53 +0000 (0:00:01.486) 0:00:31.732 *********** 2025-06-22 20:08:28.597332 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-22 20:08:28.597343 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:08:28.597361 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-22 20:08:28.597372 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:08:28.597391 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-22 20:08:28.597409 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:08:28.597420 | orchestrator | 2025-06-22 20:08:28.597431 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2025-06-22 20:08:28.597442 | 
orchestrator | Sunday 22 June 2025 20:07:54 +0000 (0:00:00.594) 0:00:32.327 *********** 2025-06-22 20:08:28.597453 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-22 20:08:28.597465 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:08:28.597476 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-22 20:08:28.597488 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:08:28.597504 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-22 20:08:28.597515 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:08:28.597526 | orchestrator | 2025-06-22 20:08:28.597537 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2025-06-22 20:08:28.597548 | orchestrator | Sunday 22 June 2025 20:07:54 +0000 (0:00:00.641) 0:00:32.969 *********** 2025-06-22 20:08:28.597568 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': 
{'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-22 20:08:28.597587 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-22 20:08:28.597599 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-22 20:08:28.597610 | orchestrator | 2025-06-22 20:08:28.597621 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2025-06-22 20:08:28.597632 | orchestrator | Sunday 22 June 2025 20:07:56 +0000 (0:00:01.283) 0:00:34.252 *********** 2025-06-22 20:08:28.597648 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-22 20:08:28.597661 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-22 20:08:28.597686 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-22 20:08:28.597698 | orchestrator | 2025-06-22 20:08:28.597709 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2025-06-22 20:08:28.597720 | orchestrator | Sunday 22 June 2025 20:07:58 +0000 (0:00:02.141) 0:00:36.393 *********** 2025-06-22 20:08:28.597731 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-06-22 20:08:28.597742 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-06-22 20:08:28.597753 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-06-22 20:08:28.597764 | orchestrator | 2025-06-22 20:08:28.597775 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2025-06-22 20:08:28.597786 | orchestrator | Sunday 22 June 2025 20:07:59 +0000 (0:00:01.543) 0:00:37.936 *********** 2025-06-22 20:08:28.597797 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:08:28.597808 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:08:28.597819 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:08:28.597830 | 
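Note: tasks such as "Copying over placement.conf", the wsgi configuration and migrate-db.rc above render Jinja2 templates with per-node values and push the result into /etc/kolla/<service>/ on each target. As a generic, hypothetical illustration (the template text and variables here are invented, not the actual kolla-ansible templates):

# Illustrative only: render a Jinja2 template per node, similar in spirit to
# the "Copying over placement.conf" task above.
from jinja2 import Template

template = Template(
    "[placement_database]\n"
    "connection = mysql+pymysql://{{ user }}:{{ password }}@{{ db_host }}/placement\n"
)

# Example values; the real deployment takes these from the inventory and secrets.
rendered = template.render(user="placement", password="secret", db_host="192.168.16.10")
print(rendered)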
orchestrator | 2025-06-22 20:08:28.597841 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2025-06-22 20:08:28.597852 | orchestrator | Sunday 22 June 2025 20:08:01 +0000 (0:00:01.295) 0:00:39.232 *********** 2025-06-22 20:08:28.597863 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-22 20:08:28.597875 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:08:28.597904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-22 20:08:28.597916 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:08:28.597935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-06-22 20:08:28.597947 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:08:28.597958 | orchestrator | 2025-06-22 20:08:28.597968 | orchestrator | TASK [placement : Check placement containers] ********************************** 2025-06-22 20:08:28.597979 | orchestrator | Sunday 22 June 
2025 20:08:01 +0000 (0:00:00.453) 0:00:39.686 *********** 2025-06-22 20:08:28.597991 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-22 20:08:28.598002 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-22 20:08:28.598165 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-06-22 20:08:28.598184 | orchestrator | 2025-06-22 20:08:28.598195 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2025-06-22 20:08:28.598206 | orchestrator | Sunday 22 June 2025 20:08:02 +0000 (0:00:01.361) 0:00:41.048 *********** 2025-06-22 20:08:28.598217 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:08:28.598228 | orchestrator | 2025-06-22 20:08:28.598239 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2025-06-22 20:08:28.598250 | orchestrator | Sunday 22 June 2025 20:08:05 +0000 (0:00:02.208) 0:00:43.256 *********** 
2025-06-22 20:08:28.598261 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:08:28.598272 | orchestrator | 2025-06-22 20:08:28.598283 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2025-06-22 20:08:28.598294 | orchestrator | Sunday 22 June 2025 20:08:07 +0000 (0:00:02.359) 0:00:45.616 *********** 2025-06-22 20:08:28.598313 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:08:28.598325 | orchestrator | 2025-06-22 20:08:28.598337 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-06-22 20:08:28.598348 | orchestrator | Sunday 22 June 2025 20:08:21 +0000 (0:00:13.774) 0:00:59.390 *********** 2025-06-22 20:08:28.598359 | orchestrator | 2025-06-22 20:08:28.598370 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-06-22 20:08:28.598381 | orchestrator | Sunday 22 June 2025 20:08:21 +0000 (0:00:00.061) 0:00:59.452 *********** 2025-06-22 20:08:28.598392 | orchestrator | 2025-06-22 20:08:28.598403 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-06-22 20:08:28.598414 | orchestrator | Sunday 22 June 2025 20:08:21 +0000 (0:00:00.066) 0:00:59.518 *********** 2025-06-22 20:08:28.598425 | orchestrator | 2025-06-22 20:08:28.598436 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2025-06-22 20:08:28.598447 | orchestrator | Sunday 22 June 2025 20:08:21 +0000 (0:00:00.065) 0:00:59.583 *********** 2025-06-22 20:08:28.598458 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:08:28.598469 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:08:28.598480 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:08:28.598491 | orchestrator | 2025-06-22 20:08:28.598502 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 20:08:28.598514 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-22 20:08:28.598526 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-22 20:08:28.598537 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-22 20:08:28.598548 | orchestrator | 2025-06-22 20:08:28.598559 | orchestrator | 2025-06-22 20:08:28.598570 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 20:08:28.598589 | orchestrator | Sunday 22 June 2025 20:08:26 +0000 (0:00:05.235) 0:01:04.818 *********** 2025-06-22 20:08:28.598601 | orchestrator | =============================================================================== 2025-06-22 20:08:28.598612 | orchestrator | placement : Running placement bootstrap container ---------------------- 13.77s 2025-06-22 20:08:28.598622 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.76s 2025-06-22 20:08:28.598633 | orchestrator | placement : Restart placement-api container ----------------------------- 5.24s 2025-06-22 20:08:28.598644 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.43s 2025-06-22 20:08:28.598655 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 3.98s 2025-06-22 20:08:28.598666 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.96s 
2025-06-22 20:08:28.598676 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.20s 2025-06-22 20:08:28.598687 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.13s 2025-06-22 20:08:28.598698 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.36s 2025-06-22 20:08:28.598709 | orchestrator | placement : Creating placement databases -------------------------------- 2.21s 2025-06-22 20:08:28.598720 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.14s 2025-06-22 20:08:28.598731 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.54s 2025-06-22 20:08:28.598742 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.49s 2025-06-22 20:08:28.598753 | orchestrator | placement : Check placement containers ---------------------------------- 1.36s 2025-06-22 20:08:28.598764 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.30s 2025-06-22 20:08:28.598775 | orchestrator | placement : Copying over config.json files for services ----------------- 1.28s 2025-06-22 20:08:28.598786 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.08s 2025-06-22 20:08:28.598797 | orchestrator | placement : include_tasks ----------------------------------------------- 0.99s 2025-06-22 20:08:28.598808 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.82s 2025-06-22 20:08:28.598824 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.64s 2025-06-22 20:08:28.598840 | orchestrator | 2025-06-22 20:08:28 | INFO  | Task 1fe1d1a9-d01a-4de9-87ea-ee48d37d85d0 is in state STARTED 2025-06-22 20:08:28.598851 | orchestrator | 2025-06-22 20:08:28 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:08:31.634237 | orchestrator | 2025-06-22 20:08:31 | INFO  | Task e32660a2-c720-4e13-8ac4-b8cb4d631c69 is in state STARTED 2025-06-22 20:08:31.635034 | orchestrator | 2025-06-22 20:08:31 | INFO  | Task 646366b8-91b8-42df-a017-d5aada029597 is in state STARTED 2025-06-22 20:08:31.636331 | orchestrator | 2025-06-22 20:08:31 | INFO  | Task 4a0f0d72-b0bb-4180-a478-51f31e6edf32 is in state STARTED 2025-06-22 20:08:31.638167 | orchestrator | 2025-06-22 20:08:31 | INFO  | Task 1fe1d1a9-d01a-4de9-87ea-ee48d37d85d0 is in state STARTED 2025-06-22 20:08:31.638231 | orchestrator | 2025-06-22 20:08:31 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:08:34.699780 | orchestrator | 2025-06-22 20:08:34 | INFO  | Task e32660a2-c720-4e13-8ac4-b8cb4d631c69 is in state STARTED 2025-06-22 20:08:34.701132 | orchestrator | 2025-06-22 20:08:34 | INFO  | Task 9ad8b6cf-4278-45ca-8d5b-2f3dea7e4b25 is in state STARTED 2025-06-22 20:08:34.702635 | orchestrator | 2025-06-22 20:08:34 | INFO  | Task 646366b8-91b8-42df-a017-d5aada029597 is in state SUCCESS 2025-06-22 20:08:34.704633 | orchestrator | 2025-06-22 20:08:34 | INFO  | Task 4a0f0d72-b0bb-4180-a478-51f31e6edf32 is in state STARTED 2025-06-22 20:08:34.705689 | orchestrator | 2025-06-22 20:08:34 | INFO  | Task 1fe1d1a9-d01a-4de9-87ea-ee48d37d85d0 is in state STARTED 2025-06-22 20:08:34.705920 | orchestrator | 2025-06-22 20:08:34 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:08:37.755386 | orchestrator | 2025-06-22 20:08:37 | INFO  | Task e32660a2-c720-4e13-8ac4-b8cb4d631c69 is in 
state STARTED 2025-06-22 20:08:37.758965 | orchestrator | 2025-06-22 20:08:37 | INFO  | Task 9ad8b6cf-4278-45ca-8d5b-2f3dea7e4b25 is in state STARTED 2025-06-22 20:08:37.761411 | orchestrator | 2025-06-22 20:08:37 | INFO  | Task 4a0f0d72-b0bb-4180-a478-51f31e6edf32 is in state STARTED 2025-06-22 20:08:37.763928 | orchestrator | 2025-06-22 20:08:37 | INFO  | Task 1fe1d1a9-d01a-4de9-87ea-ee48d37d85d0 is in state STARTED 2025-06-22 20:08:37.764145 | orchestrator | 2025-06-22 20:08:37 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:08:40.809005 | orchestrator | 2025-06-22 20:08:40 | INFO  | Task e32660a2-c720-4e13-8ac4-b8cb4d631c69 is in state STARTED 2025-06-22 20:08:40.810598 | orchestrator | 2025-06-22 20:08:40 | INFO  | Task 9ad8b6cf-4278-45ca-8d5b-2f3dea7e4b25 is in state STARTED 2025-06-22 20:08:40.812688 | orchestrator | 2025-06-22 20:08:40 | INFO  | Task 4a0f0d72-b0bb-4180-a478-51f31e6edf32 is in state STARTED 2025-06-22 20:08:40.820570 | orchestrator | 2025-06-22 20:08:40 | INFO  | Task 1fe1d1a9-d01a-4de9-87ea-ee48d37d85d0 is in state STARTED 2025-06-22 20:08:40.820615 | orchestrator | 2025-06-22 20:08:40 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:08:43.862315 | orchestrator | 2025-06-22 20:08:43 | INFO  | Task e32660a2-c720-4e13-8ac4-b8cb4d631c69 is in state STARTED 2025-06-22 20:08:43.863025 | orchestrator | 2025-06-22 20:08:43 | INFO  | Task 9ad8b6cf-4278-45ca-8d5b-2f3dea7e4b25 is in state STARTED 2025-06-22 20:08:43.864408 | orchestrator | 2025-06-22 20:08:43 | INFO  | Task 4a0f0d72-b0bb-4180-a478-51f31e6edf32 is in state STARTED 2025-06-22 20:08:43.866141 | orchestrator | 2025-06-22 20:08:43 | INFO  | Task 1fe1d1a9-d01a-4de9-87ea-ee48d37d85d0 is in state STARTED 2025-06-22 20:08:43.866163 | orchestrator | 2025-06-22 20:08:43 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:08:46.910543 | orchestrator | 2025-06-22 20:08:46 | INFO  | Task e32660a2-c720-4e13-8ac4-b8cb4d631c69 is in state STARTED 2025-06-22 20:08:46.911267 | orchestrator | 2025-06-22 20:08:46 | INFO  | Task 9ad8b6cf-4278-45ca-8d5b-2f3dea7e4b25 is in state STARTED 2025-06-22 20:08:46.913905 | orchestrator | 2025-06-22 20:08:46 | INFO  | Task 4a0f0d72-b0bb-4180-a478-51f31e6edf32 is in state STARTED 2025-06-22 20:08:46.917727 | orchestrator | 2025-06-22 20:08:46 | INFO  | Task 1fe1d1a9-d01a-4de9-87ea-ee48d37d85d0 is in state STARTED 2025-06-22 20:08:46.917763 | orchestrator | 2025-06-22 20:08:46 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:08:49.946595 | orchestrator | 2025-06-22 20:08:49 | INFO  | Task e32660a2-c720-4e13-8ac4-b8cb4d631c69 is in state STARTED 2025-06-22 20:08:49.946926 | orchestrator | 2025-06-22 20:08:49 | INFO  | Task 9ad8b6cf-4278-45ca-8d5b-2f3dea7e4b25 is in state STARTED 2025-06-22 20:08:49.947724 | orchestrator | 2025-06-22 20:08:49 | INFO  | Task 4a0f0d72-b0bb-4180-a478-51f31e6edf32 is in state STARTED 2025-06-22 20:08:49.948434 | orchestrator | 2025-06-22 20:08:49 | INFO  | Task 1fe1d1a9-d01a-4de9-87ea-ee48d37d85d0 is in state STARTED 2025-06-22 20:08:49.948458 | orchestrator | 2025-06-22 20:08:49 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:08:52.983244 | orchestrator | 2025-06-22 20:08:52 | INFO  | Task e32660a2-c720-4e13-8ac4-b8cb4d631c69 is in state STARTED 2025-06-22 20:08:52.983581 | orchestrator | 2025-06-22 20:08:52 | INFO  | Task 9ad8b6cf-4278-45ca-8d5b-2f3dea7e4b25 is in state STARTED 2025-06-22 20:08:52.984216 | orchestrator | 2025-06-22 20:08:52 | INFO  | Task 6f62686b-c4c9-497d-93c9-7e5f0b2c1f20 is in 
state STARTED 2025-06-22 20:08:52.984832 | orchestrator | 2025-06-22 20:08:52 | INFO  | Task 4a0f0d72-b0bb-4180-a478-51f31e6edf32 is in state STARTED 2025-06-22 20:08:52.985468 | orchestrator | 2025-06-22 20:08:52 | INFO  | Task 1fe1d1a9-d01a-4de9-87ea-ee48d37d85d0 is in state SUCCESS 2025-06-22 20:08:52.985505 | orchestrator | 2025-06-22 20:08:52 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:08:56.025103 | orchestrator | 2025-06-22 20:08:56 | INFO  | Task e32660a2-c720-4e13-8ac4-b8cb4d631c69 is in state STARTED 2025-06-22 20:08:56.025871 | orchestrator | 2025-06-22 20:08:56 | INFO  | Task 9ad8b6cf-4278-45ca-8d5b-2f3dea7e4b25 is in state STARTED 2025-06-22 20:08:56.027595 | orchestrator | 2025-06-22 20:08:56 | INFO  | Task 6f62686b-c4c9-497d-93c9-7e5f0b2c1f20 is in state STARTED 2025-06-22 20:08:56.029064 | orchestrator | 2025-06-22 20:08:56 | INFO  | Task 4a0f0d72-b0bb-4180-a478-51f31e6edf32 is in state STARTED 2025-06-22 20:08:56.029112 | orchestrator | 2025-06-22 20:08:56 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:08:59.064769 | orchestrator | 2025-06-22 20:08:59 | INFO  | Task e32660a2-c720-4e13-8ac4-b8cb4d631c69 is in state STARTED 2025-06-22 20:08:59.064900 | orchestrator | 2025-06-22 20:08:59 | INFO  | Task 9ad8b6cf-4278-45ca-8d5b-2f3dea7e4b25 is in state STARTED 2025-06-22 20:08:59.064987 | orchestrator | 2025-06-22 20:08:59 | INFO  | Task 6f62686b-c4c9-497d-93c9-7e5f0b2c1f20 is in state STARTED 2025-06-22 20:08:59.067144 | orchestrator | 2025-06-22 20:08:59 | INFO  | Task 4a0f0d72-b0bb-4180-a478-51f31e6edf32 is in state STARTED 2025-06-22 20:08:59.067191 | orchestrator | 2025-06-22 20:08:59 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:09:02.103578 | orchestrator | 2025-06-22 20:09:02 | INFO  | Task e32660a2-c720-4e13-8ac4-b8cb4d631c69 is in state STARTED 2025-06-22 20:09:02.105090 | orchestrator | 2025-06-22 20:09:02 | INFO  | Task 9ad8b6cf-4278-45ca-8d5b-2f3dea7e4b25 is in state STARTED 2025-06-22 20:09:02.105995 | orchestrator | 2025-06-22 20:09:02 | INFO  | Task 6f62686b-c4c9-497d-93c9-7e5f0b2c1f20 is in state STARTED 2025-06-22 20:09:02.106709 | orchestrator | 2025-06-22 20:09:02 | INFO  | Task 4a0f0d72-b0bb-4180-a478-51f31e6edf32 is in state STARTED 2025-06-22 20:09:02.106941 | orchestrator | 2025-06-22 20:09:02 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:09:05.149338 | orchestrator | 2025-06-22 20:09:05 | INFO  | Task e32660a2-c720-4e13-8ac4-b8cb4d631c69 is in state STARTED 2025-06-22 20:09:05.150379 | orchestrator | 2025-06-22 20:09:05 | INFO  | Task 9ad8b6cf-4278-45ca-8d5b-2f3dea7e4b25 is in state STARTED 2025-06-22 20:09:05.152125 | orchestrator | 2025-06-22 20:09:05 | INFO  | Task 6f62686b-c4c9-497d-93c9-7e5f0b2c1f20 is in state STARTED 2025-06-22 20:09:05.152817 | orchestrator | 2025-06-22 20:09:05 | INFO  | Task 4a0f0d72-b0bb-4180-a478-51f31e6edf32 is in state STARTED 2025-06-22 20:09:05.152839 | orchestrator | 2025-06-22 20:09:05 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:09:08.188215 | orchestrator | 2025-06-22 20:09:08 | INFO  | Task e32660a2-c720-4e13-8ac4-b8cb4d631c69 is in state STARTED 2025-06-22 20:09:08.189538 | orchestrator | 2025-06-22 20:09:08 | INFO  | Task 9ad8b6cf-4278-45ca-8d5b-2f3dea7e4b25 is in state STARTED 2025-06-22 20:09:08.190337 | orchestrator | 2025-06-22 20:09:08 | INFO  | Task 6f62686b-c4c9-497d-93c9-7e5f0b2c1f20 is in state STARTED 2025-06-22 20:09:08.193883 | orchestrator | 2025-06-22 20:09:08 | INFO  | Task 4a0f0d72-b0bb-4180-a478-51f31e6edf32 is in 
state STARTED 2025-06-22 20:09:08.193920 | orchestrator | 2025-06-22 20:09:08 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:09:11.261452 | orchestrator | 2025-06-22 20:09:11 | INFO  | Task e32660a2-c720-4e13-8ac4-b8cb4d631c69 is in state STARTED 2025-06-22 20:09:11.261574 | orchestrator | 2025-06-22 20:09:11 | INFO  | Task 9ad8b6cf-4278-45ca-8d5b-2f3dea7e4b25 is in state STARTED 2025-06-22 20:09:11.262788 | orchestrator | 2025-06-22 20:09:11 | INFO  | Task 6f62686b-c4c9-497d-93c9-7e5f0b2c1f20 is in state STARTED 2025-06-22 20:09:11.264185 | orchestrator | 2025-06-22 20:09:11 | INFO  | Task 4a0f0d72-b0bb-4180-a478-51f31e6edf32 is in state STARTED 2025-06-22 20:09:11.264368 | orchestrator | 2025-06-22 20:09:11 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:09:14.307469 | orchestrator | 2025-06-22 20:09:14 | INFO  | Task e32660a2-c720-4e13-8ac4-b8cb4d631c69 is in state STARTED 2025-06-22 20:09:14.307718 | orchestrator | 2025-06-22 20:09:14 | INFO  | Task 9ad8b6cf-4278-45ca-8d5b-2f3dea7e4b25 is in state STARTED 2025-06-22 20:09:14.310705 | orchestrator | 2025-06-22 20:09:14 | INFO  | Task 6f62686b-c4c9-497d-93c9-7e5f0b2c1f20 is in state STARTED 2025-06-22 20:09:14.311317 | orchestrator | 2025-06-22 20:09:14 | INFO  | Task 4a0f0d72-b0bb-4180-a478-51f31e6edf32 is in state STARTED 2025-06-22 20:09:14.311353 | orchestrator | 2025-06-22 20:09:14 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:09:17.352085 | orchestrator | 2025-06-22 20:09:17 | INFO  | Task e32660a2-c720-4e13-8ac4-b8cb4d631c69 is in state STARTED 2025-06-22 20:09:17.352170 | orchestrator | 2025-06-22 20:09:17 | INFO  | Task 9ad8b6cf-4278-45ca-8d5b-2f3dea7e4b25 is in state STARTED 2025-06-22 20:09:17.352470 | orchestrator | 2025-06-22 20:09:17 | INFO  | Task 6f62686b-c4c9-497d-93c9-7e5f0b2c1f20 is in state STARTED 2025-06-22 20:09:17.354865 | orchestrator | 2025-06-22 20:09:17 | INFO  | Task 4a0f0d72-b0bb-4180-a478-51f31e6edf32 is in state STARTED 2025-06-22 20:09:17.354890 | orchestrator | 2025-06-22 20:09:17 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:09:20.388545 | orchestrator | 2025-06-22 20:09:20 | INFO  | Task e32660a2-c720-4e13-8ac4-b8cb4d631c69 is in state STARTED 2025-06-22 20:09:20.389768 | orchestrator | 2025-06-22 20:09:20 | INFO  | Task 9ad8b6cf-4278-45ca-8d5b-2f3dea7e4b25 is in state STARTED 2025-06-22 20:09:20.389800 | orchestrator | 2025-06-22 20:09:20 | INFO  | Task 6f62686b-c4c9-497d-93c9-7e5f0b2c1f20 is in state STARTED 2025-06-22 20:09:20.390617 | orchestrator | 2025-06-22 20:09:20 | INFO  | Task 4a0f0d72-b0bb-4180-a478-51f31e6edf32 is in state STARTED 2025-06-22 20:09:20.390646 | orchestrator | 2025-06-22 20:09:20 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:09:23.418992 | orchestrator | 2025-06-22 20:09:23 | INFO  | Task e32660a2-c720-4e13-8ac4-b8cb4d631c69 is in state STARTED 2025-06-22 20:09:23.419129 | orchestrator | 2025-06-22 20:09:23 | INFO  | Task 9ad8b6cf-4278-45ca-8d5b-2f3dea7e4b25 is in state STARTED 2025-06-22 20:09:23.419476 | orchestrator | 2025-06-22 20:09:23 | INFO  | Task 6f62686b-c4c9-497d-93c9-7e5f0b2c1f20 is in state STARTED 2025-06-22 20:09:23.420106 | orchestrator | 2025-06-22 20:09:23 | INFO  | Task 4a0f0d72-b0bb-4180-a478-51f31e6edf32 is in state STARTED 2025-06-22 20:09:23.420146 | orchestrator | 2025-06-22 20:09:23 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:09:26.468914 | orchestrator | 2025-06-22 20:09:26 | INFO  | Task e32660a2-c720-4e13-8ac4-b8cb4d631c69 is in state STARTED 2025-06-22 
20:09:26.470201 | orchestrator | 2025-06-22 20:09:26 | INFO  | Task 9ad8b6cf-4278-45ca-8d5b-2f3dea7e4b25 is in state STARTED 2025-06-22 20:09:26.472009 | orchestrator | 2025-06-22 20:09:26 | INFO  | Task 6f62686b-c4c9-497d-93c9-7e5f0b2c1f20 is in state STARTED 2025-06-22 20:09:26.473583 | orchestrator | 2025-06-22 20:09:26 | INFO  | Task 4a0f0d72-b0bb-4180-a478-51f31e6edf32 is in state STARTED 2025-06-22 20:09:26.473624 | orchestrator | 2025-06-22 20:09:26 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:09:29.513709 | orchestrator | 2025-06-22 20:09:29 | INFO  | Task e32660a2-c720-4e13-8ac4-b8cb4d631c69 is in state STARTED 2025-06-22 20:09:29.513880 | orchestrator | 2025-06-22 20:09:29 | INFO  | Task b2c6a072-e0d1-4613-992c-f570f82e626a is in state STARTED 2025-06-22 20:09:29.513901 | orchestrator | 2025-06-22 20:09:29 | INFO  | Task 9ad8b6cf-4278-45ca-8d5b-2f3dea7e4b25 is in state STARTED 2025-06-22 20:09:29.513918 | orchestrator | 2025-06-22 20:09:29 | INFO  | Task 6f62686b-c4c9-497d-93c9-7e5f0b2c1f20 is in state SUCCESS 2025-06-22 20:09:29.514163 | orchestrator | 2025-06-22 20:09:29.514188 | orchestrator | 2025-06-22 20:09:29.514205 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-22 20:09:29.514222 | orchestrator | 2025-06-22 20:09:29.514238 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-22 20:09:29.514255 | orchestrator | Sunday 22 June 2025 20:08:31 +0000 (0:00:00.182) 0:00:00.182 *********** 2025-06-22 20:09:29.514271 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:09:29.514287 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:09:29.514319 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:09:29.514335 | orchestrator | 2025-06-22 20:09:29.514352 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-22 20:09:29.514367 | orchestrator | Sunday 22 June 2025 20:08:31 +0000 (0:00:00.300) 0:00:00.483 *********** 2025-06-22 20:09:29.514384 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-06-22 20:09:29.514400 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-06-22 20:09:29.514416 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-06-22 20:09:29.514432 | orchestrator | 2025-06-22 20:09:29.514448 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2025-06-22 20:09:29.514464 | orchestrator | 2025-06-22 20:09:29.514480 | orchestrator | TASK [Waiting for Keystone public port to be UP] ******************************* 2025-06-22 20:09:29.514496 | orchestrator | Sunday 22 June 2025 20:08:32 +0000 (0:00:00.506) 0:00:00.990 *********** 2025-06-22 20:09:29.514513 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:09:29.514529 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:09:29.514545 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:09:29.514562 | orchestrator | 2025-06-22 20:09:29.514577 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 20:09:29.514593 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 20:09:29.514611 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 20:09:29.514626 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 
20:09:29.514640 | orchestrator | 2025-06-22 20:09:29.514653 | orchestrator | 2025-06-22 20:09:29.514666 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 20:09:29.514679 | orchestrator | Sunday 22 June 2025 20:08:32 +0000 (0:00:00.631) 0:00:01.622 *********** 2025-06-22 20:09:29.514693 | orchestrator | =============================================================================== 2025-06-22 20:09:29.514706 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.63s 2025-06-22 20:09:29.514719 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.51s 2025-06-22 20:09:29.514732 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.30s 2025-06-22 20:09:29.514745 | orchestrator | 2025-06-22 20:09:29.514758 | orchestrator | 2025-06-22 20:09:29.514772 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2025-06-22 20:09:29.514809 | orchestrator | 2025-06-22 20:09:29.514825 | orchestrator | TASK [Ensure the destination directory exists] ********************************* 2025-06-22 20:09:29.514842 | orchestrator | Sunday 22 June 2025 20:05:08 +0000 (0:00:00.095) 0:00:00.095 *********** 2025-06-22 20:09:29.514858 | orchestrator | changed: [localhost] 2025-06-22 20:09:29.514918 | orchestrator | 2025-06-22 20:09:29.514935 | orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 2025-06-22 20:09:29.514952 | orchestrator | Sunday 22 June 2025 20:05:09 +0000 (0:00:01.151) 0:00:01.247 *********** 2025-06-22 20:09:29.514968 | orchestrator | 2025-06-22 20:09:29.514985 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-06-22 20:09:29.515030 | orchestrator | 2025-06-22 20:09:29.515065 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-06-22 20:09:29.515084 | orchestrator | changed: [localhost] 2025-06-22 20:09:29.515100 | orchestrator | 2025-06-22 20:09:29.515116 | orchestrator | TASK [Download ironic-agent kernel] ******************************************** 2025-06-22 20:09:29.515133 | orchestrator | Sunday 22 June 2025 20:08:32 +0000 (0:03:23.393) 0:03:24.640 *********** 2025-06-22 20:09:29.515146 | orchestrator | changed: [localhost] 2025-06-22 20:09:29.515159 | orchestrator | 2025-06-22 20:09:29.515172 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-22 20:09:29.515185 | orchestrator | 2025-06-22 20:09:29.515198 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-22 20:09:29.515212 | orchestrator | Sunday 22 June 2025 20:08:48 +0000 (0:00:16.023) 0:03:40.664 *********** 2025-06-22 20:09:29.515225 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:09:29.515238 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:09:29.515251 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:09:29.515264 | orchestrator | 2025-06-22 20:09:29.515277 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-22 20:09:29.515290 | orchestrator | Sunday 22 June 2025 20:08:49 +0000 (0:00:00.831) 0:03:41.495 *********** 2025-06-22 20:09:29.515303 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True 2025-06-22 20:09:29.515317 | orchestrator | ok: [testbed-node-0] => 
(item=enable_ironic_False) 2025-06-22 20:09:29.515330 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False) 2025-06-22 20:09:29.515343 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False) 2025-06-22 20:09:29.515357 | orchestrator | 2025-06-22 20:09:29.515370 | orchestrator | PLAY [Apply role ironic] ******************************************************* 2025-06-22 20:09:29.515384 | orchestrator | skipping: no hosts matched 2025-06-22 20:09:29.515398 | orchestrator | 2025-06-22 20:09:29.515410 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 20:09:29.515424 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 20:09:29.515450 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 20:09:29.515463 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 20:09:29.515477 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 20:09:29.515490 | orchestrator | 2025-06-22 20:09:29.515503 | orchestrator | 2025-06-22 20:09:29.515522 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 20:09:29.515535 | orchestrator | Sunday 22 June 2025 20:08:50 +0000 (0:00:01.047) 0:03:42.543 *********** 2025-06-22 20:09:29.515548 | orchestrator | =============================================================================== 2025-06-22 20:09:29.515561 | orchestrator | Download ironic-agent initramfs --------------------------------------- 203.39s 2025-06-22 20:09:29.515574 | orchestrator | Download ironic-agent kernel ------------------------------------------- 16.02s 2025-06-22 20:09:29.515596 | orchestrator | Ensure the destination directory exists --------------------------------- 1.15s 2025-06-22 20:09:29.515609 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.05s 2025-06-22 20:09:29.515623 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.83s 2025-06-22 20:09:29.515636 | orchestrator | 2025-06-22 20:09:29.515649 | orchestrator | 2025-06-22 20:09:29.515662 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-22 20:09:29.515675 | orchestrator | 2025-06-22 20:09:29.515688 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-22 20:09:29.515700 | orchestrator | Sunday 22 June 2025 20:08:58 +0000 (0:00:00.732) 0:00:00.732 *********** 2025-06-22 20:09:29.515713 | orchestrator | ok: [testbed-manager] 2025-06-22 20:09:29.515727 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:09:29.515740 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:09:29.515753 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:09:29.515766 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:09:29.515778 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:09:29.515791 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:09:29.515803 | orchestrator | 2025-06-22 20:09:29.515817 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-22 20:09:29.515830 | orchestrator | Sunday 22 June 2025 20:08:59 +0000 (0:00:01.234) 0:00:01.966 *********** 2025-06-22 20:09:29.515842 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 
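The two grouping tasks that open each of the plays above ("Group hosts based on Kolla action" and "Group hosts based on enabled services") build dynamic groups such as enable_ceph_rgw_True or enable_ironic_False, which later plays then target; that is why "PLAY [Apply role ironic]" is simply skipped with "no hosts matched". A minimal, assumed sketch of this group_by pattern (variable and group names are illustrative, not copied from the Kolla playbooks):

- name: Group hosts based on configuration (sketch)
  hosts: all
  gather_facts: false
  tasks:
    - name: Group hosts based on Kolla action
      ansible.builtin.group_by:
        key: "kolla_action_{{ kolla_action | default('deploy') }}"

    - name: Group hosts based on enabled services
      ansible.builtin.group_by:
        key: "{{ item }}"
      loop:
        - "enable_ceph_rgw_{{ enable_ceph_rgw | default(false) | bool }}"
        - "enable_ironic_{{ enable_ironic | default(false) | bool }}"

    # A later play targeting e.g. 'hosts: enable_ironic_True' then matches no hosts
    # when the service is disabled, producing the "skipping: no hosts matched" line.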
2025-06-22 20:09:29.515856 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2025-06-22 20:09:29.515870 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2025-06-22 20:09:29.515883 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2025-06-22 20:09:29.515896 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2025-06-22 20:09:29.515909 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2025-06-22 20:09:29.515922 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2025-06-22 20:09:29.515935 | orchestrator | 2025-06-22 20:09:29.515948 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-06-22 20:09:29.515961 | orchestrator | 2025-06-22 20:09:29.515975 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2025-06-22 20:09:29.515988 | orchestrator | Sunday 22 June 2025 20:09:00 +0000 (0:00:01.000) 0:00:02.967 *********** 2025-06-22 20:09:29.516001 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:09:29.516015 | orchestrator | 2025-06-22 20:09:29.516029 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2025-06-22 20:09:29.516042 | orchestrator | Sunday 22 June 2025 20:09:02 +0000 (0:00:01.639) 0:00:04.607 *********** 2025-06-22 20:09:29.516076 | orchestrator | changed: [testbed-manager] => (item=swift (object-store)) 2025-06-22 20:09:29.516090 | orchestrator | 2025-06-22 20:09:29.516103 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2025-06-22 20:09:29.516116 | orchestrator | Sunday 22 June 2025 20:09:05 +0000 (0:00:03.421) 0:00:08.028 *********** 2025-06-22 20:09:29.516129 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2025-06-22 20:09:29.516143 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2025-06-22 20:09:29.516156 | orchestrator | 2025-06-22 20:09:29.516169 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2025-06-22 20:09:29.516182 | orchestrator | Sunday 22 June 2025 20:09:11 +0000 (0:00:06.209) 0:00:14.238 *********** 2025-06-22 20:09:29.516195 | orchestrator | ok: [testbed-manager] => (item=service) 2025-06-22 20:09:29.516208 | orchestrator | 2025-06-22 20:09:29.516221 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2025-06-22 20:09:29.516234 | orchestrator | Sunday 22 June 2025 20:09:14 +0000 (0:00:03.081) 0:00:17.319 *********** 2025-06-22 20:09:29.516255 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-22 20:09:29.516268 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service) 2025-06-22 20:09:29.516282 | orchestrator | 2025-06-22 20:09:29.516295 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2025-06-22 20:09:29.516308 | orchestrator | Sunday 22 June 2025 20:09:18 +0000 (0:00:03.628) 0:00:20.948 *********** 2025-06-22 20:09:29.516321 | orchestrator | ok: [testbed-manager] => (item=admin) 2025-06-22 20:09:29.516335 | orchestrator | changed: [testbed-manager] => 
(item=ResellerAdmin) 2025-06-22 20:09:29.516347 | orchestrator | 2025-06-22 20:09:29.516360 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2025-06-22 20:09:29.516373 | orchestrator | Sunday 22 June 2025 20:09:23 +0000 (0:00:04.964) 0:00:25.912 *********** 2025-06-22 20:09:29.516394 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin) 2025-06-22 20:09:29.516406 | orchestrator | 2025-06-22 20:09:29.516420 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 20:09:29.516434 | orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 20:09:29.516452 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 20:09:29.516466 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 20:09:29.516479 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 20:09:29.516493 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 20:09:29.516506 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 20:09:29.516519 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 20:09:29.516532 | orchestrator | 2025-06-22 20:09:29.516545 | orchestrator | 2025-06-22 20:09:29.516558 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 20:09:29.516572 | orchestrator | Sunday 22 June 2025 20:09:27 +0000 (0:00:03.899) 0:00:29.812 *********** 2025-06-22 20:09:29.516585 | orchestrator | =============================================================================== 2025-06-22 20:09:29.516598 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.21s 2025-06-22 20:09:29.516612 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 4.96s 2025-06-22 20:09:29.516625 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 3.90s 2025-06-22 20:09:29.516638 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.63s 2025-06-22 20:09:29.516651 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.42s 2025-06-22 20:09:29.516664 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.08s 2025-06-22 20:09:29.516677 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.64s 2025-06-22 20:09:29.516690 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.23s 2025-06-22 20:09:29.516703 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.00s 2025-06-22 20:09:29.516806 | orchestrator | 2025-06-22 20:09:29 | INFO  | Task 4a0f0d72-b0bb-4180-a478-51f31e6edf32 is in state STARTED 2025-06-22 20:09:29.516821 | orchestrator | 2025-06-22 20:09:29 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:09:32.547627 | orchestrator | 2025-06-22 20:09:32 | INFO  | Task e32660a2-c720-4e13-8ac4-b8cb4d631c69 is in state STARTED 2025-06-22 20:09:32.547691 | orchestrator | 2025-06-22 20:09:32 | INFO  | Task 
b2c6a072-e0d1-4613-992c-f570f82e626a is in state STARTED 2025-06-22 20:09:32.548404 | orchestrator | 2025-06-22 20:09:32 | INFO  | Task 9ad8b6cf-4278-45ca-8d5b-2f3dea7e4b25 is in state STARTED 2025-06-22 20:09:32.549278 | orchestrator | 2025-06-22 20:09:32 | INFO  | Task 4a0f0d72-b0bb-4180-a478-51f31e6edf32 is in state STARTED 2025-06-22 20:09:32.549352 | orchestrator | 2025-06-22 20:09:32 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:09:35.599758 | orchestrator | 2025-06-22 20:09:35 | INFO  | Task e32660a2-c720-4e13-8ac4-b8cb4d631c69 is in state STARTED 2025-06-22 20:09:35.601400 | orchestrator | 2025-06-22 20:09:35 | INFO  | Task b2c6a072-e0d1-4613-992c-f570f82e626a is in state STARTED 2025-06-22 20:09:35.603548 | orchestrator | 2025-06-22 20:09:35 | INFO  | Task 9ad8b6cf-4278-45ca-8d5b-2f3dea7e4b25 is in state STARTED 2025-06-22 20:09:35.605320 | orchestrator | 2025-06-22 20:09:35 | INFO  | Task 4a0f0d72-b0bb-4180-a478-51f31e6edf32 is in state STARTED 2025-06-22 20:09:35.605365 | orchestrator | 2025-06-22 20:09:35 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:09:38.651510 | orchestrator | 2025-06-22 20:09:38 | INFO  | Task e32660a2-c720-4e13-8ac4-b8cb4d631c69 is in state STARTED 2025-06-22 20:09:38.653861 | orchestrator | 2025-06-22 20:09:38 | INFO  | Task b2c6a072-e0d1-4613-992c-f570f82e626a is in state STARTED 2025-06-22 20:09:38.658437 | orchestrator | 2025-06-22 20:09:38 | INFO  | Task 9ad8b6cf-4278-45ca-8d5b-2f3dea7e4b25 is in state STARTED 2025-06-22 20:09:38.660880 | orchestrator | 2025-06-22 20:09:38 | INFO  | Task 4a0f0d72-b0bb-4180-a478-51f31e6edf32 is in state STARTED 2025-06-22 20:09:38.661524 | orchestrator | 2025-06-22 20:09:38 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:09:41.712106 | orchestrator | 2025-06-22 20:09:41 | INFO  | Task e32660a2-c720-4e13-8ac4-b8cb4d631c69 is in state STARTED 2025-06-22 20:09:41.714590 | orchestrator | 2025-06-22 20:09:41 | INFO  | Task b2c6a072-e0d1-4613-992c-f570f82e626a is in state STARTED 2025-06-22 20:09:41.717337 | orchestrator | 2025-06-22 20:09:41 | INFO  | Task 9ad8b6cf-4278-45ca-8d5b-2f3dea7e4b25 is in state STARTED 2025-06-22 20:09:41.723284 | orchestrator | 2025-06-22 20:09:41.723351 | orchestrator | 2025-06-22 20:09:41.723381 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-22 20:09:41.723394 | orchestrator | 2025-06-22 20:09:41.723406 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-22 20:09:41.723417 | orchestrator | Sunday 22 June 2025 20:05:08 +0000 (0:00:00.414) 0:00:00.414 *********** 2025-06-22 20:09:41.723500 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:09:41.723514 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:09:41.723525 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:09:41.723537 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:09:41.723548 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:09:41.723559 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:09:41.723570 | orchestrator | 2025-06-22 20:09:41.723581 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-22 20:09:41.723593 | orchestrator | Sunday 22 June 2025 20:05:09 +0000 (0:00:00.759) 0:00:01.173 *********** 2025-06-22 20:09:41.723604 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2025-06-22 20:09:41.723615 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2025-06-22 20:09:41.723626 | 
orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2025-06-22 20:09:41.723638 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2025-06-22 20:09:41.723649 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2025-06-22 20:09:41.723660 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2025-06-22 20:09:41.723698 | orchestrator | 2025-06-22 20:09:41.723720 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2025-06-22 20:09:41.723740 | orchestrator | 2025-06-22 20:09:41.723927 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-06-22 20:09:41.723948 | orchestrator | Sunday 22 June 2025 20:05:10 +0000 (0:00:00.662) 0:00:01.835 *********** 2025-06-22 20:09:41.723969 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 20:09:41.723991 | orchestrator | 2025-06-22 20:09:41.724012 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2025-06-22 20:09:41.724031 | orchestrator | Sunday 22 June 2025 20:05:11 +0000 (0:00:01.082) 0:00:02.918 *********** 2025-06-22 20:09:41.724044 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:09:41.724088 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:09:41.724139 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:09:41.724151 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:09:41.724162 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:09:41.724172 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:09:41.724183 | orchestrator | 2025-06-22 20:09:41.724194 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2025-06-22 20:09:41.724206 | orchestrator | Sunday 22 June 2025 20:05:12 +0000 (0:00:01.210) 0:00:04.128 *********** 2025-06-22 20:09:41.724217 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:09:41.724228 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:09:41.724239 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:09:41.724250 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:09:41.724261 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:09:41.724271 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:09:41.724282 | orchestrator | 2025-06-22 20:09:41.724293 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2025-06-22 20:09:41.724304 | orchestrator | Sunday 22 June 2025 20:05:13 +0000 (0:00:01.092) 0:00:05.221 *********** 2025-06-22 20:09:41.724315 | orchestrator | ok: [testbed-node-0] => { 2025-06-22 20:09:41.724327 | orchestrator |  "changed": false, 2025-06-22 20:09:41.724338 | orchestrator |  "msg": "All assertions passed" 2025-06-22 20:09:41.724349 | orchestrator | } 2025-06-22 20:09:41.724360 | orchestrator | ok: [testbed-node-1] => { 2025-06-22 20:09:41.724371 | orchestrator |  "changed": false, 2025-06-22 20:09:41.724383 | orchestrator |  "msg": "All assertions passed" 2025-06-22 20:09:41.724394 | orchestrator | } 2025-06-22 20:09:41.724404 | orchestrator | ok: [testbed-node-2] => { 2025-06-22 20:09:41.724415 | orchestrator |  "changed": false, 2025-06-22 20:09:41.724426 | orchestrator |  "msg": "All assertions passed" 2025-06-22 20:09:41.724437 | orchestrator | } 2025-06-22 20:09:41.724448 | orchestrator | ok: [testbed-node-3] => { 2025-06-22 20:09:41.724458 | orchestrator |  "changed": false, 2025-06-22 
20:09:41.724469 | orchestrator |  "msg": "All assertions passed" 2025-06-22 20:09:41.724480 | orchestrator | } 2025-06-22 20:09:41.724491 | orchestrator | ok: [testbed-node-4] => { 2025-06-22 20:09:41.724502 | orchestrator |  "changed": false, 2025-06-22 20:09:41.724513 | orchestrator |  "msg": "All assertions passed" 2025-06-22 20:09:41.724524 | orchestrator | } 2025-06-22 20:09:41.724535 | orchestrator | ok: [testbed-node-5] => { 2025-06-22 20:09:41.724545 | orchestrator |  "changed": false, 2025-06-22 20:09:41.724556 | orchestrator |  "msg": "All assertions passed" 2025-06-22 20:09:41.724567 | orchestrator | } 2025-06-22 20:09:41.724578 | orchestrator | 2025-06-22 20:09:41.724589 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2025-06-22 20:09:41.724600 | orchestrator | Sunday 22 June 2025 20:05:14 +0000 (0:00:00.666) 0:00:05.887 *********** 2025-06-22 20:09:41.724611 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:09:41.724622 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:09:41.724632 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:09:41.724643 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:09:41.724654 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:09:41.724676 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:09:41.724687 | orchestrator | 2025-06-22 20:09:41.724698 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2025-06-22 20:09:41.724709 | orchestrator | Sunday 22 June 2025 20:05:14 +0000 (0:00:00.524) 0:00:06.412 *********** 2025-06-22 20:09:41.724720 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2025-06-22 20:09:41.724731 | orchestrator | 2025-06-22 20:09:41.724742 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2025-06-22 20:09:41.724753 | orchestrator | Sunday 22 June 2025 20:05:18 +0000 (0:00:03.464) 0:00:09.877 *********** 2025-06-22 20:09:41.724765 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2025-06-22 20:09:41.724776 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2025-06-22 20:09:41.724787 | orchestrator | 2025-06-22 20:09:41.724814 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2025-06-22 20:09:41.724833 | orchestrator | Sunday 22 June 2025 20:05:25 +0000 (0:00:06.881) 0:00:16.759 *********** 2025-06-22 20:09:41.724844 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-22 20:09:41.724855 | orchestrator | 2025-06-22 20:09:41.724866 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2025-06-22 20:09:41.724877 | orchestrator | Sunday 22 June 2025 20:05:28 +0000 (0:00:03.380) 0:00:20.140 *********** 2025-06-22 20:09:41.724888 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-22 20:09:41.724899 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2025-06-22 20:09:41.724910 | orchestrator | 2025-06-22 20:09:41.724921 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2025-06-22 20:09:41.724932 | orchestrator | Sunday 22 June 2025 20:05:32 +0000 (0:00:04.003) 0:00:24.143 *********** 2025-06-22 20:09:41.724943 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-22 20:09:41.724954 | orchestrator | 
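The service-ks-register steps shown for neutron (and, earlier, for placement and ceph-rgw) follow the standard Keystone bootstrap for an API service: create the service, its internal and public endpoints, the service project and user, then grant the required roles. A minimal, assumed sketch of the same sequence expressed as plain openstack CLI calls wrapped in Ansible command tasks — the actual role uses dedicated OpenStack modules, and the region name and password variable below are placeholders rather than values from this deployment:

- name: Register neutron in Keystone (illustrative sketch)
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Create the network service
      ansible.builtin.command: >
        openstack service create --name neutron
        --description "OpenStack Networking" network

    - name: Create the internal and public endpoints
      ansible.builtin.command: >
        openstack endpoint create --region RegionOne
        network {{ item.interface }} {{ item.url }}
      loop:
        - { interface: internal, url: "https://api-int.testbed.osism.xyz:9696" }
        - { interface: public, url: "https://api.testbed.osism.xyz:9696" }

    - name: Create the service project and the neutron user
      ansible.builtin.command: "{{ item }}"
      loop:
        - openstack project create --domain default service
        - openstack user create --domain default --password {{ neutron_keystone_password }} neutron

    - name: Grant the admin and service roles on the service project
      ansible.builtin.command: >
        openstack role add --project service --user neutron {{ item }}
      loop:
        - admin
        - service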
2025-06-22 20:09:41.724965 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2025-06-22 20:09:41.724976 | orchestrator | Sunday 22 June 2025 20:05:36 +0000 (0:00:03.714) 0:00:27.857 *********** 2025-06-22 20:09:41.724987 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2025-06-22 20:09:41.724997 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2025-06-22 20:09:41.725008 | orchestrator | 2025-06-22 20:09:41.725019 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-06-22 20:09:41.725030 | orchestrator | Sunday 22 June 2025 20:05:43 +0000 (0:00:07.757) 0:00:35.615 *********** 2025-06-22 20:09:41.725041 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:09:41.725067 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:09:41.725079 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:09:41.725089 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:09:41.725100 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:09:41.725111 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:09:41.725122 | orchestrator | 2025-06-22 20:09:41.725132 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2025-06-22 20:09:41.725143 | orchestrator | Sunday 22 June 2025 20:05:44 +0000 (0:00:00.683) 0:00:36.298 *********** 2025-06-22 20:09:41.725154 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:09:41.725165 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:09:41.725175 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:09:41.725187 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:09:41.725205 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:09:41.725220 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:09:41.725231 | orchestrator | 2025-06-22 20:09:41.725242 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2025-06-22 20:09:41.725253 | orchestrator | Sunday 22 June 2025 20:05:46 +0000 (0:00:02.134) 0:00:38.433 *********** 2025-06-22 20:09:41.725282 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:09:41.725294 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:09:41.725304 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:09:41.725315 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:09:41.725326 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:09:41.725337 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:09:41.725348 | orchestrator | 2025-06-22 20:09:41.725359 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-06-22 20:09:41.725370 | orchestrator | Sunday 22 June 2025 20:05:47 +0000 (0:00:01.023) 0:00:39.457 *********** 2025-06-22 20:09:41.725381 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:09:41.725392 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:09:41.725403 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:09:41.725414 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:09:41.725425 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:09:41.725436 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:09:41.725446 | orchestrator | 2025-06-22 20:09:41.725457 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2025-06-22 20:09:41.725468 | orchestrator | Sunday 22 June 2025 20:05:50 +0000 (0:00:02.383) 0:00:41.840 
*********** 2025-06-22 20:09:41.725483 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-22 20:09:41.725513 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-22 20:09:41.725526 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-22 20:09:41.725539 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-22 20:09:41.725558 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-22 20:09:41.725570 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-22 20:09:41.725581 | orchestrator | 2025-06-22 20:09:41.725593 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2025-06-22 20:09:41.725604 | orchestrator | Sunday 22 June 2025 20:05:53 +0000 (0:00:03.088) 0:00:44.929 *********** 2025-06-22 20:09:41.725615 | orchestrator | [WARNING]: Skipped 2025-06-22 20:09:41.725626 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2025-06-22 20:09:41.725638 | orchestrator | due to this access issue: 2025-06-22 20:09:41.725649 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2025-06-22 20:09:41.725659 | orchestrator | a directory 2025-06-22 20:09:41.725670 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-22 20:09:41.725681 | orchestrator | 2025-06-22 20:09:41.725698 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-06-22 20:09:41.725713 | orchestrator | Sunday 22 June 2025 20:05:54 +0000 (0:00:00.919) 0:00:45.848 *********** 2025-06-22 20:09:41.725725 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 20:09:41.725737 | orchestrator | 2025-06-22 20:09:41.725748 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2025-06-22 20:09:41.725759 | orchestrator | Sunday 22 June 2025 20:05:55 +0000 (0:00:01.512) 0:00:47.360 *********** 2025-06-22 20:09:41.725771 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-22 20:09:41.725789 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-22 20:09:41.725802 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-22 20:09:41.725814 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-22 20:09:41.725837 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-22 20:09:41.725850 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-22 20:09:41.725867 | orchestrator | 2025-06-22 20:09:41.725879 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2025-06-22 20:09:41.725889 | orchestrator | Sunday 22 June 2025 20:06:00 +0000 (0:00:04.679) 0:00:52.040 *********** 2025-06-22 20:09:41.725901 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-22 20:09:41.725913 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:09:41.725925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-22 20:09:41.725936 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:09:41.725957 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 20:09:41.725969 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:09:41.725981 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 20:09:41.725998 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:09:41.726009 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-22 20:09:41.726099 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:09:41.726112 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 20:09:41.726123 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:09:41.726134 | orchestrator | 2025-06-22 20:09:41.726145 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2025-06-22 20:09:41.726156 | orchestrator | Sunday 22 June 2025 20:06:03 +0000 (0:00:03.210) 0:00:55.250 *********** 2025-06-22 20:09:41.726168 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-22 20:09:41.726179 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:09:41.726205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-22 20:09:41.726224 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:09:41.726236 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 20:09:41.726247 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:09:41.726259 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 
'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-22 20:09:41.726270 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:09:41.726281 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 20:09:41.726293 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:09:41.726304 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 20:09:41.726321 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:09:41.726332 | orchestrator | 2025-06-22 20:09:41.726343 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2025-06-22 20:09:41.726373 | orchestrator | Sunday 22 June 2025 20:06:07 +0000 (0:00:03.742) 0:00:58.993 *********** 2025-06-22 20:09:41.726394 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:09:41.726415 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:09:41.726435 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:09:41.726447 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:09:41.726458 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:09:41.726468 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:09:41.726479 | orchestrator | 2025-06-22 20:09:41.726490 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2025-06-22 20:09:41.726501 | orchestrator | Sunday 22 June 
2025 20:06:10 +0000 (0:00:03.250) 0:01:02.244 *********** 2025-06-22 20:09:41.726512 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:09:41.726523 | orchestrator | 2025-06-22 20:09:41.726534 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2025-06-22 20:09:41.726545 | orchestrator | Sunday 22 June 2025 20:06:10 +0000 (0:00:00.143) 0:01:02.387 *********** 2025-06-22 20:09:41.726556 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:09:41.726567 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:09:41.726577 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:09:41.726588 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:09:41.726599 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:09:41.726609 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:09:41.726620 | orchestrator | 2025-06-22 20:09:41.726635 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2025-06-22 20:09:41.726655 | orchestrator | Sunday 22 June 2025 20:06:11 +0000 (0:00:00.996) 0:01:03.384 *********** 2025-06-22 20:09:41.726675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-22 20:09:41.726696 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:09:41.726716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-22 20:09:41.726728 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:09:41.726740 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-22 20:09:41.726758 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:09:41.726781 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 20:09:41.726793 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:09:41.726804 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 20:09:41.726815 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:09:41.726826 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 20:09:41.726838 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:09:41.726849 | orchestrator | 2025-06-22 20:09:41.726860 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2025-06-22 20:09:41.726871 | orchestrator | Sunday 22 June 2025 20:06:16 +0000 (0:00:04.460) 0:01:07.845 *********** 2025-06-22 20:09:41.726882 
| orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-22 20:09:41.726914 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-22 20:09:41.726927 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-22 20:09:41.726939 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-22 20:09:41.726951 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-22 20:09:41.726968 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-22 20:09:41.726980 | orchestrator | 2025-06-22 20:09:41.726991 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2025-06-22 20:09:41.727002 | orchestrator | Sunday 22 June 2025 20:06:20 +0000 (0:00:04.354) 0:01:12.199 *********** 2025-06-22 20:09:41.727024 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-22 20:09:41.727036 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-22 20:09:41.727048 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-22 20:09:41.727077 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-22 20:09:41.727095 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-22 20:09:41.727117 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 
6640'], 'timeout': '30'}}}) 2025-06-22 20:09:41.727129 | orchestrator | 2025-06-22 20:09:41.727140 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2025-06-22 20:09:41.727152 | orchestrator | Sunday 22 June 2025 20:06:27 +0000 (0:00:06.575) 0:01:18.775 *********** 2025-06-22 20:09:41.727163 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 20:09:41.727175 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:09:41.727186 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 20:09:41.727198 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:09:41.727209 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 20:09:41.727226 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:09:41.727237 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 
'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-22 20:09:41.727260 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-22 20:09:41.727272 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-22 20:09:41.727284 | orchestrator | 2025-06-22 20:09:41.727295 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2025-06-22 20:09:41.727306 | orchestrator | Sunday 22 June 2025 20:06:29 +0000 (0:00:02.935) 0:01:21.711 *********** 2025-06-22 20:09:41.727317 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:09:41.727328 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:09:41.727339 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:09:41.727361 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:09:41.727372 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:09:41.727383 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:09:41.727394 | orchestrator | 2025-06-22 20:09:41.727405 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2025-06-22 20:09:41.727416 | orchestrator | Sunday 22 June 2025 20:06:33 +0000 (0:00:03.244) 0:01:24.956 *********** 2025-06-22 20:09:41.727427 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 20:09:41.727439 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:09:41.727450 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 20:09:41.727462 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:09:41.727484 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 20:09:41.727496 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:09:41.727507 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-22 20:09:41.727519 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-22 20:09:41.727537 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-22 20:09:41.727549 | orchestrator | 2025-06-22 20:09:41.727559 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2025-06-22 20:09:41.727571 | orchestrator | Sunday 22 June 2025 20:06:37 +0000 (0:00:04.595) 0:01:29.552 *********** 2025-06-22 20:09:41.727582 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:09:41.727592 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:09:41.727603 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:09:41.727614 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:09:41.727625 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:09:41.727636 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:09:41.727647 | orchestrator | 2025-06-22 20:09:41.727658 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2025-06-22 20:09:41.727669 | orchestrator | Sunday 22 June 2025 20:06:40 +0000 (0:00:02.732) 0:01:32.284 *********** 2025-06-22 20:09:41.727680 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:09:41.727691 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:09:41.727702 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:09:41.727713 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:09:41.727723 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:09:41.727734 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:09:41.727745 | orchestrator | 2025-06-22 20:09:41.727756 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2025-06-22 20:09:41.727767 | orchestrator | Sunday 22 June 2025 20:06:43 +0000 (0:00:02.560) 0:01:34.845 *********** 2025-06-22 20:09:41.727778 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:09:41.727789 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:09:41.727800 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:09:41.727816 | orchestrator | skipping: 
[testbed-node-4] 2025-06-22 20:09:41.727832 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:09:41.727843 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:09:41.727854 | orchestrator | 2025-06-22 20:09:41.727865 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2025-06-22 20:09:41.727876 | orchestrator | Sunday 22 June 2025 20:06:46 +0000 (0:00:03.638) 0:01:38.483 *********** 2025-06-22 20:09:41.727886 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:09:41.727902 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:09:41.727919 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:09:41.727930 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:09:41.727941 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:09:41.727952 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:09:41.727963 | orchestrator | 2025-06-22 20:09:41.727974 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2025-06-22 20:09:41.727985 | orchestrator | Sunday 22 June 2025 20:06:49 +0000 (0:00:03.027) 0:01:41.510 *********** 2025-06-22 20:09:41.727996 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:09:41.728007 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:09:41.728018 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:09:41.728029 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:09:41.728040 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:09:41.728050 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:09:41.728075 | orchestrator | 2025-06-22 20:09:41.728086 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2025-06-22 20:09:41.728097 | orchestrator | Sunday 22 June 2025 20:06:51 +0000 (0:00:01.960) 0:01:43.471 *********** 2025-06-22 20:09:41.728108 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:09:41.728118 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:09:41.728129 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:09:41.728140 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:09:41.728151 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:09:41.728162 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:09:41.728173 | orchestrator | 2025-06-22 20:09:41.728183 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2025-06-22 20:09:41.728194 | orchestrator | Sunday 22 June 2025 20:06:53 +0000 (0:00:02.123) 0:01:45.594 *********** 2025-06-22 20:09:41.728205 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-06-22 20:09:41.728216 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:09:41.728227 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-06-22 20:09:41.728238 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:09:41.728249 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-06-22 20:09:41.728259 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:09:41.728270 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-06-22 20:09:41.728281 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:09:41.728293 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-06-22 20:09:41.728303 | 
orchestrator | skipping: [testbed-node-3] 2025-06-22 20:09:41.728314 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-06-22 20:09:41.728325 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:09:41.728336 | orchestrator | 2025-06-22 20:09:41.728347 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2025-06-22 20:09:41.728358 | orchestrator | Sunday 22 June 2025 20:06:56 +0000 (0:00:02.397) 0:01:47.992 *********** 2025-06-22 20:09:41.728370 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-22 20:09:41.728388 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:09:41.728410 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-22 20:09:41.728423 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:09:41.728434 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 20:09:41.728445 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:09:41.728456 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': 
{'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-22 20:09:41.728468 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:09:41.728550 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 20:09:41.728563 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:09:41.728575 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 20:09:41.728597 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:09:41.728608 | orchestrator | 2025-06-22 20:09:41.728619 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2025-06-22 20:09:41.728631 | orchestrator | Sunday 22 June 2025 20:06:58 +0000 (0:00:02.052) 0:01:50.045 *********** 2025-06-22 20:09:41.728654 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-22 20:09:41.728667 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:09:41.728694 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-22 20:09:41.728706 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:09:41.728718 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-22 20:09:41.728729 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:09:41.728740 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 20:09:41.728758 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:09:41.728770 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 20:09:41.728782 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:09:41.728804 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 20:09:41.728816 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:09:41.728827 | orchestrator | 2025-06-22 20:09:41.728838 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2025-06-22 20:09:41.728849 | orchestrator | Sunday 22 June 2025 20:07:00 +0000 (0:00:01.954) 0:01:52.000 *********** 2025-06-22 20:09:41.728860 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:09:41.728871 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:09:41.728882 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:09:41.728892 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:09:41.728903 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:09:41.728914 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:09:41.728925 | orchestrator | 2025-06-22 20:09:41.728936 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2025-06-22 20:09:41.728947 | orchestrator | Sunday 22 June 2025 20:07:02 +0000 (0:00:02.075) 0:01:54.076 *********** 2025-06-22 20:09:41.728958 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:09:41.728969 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:09:41.728979 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:09:41.728990 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:09:41.729001 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:09:41.729012 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:09:41.729023 | orchestrator | 2025-06-22 20:09:41.729034 | orchestrator | TASK [neutron : Copying over neutron_ovn_vpn_agent.ini] ************************ 2025-06-22 20:09:41.729045 | orchestrator | Sunday 22 June 2025 20:07:07 +0000 (0:00:05.152) 0:01:59.228 *********** 2025-06-22 20:09:41.729072 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:09:41.729083 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:09:41.729094 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:09:41.729111 | orchestrator | skipping: [testbed-node-2] 2025-06-22 
20:09:41.729122 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:09:41.729133 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:09:41.729144 | orchestrator | 2025-06-22 20:09:41.729155 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2025-06-22 20:09:41.729166 | orchestrator | Sunday 22 June 2025 20:07:09 +0000 (0:00:02.137) 0:02:01.366 *********** 2025-06-22 20:09:41.729177 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:09:41.729188 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:09:41.729199 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:09:41.729210 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:09:41.729221 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:09:41.729231 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:09:41.729242 | orchestrator | 2025-06-22 20:09:41.729253 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2025-06-22 20:09:41.729264 | orchestrator | Sunday 22 June 2025 20:07:11 +0000 (0:00:01.840) 0:02:03.206 *********** 2025-06-22 20:09:41.729275 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:09:41.729286 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:09:41.729296 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:09:41.729307 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:09:41.729318 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:09:41.729329 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:09:41.729339 | orchestrator | 2025-06-22 20:09:41.729350 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2025-06-22 20:09:41.729361 | orchestrator | Sunday 22 June 2025 20:07:13 +0000 (0:00:01.861) 0:02:05.068 *********** 2025-06-22 20:09:41.729372 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:09:41.729383 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:09:41.729394 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:09:41.729404 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:09:41.729415 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:09:41.729426 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:09:41.729436 | orchestrator | 2025-06-22 20:09:41.729447 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2025-06-22 20:09:41.729458 | orchestrator | Sunday 22 June 2025 20:07:15 +0000 (0:00:02.277) 0:02:07.345 *********** 2025-06-22 20:09:41.729470 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:09:41.729480 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:09:41.729491 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:09:41.729502 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:09:41.729512 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:09:41.729523 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:09:41.729534 | orchestrator | 2025-06-22 20:09:41.729545 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2025-06-22 20:09:41.729556 | orchestrator | Sunday 22 June 2025 20:07:19 +0000 (0:00:03.829) 0:02:11.175 *********** 2025-06-22 20:09:41.729566 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:09:41.729577 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:09:41.729588 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:09:41.729599 | orchestrator | skipping: [testbed-node-5] 
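The container definitions echoed in the loop items above all carry the same Kolla-style healthcheck block: an interval, retry count, start period, timeout, and a CMD-SHELL test such as healthcheck_curl <url> (neutron-server) or healthcheck_port <service> <port> (neutron-ovn-metadata-agent). As a minimal sketch only — not part of the job output, assuming the numeric fields are seconds and a plain Docker host, with a made-up helper name — such a block lines up with the standard docker run health options roughly as follows:

def healthcheck_to_docker_flags(hc):
    # Sketch only: map a Kolla-style healthcheck dict (as echoed in the loop
    # items above) onto equivalent `docker run` health flags. Assumes the
    # numeric fields are seconds; the helper name is illustrative.
    test = hc["test"]
    cmd = test[1] if test and test[0] == "CMD-SHELL" else " ".join(test)
    return [
        "--health-cmd", cmd,
        "--health-interval", hc["interval"] + "s",
        "--health-retries", str(hc["retries"]),
        "--health-start-period", hc["start_period"] + "s",
        "--health-timeout", hc["timeout"] + "s",
    ]

flags = healthcheck_to_docker_flags({
    "interval": "30", "retries": "3", "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.11:9696"],
    "timeout": "30",
})
# -> ['--health-cmd', 'healthcheck_curl http://192.168.16.11:9696',
#     '--health-interval', '30s', '--health-retries', '3',
#     '--health-start-period', '5s', '--health-timeout', '30s']

Kolla Ansible applies these settings itself when it (re)creates the containers; the sketch is only meant to show how the interval/retries/start_period/timeout values and the CMD-SHELL tests in the logged items correspond to Docker's container health options.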
2025-06-22 20:09:41.729610 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:09:41.729620 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:09:41.729631 | orchestrator | 2025-06-22 20:09:41.729642 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2025-06-22 20:09:41.729653 | orchestrator | Sunday 22 June 2025 20:07:22 +0000 (0:00:02.574) 0:02:13.749 *********** 2025-06-22 20:09:41.729664 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:09:41.729681 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:09:41.729698 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:09:41.729709 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:09:41.729720 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:09:41.729737 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:09:41.729748 | orchestrator | 2025-06-22 20:09:41.729759 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2025-06-22 20:09:41.729770 | orchestrator | Sunday 22 June 2025 20:07:24 +0000 (0:00:02.499) 0:02:16.249 *********** 2025-06-22 20:09:41.729781 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:09:41.729792 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:09:41.729803 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:09:41.729814 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:09:41.729825 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:09:41.729835 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:09:41.729846 | orchestrator | 2025-06-22 20:09:41.729857 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2025-06-22 20:09:41.729868 | orchestrator | Sunday 22 June 2025 20:07:27 +0000 (0:00:03.095) 0:02:19.344 *********** 2025-06-22 20:09:41.729879 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-06-22 20:09:41.729891 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:09:41.729902 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-06-22 20:09:41.729913 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:09:41.729924 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-06-22 20:09:41.729935 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:09:41.729946 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-06-22 20:09:41.729957 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:09:41.729968 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-06-22 20:09:41.729979 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:09:41.729990 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-06-22 20:09:41.730000 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:09:41.730038 | orchestrator | 2025-06-22 20:09:41.730097 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2025-06-22 20:09:41.730110 | orchestrator | Sunday 22 June 2025 20:07:30 +0000 (0:00:02.747) 0:02:22.092 *********** 2025-06-22 20:09:41.730122 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 
'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-22 20:09:41.730133 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:09:41.730145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-22 20:09:41.730164 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:09:41.730189 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-06-22 20:09:41.730201 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:09:41.730212 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 20:09:41.730224 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:09:41.730235 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 20:09:41.730246 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:09:41.730257 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-06-22 20:09:41.730269 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:09:41.730280 | orchestrator | 2025-06-22 20:09:41.730291 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2025-06-22 20:09:41.730302 | orchestrator | Sunday 22 June 2025 20:07:32 +0000 (0:00:01.842) 0:02:23.934 *********** 2025-06-22 20:09:41.730319 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-22 20:09:41.730343 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-22 20:09:41.730355 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-22 20:09:41.730367 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-22 20:09:41.730379 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-06-22 20:09:41.730397 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-06-22 20:09:41.730408 | orchestrator | 2025-06-22 20:09:41.730420 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-06-22 20:09:41.730440 | orchestrator | Sunday 22 June 2025 20:07:35 +0000 (0:00:03.105) 0:02:27.039 *********** 2025-06-22 20:09:41.730452 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:09:41.730463 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:09:41.730474 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:09:41.730485 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:09:41.730496 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:09:41.730507 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:09:41.730518 | orchestrator | 2025-06-22 20:09:41.730529 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2025-06-22 20:09:41.730540 | orchestrator | Sunday 22 June 2025 20:07:35 +0000 (0:00:00.616) 0:02:27.656 *********** 2025-06-22 20:09:41.730551 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:09:41.730562 | orchestrator | 2025-06-22 20:09:41.730573 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2025-06-22 20:09:41.730584 | orchestrator | Sunday 22 June 2025 20:07:38 +0000 (0:00:02.224) 0:02:29.880 *********** 2025-06-22 20:09:41.730595 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:09:41.730606 | orchestrator | 2025-06-22 20:09:41.730617 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2025-06-22 20:09:41.730628 | orchestrator | Sunday 22 June 2025 20:07:40 +0000 (0:00:02.536) 0:02:32.417 *********** 2025-06-22 20:09:41.730639 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:09:41.730648 | orchestrator | 2025-06-22 20:09:41.730658 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-06-22 20:09:41.730668 | orchestrator | Sunday 22 June 2025 20:08:21 +0000 (0:00:40.840) 0:03:13.257 *********** 2025-06-22 20:09:41.730678 | orchestrator | 2025-06-22 20:09:41.730687 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-06-22 20:09:41.730697 | orchestrator | Sunday 22 June 2025 20:08:21 +0000 (0:00:00.064) 0:03:13.322 *********** 2025-06-22 20:09:41.730707 | orchestrator | 2025-06-22 20:09:41.730716 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-06-22 20:09:41.730726 | orchestrator | Sunday 22 June 2025 20:08:21 +0000 (0:00:00.268) 0:03:13.590 *********** 2025-06-22 20:09:41.730736 | orchestrator | 2025-06-22 20:09:41.730746 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-06-22 20:09:41.730756 | orchestrator | Sunday 22 June 2025 20:08:21 +0000 (0:00:00.063) 0:03:13.654 *********** 2025-06-22 20:09:41.730765 | orchestrator | 2025-06-22 20:09:41.730775 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-06-22 20:09:41.730785 | orchestrator | Sunday 22 June 2025 20:08:21 +0000 (0:00:00.064) 0:03:13.719 
*********** 2025-06-22 20:09:41.730795 | orchestrator | 2025-06-22 20:09:41.730805 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-06-22 20:09:41.730820 | orchestrator | Sunday 22 June 2025 20:08:22 +0000 (0:00:00.067) 0:03:13.786 *********** 2025-06-22 20:09:41.730829 | orchestrator | 2025-06-22 20:09:41.730839 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2025-06-22 20:09:41.730849 | orchestrator | Sunday 22 June 2025 20:08:22 +0000 (0:00:00.065) 0:03:13.852 *********** 2025-06-22 20:09:41.730859 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:09:41.730868 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:09:41.730878 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:09:41.730888 | orchestrator | 2025-06-22 20:09:41.730898 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2025-06-22 20:09:41.730907 | orchestrator | Sunday 22 June 2025 20:08:45 +0000 (0:00:22.982) 0:03:36.834 *********** 2025-06-22 20:09:41.730917 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:09:41.730927 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:09:41.730936 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:09:41.730946 | orchestrator | 2025-06-22 20:09:41.730956 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 20:09:41.730966 | orchestrator | testbed-node-0 : ok=27  changed=16  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-06-22 20:09:41.730976 | orchestrator | testbed-node-1 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-06-22 20:09:41.730986 | orchestrator | testbed-node-2 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-06-22 20:09:41.730996 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-06-22 20:09:41.731006 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-06-22 20:09:41.731016 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-06-22 20:09:41.731025 | orchestrator | 2025-06-22 20:09:41.731035 | orchestrator | 2025-06-22 20:09:41.731045 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 20:09:41.731070 | orchestrator | Sunday 22 June 2025 20:09:40 +0000 (0:00:55.302) 0:04:32.137 *********** 2025-06-22 20:09:41.731080 | orchestrator | =============================================================================== 2025-06-22 20:09:41.731090 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 55.30s 2025-06-22 20:09:41.731100 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 40.84s 2025-06-22 20:09:41.731109 | orchestrator | neutron : Restart neutron-server container ----------------------------- 22.98s 2025-06-22 20:09:41.731119 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 7.76s 2025-06-22 20:09:41.731134 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.88s 2025-06-22 20:09:41.731148 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 6.58s 2025-06-22 20:09:41.731158 | orchestrator | neutron : Copying over 
neutron_ovn_metadata_agent.ini ------------------- 5.15s 2025-06-22 20:09:41.731168 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 4.68s 2025-06-22 20:09:41.731178 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 4.60s 2025-06-22 20:09:41.731187 | orchestrator | neutron : Copying over existing policy file ----------------------------- 4.46s 2025-06-22 20:09:41.731197 | orchestrator | neutron : Copying over config.json files for services ------------------- 4.35s 2025-06-22 20:09:41.731207 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 4.00s 2025-06-22 20:09:41.731216 | orchestrator | neutron : Copying over ovn_agent.ini ------------------------------------ 3.83s 2025-06-22 20:09:41.731235 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 3.74s 2025-06-22 20:09:41.731245 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.71s 2025-06-22 20:09:41.731255 | orchestrator | neutron : Copying over sriov_agent.ini ---------------------------------- 3.64s 2025-06-22 20:09:41.731265 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.47s 2025-06-22 20:09:41.731274 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.38s 2025-06-22 20:09:41.731284 | orchestrator | neutron : Creating TLS backend PEM File --------------------------------- 3.25s 2025-06-22 20:09:41.731294 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 3.24s 2025-06-22 20:09:41.731304 | orchestrator | 2025-06-22 20:09:41 | INFO  | Task 4a0f0d72-b0bb-4180-a478-51f31e6edf32 is in state SUCCESS 2025-06-22 20:09:41.731314 | orchestrator | 2025-06-22 20:09:41 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:09:44.764746 | orchestrator | 2025-06-22 20:09:44 | INFO  | Task e32660a2-c720-4e13-8ac4-b8cb4d631c69 is in state STARTED 2025-06-22 20:09:44.765812 | orchestrator | 2025-06-22 20:09:44 | INFO  | Task b2c6a072-e0d1-4613-992c-f570f82e626a is in state STARTED 2025-06-22 20:09:44.766389 | orchestrator | 2025-06-22 20:09:44 | INFO  | Task 9ad8b6cf-4278-45ca-8d5b-2f3dea7e4b25 is in state STARTED 2025-06-22 20:09:44.767321 | orchestrator | 2025-06-22 20:09:44 | INFO  | Task 4136a3fe-1dd6-4ce5-bf47-592f07e29278 is in state STARTED 2025-06-22 20:09:44.767347 | orchestrator | 2025-06-22 20:09:44 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:09:47.799280 | orchestrator | 2025-06-22 20:09:47 | INFO  | Task e32660a2-c720-4e13-8ac4-b8cb4d631c69 is in state STARTED 2025-06-22 20:09:47.800164 | orchestrator | 2025-06-22 20:09:47 | INFO  | Task b2c6a072-e0d1-4613-992c-f570f82e626a is in state STARTED 2025-06-22 20:09:47.801210 | orchestrator | 2025-06-22 20:09:47 | INFO  | Task 9ad8b6cf-4278-45ca-8d5b-2f3dea7e4b25 is in state STARTED 2025-06-22 20:09:47.802272 | orchestrator | 2025-06-22 20:09:47 | INFO  | Task 4136a3fe-1dd6-4ce5-bf47-592f07e29278 is in state STARTED 2025-06-22 20:09:47.802457 | orchestrator | 2025-06-22 20:09:47 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:09:50.850338 | orchestrator | 2025-06-22 20:09:50 | INFO  | Task e32660a2-c720-4e13-8ac4-b8cb4d631c69 is in state STARTED 2025-06-22 20:09:50.850827 | orchestrator | 2025-06-22 20:09:50 | INFO  | Task b2c6a072-e0d1-4613-992c-f570f82e626a is in state STARTED 2025-06-22 20:09:50.852152 | orchestrator | 
2025-06-22 20:09:50 | INFO  | Task 9ad8b6cf-4278-45ca-8d5b-2f3dea7e4b25 is in state STARTED 2025-06-22 20:09:50.853178 | orchestrator | 2025-06-22 20:09:50 | INFO  | Task 4136a3fe-1dd6-4ce5-bf47-592f07e29278 is in state STARTED 2025-06-22 20:09:50.853209 | orchestrator | 2025-06-22 20:09:50 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:09:53.887982 | orchestrator | 2025-06-22 20:09:53 | INFO  | Task e32660a2-c720-4e13-8ac4-b8cb4d631c69 is in state STARTED 2025-06-22 20:09:53.889472 | orchestrator | 2025-06-22 20:09:53 | INFO  | Task b2c6a072-e0d1-4613-992c-f570f82e626a is in state STARTED 2025-06-22 20:09:53.890765 | orchestrator | 2025-06-22 20:09:53 | INFO  | Task 9ad8b6cf-4278-45ca-8d5b-2f3dea7e4b25 is in state STARTED 2025-06-22 20:09:53.893042 | orchestrator | 2025-06-22 20:09:53 | INFO  | Task 4136a3fe-1dd6-4ce5-bf47-592f07e29278 is in state STARTED 2025-06-22 20:09:53.893069 | orchestrator | 2025-06-22 20:09:53 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:09:56.924084 | orchestrator | 2025-06-22 20:09:56 | INFO  | Task e32660a2-c720-4e13-8ac4-b8cb4d631c69 is in state STARTED 2025-06-22 20:09:56.926348 | orchestrator | 2025-06-22 20:09:56 | INFO  | Task b2c6a072-e0d1-4613-992c-f570f82e626a is in state STARTED 2025-06-22 20:09:56.928308 | orchestrator | 2025-06-22 20:09:56 | INFO  | Task 9ad8b6cf-4278-45ca-8d5b-2f3dea7e4b25 is in state STARTED 2025-06-22 20:09:56.930341 | orchestrator | 2025-06-22 20:09:56 | INFO  | Task 4136a3fe-1dd6-4ce5-bf47-592f07e29278 is in state STARTED 2025-06-22 20:09:56.930584 | orchestrator | 2025-06-22 20:09:56 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:09:59.959532 | orchestrator | 2025-06-22 20:09:59 | INFO  | Task e32660a2-c720-4e13-8ac4-b8cb4d631c69 is in state STARTED 2025-06-22 20:09:59.959616 | orchestrator | 2025-06-22 20:09:59 | INFO  | Task b2c6a072-e0d1-4613-992c-f570f82e626a is in state STARTED 2025-06-22 20:09:59.959630 | orchestrator | 2025-06-22 20:09:59 | INFO  | Task 9ad8b6cf-4278-45ca-8d5b-2f3dea7e4b25 is in state STARTED 2025-06-22 20:09:59.959642 | orchestrator | 2025-06-22 20:09:59 | INFO  | Task 4136a3fe-1dd6-4ce5-bf47-592f07e29278 is in state STARTED 2025-06-22 20:09:59.959653 | orchestrator | 2025-06-22 20:09:59 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:10:03.003462 | orchestrator | 2025-06-22 20:10:02 | INFO  | Task e32660a2-c720-4e13-8ac4-b8cb4d631c69 is in state STARTED 2025-06-22 20:10:03.003567 | orchestrator | 2025-06-22 20:10:03 | INFO  | Task b2c6a072-e0d1-4613-992c-f570f82e626a is in state STARTED 2025-06-22 20:10:03.004834 | orchestrator | 2025-06-22 20:10:03 | INFO  | Task 9ad8b6cf-4278-45ca-8d5b-2f3dea7e4b25 is in state STARTED 2025-06-22 20:10:03.007743 | orchestrator | 2025-06-22 20:10:03 | INFO  | Task 4136a3fe-1dd6-4ce5-bf47-592f07e29278 is in state STARTED 2025-06-22 20:10:03.007793 | orchestrator | 2025-06-22 20:10:03 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:10:06.048691 | orchestrator | 2025-06-22 20:10:06 | INFO  | Task e32660a2-c720-4e13-8ac4-b8cb4d631c69 is in state STARTED 2025-06-22 20:10:06.048789 | orchestrator | 2025-06-22 20:10:06 | INFO  | Task b2c6a072-e0d1-4613-992c-f570f82e626a is in state STARTED 2025-06-22 20:10:06.051770 | orchestrator | 2025-06-22 20:10:06 | INFO  | Task 9ad8b6cf-4278-45ca-8d5b-2f3dea7e4b25 is in state STARTED 2025-06-22 20:10:06.051796 | orchestrator | 2025-06-22 20:10:06 | INFO  | Task 4136a3fe-1dd6-4ce5-bf47-592f07e29278 is in state STARTED 2025-06-22 20:10:06.051809 | orchestrator | 
2025-06-22 20:10:06 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:10:09.096683 | orchestrator | 2025-06-22 20:10:09 | INFO  | Task e32660a2-c720-4e13-8ac4-b8cb4d631c69 is in state STARTED 2025-06-22 20:10:09.096776 | orchestrator | 2025-06-22 20:10:09 | INFO  | Task b2c6a072-e0d1-4613-992c-f570f82e626a is in state STARTED 2025-06-22 20:10:09.096792 | orchestrator | 2025-06-22 20:10:09 | INFO  | Task 9ad8b6cf-4278-45ca-8d5b-2f3dea7e4b25 is in state STARTED 2025-06-22 20:10:09.097790 | orchestrator | 2025-06-22 20:10:09 | INFO  | Task 4136a3fe-1dd6-4ce5-bf47-592f07e29278 is in state STARTED 2025-06-22 20:10:09.097834 | orchestrator | 2025-06-22 20:10:09 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:10:12.136018 | orchestrator | 2025-06-22 20:10:12 | INFO  | Task e32660a2-c720-4e13-8ac4-b8cb4d631c69 is in state STARTED 2025-06-22 20:10:12.136568 | orchestrator | 2025-06-22 20:10:12 | INFO  | Task b2c6a072-e0d1-4613-992c-f570f82e626a is in state STARTED 2025-06-22 20:10:12.139721 | orchestrator | 2025-06-22 20:10:12 | INFO  | Task 9ad8b6cf-4278-45ca-8d5b-2f3dea7e4b25 is in state STARTED 2025-06-22 20:10:12.139799 | orchestrator | 2025-06-22 20:10:12 | INFO  | Task 4136a3fe-1dd6-4ce5-bf47-592f07e29278 is in state STARTED 2025-06-22 20:10:12.139839 | orchestrator | 2025-06-22 20:10:12 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:10:15.196458 | orchestrator | 2025-06-22 20:10:15 | INFO  | Task e32660a2-c720-4e13-8ac4-b8cb4d631c69 is in state STARTED 2025-06-22 20:10:15.197920 | orchestrator | 2025-06-22 20:10:15 | INFO  | Task b2c6a072-e0d1-4613-992c-f570f82e626a is in state STARTED 2025-06-22 20:10:15.199499 | orchestrator | 2025-06-22 20:10:15 | INFO  | Task 9ad8b6cf-4278-45ca-8d5b-2f3dea7e4b25 is in state STARTED 2025-06-22 20:10:15.202963 | orchestrator | 2025-06-22 20:10:15 | INFO  | Task 4136a3fe-1dd6-4ce5-bf47-592f07e29278 is in state STARTED 2025-06-22 20:10:15.203948 | orchestrator | 2025-06-22 20:10:15 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:10:18.249679 | orchestrator | 2025-06-22 20:10:18 | INFO  | Task e32660a2-c720-4e13-8ac4-b8cb4d631c69 is in state SUCCESS 2025-06-22 20:10:18.250879 | orchestrator | 2025-06-22 20:10:18.250917 | orchestrator | 2025-06-22 20:10:18.250931 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-22 20:10:18.250944 | orchestrator | 2025-06-22 20:10:18.251019 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-22 20:10:18.251034 | orchestrator | Sunday 22 June 2025 20:08:11 +0000 (0:00:00.231) 0:00:00.231 *********** 2025-06-22 20:10:18.251046 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:10:18.251133 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:10:18.251159 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:10:18.251172 | orchestrator | 2025-06-22 20:10:18.251183 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-22 20:10:18.251202 | orchestrator | Sunday 22 June 2025 20:08:11 +0000 (0:00:00.276) 0:00:00.508 *********** 2025-06-22 20:10:18.251220 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2025-06-22 20:10:18.251240 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2025-06-22 20:10:18.251258 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2025-06-22 20:10:18.251276 | orchestrator | 2025-06-22 20:10:18.251296 | orchestrator | PLAY [Apply role magnum] 
******************************************************* 2025-06-22 20:10:18.251308 | orchestrator | 2025-06-22 20:10:18.251318 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-06-22 20:10:18.251330 | orchestrator | Sunday 22 June 2025 20:08:11 +0000 (0:00:00.364) 0:00:00.872 *********** 2025-06-22 20:10:18.251340 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:10:18.251352 | orchestrator | 2025-06-22 20:10:18.251363 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2025-06-22 20:10:18.251374 | orchestrator | Sunday 22 June 2025 20:08:12 +0000 (0:00:00.463) 0:00:01.335 *********** 2025-06-22 20:10:18.251386 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2025-06-22 20:10:18.251397 | orchestrator | 2025-06-22 20:10:18.251408 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2025-06-22 20:10:18.251418 | orchestrator | Sunday 22 June 2025 20:08:15 +0000 (0:00:03.550) 0:00:04.886 *********** 2025-06-22 20:10:18.251429 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2025-06-22 20:10:18.251440 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2025-06-22 20:10:18.251451 | orchestrator | 2025-06-22 20:10:18.251462 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2025-06-22 20:10:18.251474 | orchestrator | Sunday 22 June 2025 20:08:22 +0000 (0:00:06.341) 0:00:11.227 *********** 2025-06-22 20:10:18.251484 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-22 20:10:18.251495 | orchestrator | 2025-06-22 20:10:18.251506 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2025-06-22 20:10:18.251517 | orchestrator | Sunday 22 June 2025 20:08:25 +0000 (0:00:03.361) 0:00:14.589 *********** 2025-06-22 20:10:18.251551 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-22 20:10:18.251563 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2025-06-22 20:10:18.251574 | orchestrator | 2025-06-22 20:10:18.251585 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2025-06-22 20:10:18.251596 | orchestrator | Sunday 22 June 2025 20:08:29 +0000 (0:00:03.883) 0:00:18.472 *********** 2025-06-22 20:10:18.251607 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-22 20:10:18.251618 | orchestrator | 2025-06-22 20:10:18.251629 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2025-06-22 20:10:18.251640 | orchestrator | Sunday 22 June 2025 20:08:33 +0000 (0:00:03.543) 0:00:22.015 *********** 2025-06-22 20:10:18.251651 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2025-06-22 20:10:18.251661 | orchestrator | 2025-06-22 20:10:18.251673 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2025-06-22 20:10:18.251683 | orchestrator | Sunday 22 June 2025 20:08:37 +0000 (0:00:04.102) 0:00:26.117 *********** 2025-06-22 20:10:18.251694 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:10:18.251705 | orchestrator | 2025-06-22 20:10:18.251716 | orchestrator | TASK [magnum : Creating Magnum trustee user] 
*********************************** 2025-06-22 20:10:18.251727 | orchestrator | Sunday 22 June 2025 20:08:40 +0000 (0:00:03.255) 0:00:29.372 *********** 2025-06-22 20:10:18.251738 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:10:18.251749 | orchestrator | 2025-06-22 20:10:18.251760 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2025-06-22 20:10:18.251771 | orchestrator | Sunday 22 June 2025 20:08:43 +0000 (0:00:03.496) 0:00:32.869 *********** 2025-06-22 20:10:18.251782 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:10:18.251793 | orchestrator | 2025-06-22 20:10:18.251804 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2025-06-22 20:10:18.251815 | orchestrator | Sunday 22 June 2025 20:08:47 +0000 (0:00:03.783) 0:00:36.653 *********** 2025-06-22 20:10:18.251844 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-22 20:10:18.251866 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-22 20:10:18.251879 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': 
'9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-22 20:10:18.251899 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:10:18.251911 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:10:18.251930 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:10:18.251942 | orchestrator | 2025-06-22 20:10:18.251953 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2025-06-22 20:10:18.251969 | orchestrator | Sunday 22 June 2025 20:08:49 +0000 (0:00:01.984) 0:00:38.637 *********** 2025-06-22 20:10:18.251980 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:10:18.251991 | orchestrator | 2025-06-22 20:10:18.252002 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2025-06-22 20:10:18.252012 | orchestrator | Sunday 22 June 2025 20:08:50 +0000 (0:00:00.323) 0:00:38.961 *********** 2025-06-22 20:10:18.252023 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:10:18.252034 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:10:18.252045 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:10:18.252078 | orchestrator | 2025-06-22 20:10:18.252089 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2025-06-22 
20:10:18.252100 | orchestrator | Sunday 22 June 2025 20:08:50 +0000 (0:00:00.738) 0:00:39.699 *********** 2025-06-22 20:10:18.252118 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-22 20:10:18.252128 | orchestrator | 2025-06-22 20:10:18.252139 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2025-06-22 20:10:18.252150 | orchestrator | Sunday 22 June 2025 20:08:52 +0000 (0:00:01.403) 0:00:41.103 *********** 2025-06-22 20:10:18.252162 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-22 20:10:18.252174 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-22 20:10:18.252186 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-22 20:10:18.252210 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:10:18.252223 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:10:18.252241 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:10:18.252253 | orchestrator | 2025-06-22 20:10:18.252264 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2025-06-22 20:10:18.252275 | orchestrator | Sunday 22 June 2025 20:08:55 +0000 (0:00:03.154) 0:00:44.258 *********** 2025-06-22 20:10:18.252286 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:10:18.252297 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:10:18.252308 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:10:18.252319 | orchestrator | 2025-06-22 20:10:18.252330 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-06-22 20:10:18.252341 | orchestrator | Sunday 22 June 2025 20:08:56 +0000 (0:00:01.005) 0:00:45.263 *********** 2025-06-22 20:10:18.252352 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:10:18.252363 | orchestrator | 2025-06-22 20:10:18.252374 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2025-06-22 20:10:18.252385 | orchestrator | Sunday 22 June 2025 20:08:57 +0000 (0:00:00.945) 0:00:46.209 *********** 2025-06-22 20:10:18.252396 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-22 20:10:18.252420 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-22 20:10:18.252438 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-22 20:10:18.252450 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:10:18.252462 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 
'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:10:18.252492 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:10:18.252503 | orchestrator | 2025-06-22 20:10:18.252515 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2025-06-22 20:10:18.252526 | orchestrator | Sunday 22 June 2025 20:09:00 +0000 (0:00:03.380) 0:00:49.590 *********** 2025-06-22 20:10:18.252549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-22 20:10:18.252567 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-22 20:10:18.252579 | orchestrator | skipping: 
[testbed-node-0] 2025-06-22 20:10:18.252591 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-22 20:10:18.252603 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-22 20:10:18.252614 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:10:18.252625 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-22 20:10:18.252660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': 
'30'}}})  2025-06-22 20:10:18.252673 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:10:18.252684 | orchestrator | 2025-06-22 20:10:18.252695 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2025-06-22 20:10:18.252706 | orchestrator | Sunday 22 June 2025 20:09:01 +0000 (0:00:00.767) 0:00:50.358 *********** 2025-06-22 20:10:18.252717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-22 20:10:18.252729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-22 20:10:18.252741 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:10:18.252752 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-22 20:10:18.252769 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-22 20:10:18.252788 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:10:18.252803 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-22 20:10:18.252815 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-22 20:10:18.252827 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:10:18.252838 | orchestrator | 2025-06-22 20:10:18.252849 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2025-06-22 20:10:18.252860 | orchestrator | Sunday 22 June 2025 20:09:02 +0000 (0:00:01.321) 0:00:51.680 *********** 2025-06-22 20:10:18.252871 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-22 20:10:18.252883 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-22 20:10:18.253295 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-22 20:10:18.253328 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:10:18.253340 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:10:18.253352 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 
'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:10:18.253363 | orchestrator | 2025-06-22 20:10:18.253375 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2025-06-22 20:10:18.253396 | orchestrator | Sunday 22 June 2025 20:09:06 +0000 (0:00:03.504) 0:00:55.184 *********** 2025-06-22 20:10:18.253408 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-22 20:10:18.253434 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-22 20:10:18.253447 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-22 20:10:18.253458 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:10:18.253470 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:10:18.253487 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:10:18.253499 | orchestrator | 2025-06-22 20:10:18.253510 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2025-06-22 20:10:18.253526 | orchestrator | Sunday 22 June 2025 20:09:17 +0000 (0:00:11.688) 0:01:06.873 *********** 2025-06-22 20:10:18.253542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': 
{'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-22 20:10:18.253555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-22 20:10:18.253566 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:10:18.253578 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-22 20:10:18.253595 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-22 20:10:18.253606 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:10:18.253624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-06-22 20:10:18.253640 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-06-22 20:10:18.253652 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:10:18.253663 | orchestrator | 2025-06-22 20:10:18.253674 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2025-06-22 20:10:18.253685 | orchestrator | Sunday 22 June 2025 20:09:20 +0000 (0:00:02.106) 0:01:08.980 *********** 2025-06-22 20:10:18.253696 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-22 20:10:18.253708 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-22 20:10:18.253726 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 
'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-06-22 20:10:18.253747 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:10:18.253760 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:10:18.253771 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:10:18.253783 | orchestrator | 2025-06-22 20:10:18.253793 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-06-22 20:10:18.253811 | orchestrator | Sunday 22 June 2025 20:09:22 +0000 (0:00:02.759) 0:01:11.739 *********** 2025-06-22 20:10:18.253822 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:10:18.253835 | orchestrator | 
skipping: [testbed-node-1] 2025-06-22 20:10:18.253848 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:10:18.253860 | orchestrator | 2025-06-22 20:10:18.253872 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2025-06-22 20:10:18.253885 | orchestrator | Sunday 22 June 2025 20:09:23 +0000 (0:00:00.250) 0:01:11.990 *********** 2025-06-22 20:10:18.253896 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:10:18.253909 | orchestrator | 2025-06-22 20:10:18.253921 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2025-06-22 20:10:18.253934 | orchestrator | Sunday 22 June 2025 20:09:25 +0000 (0:00:02.107) 0:01:14.097 *********** 2025-06-22 20:10:18.253945 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:10:18.253956 | orchestrator | 2025-06-22 20:10:18.253967 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2025-06-22 20:10:18.253978 | orchestrator | Sunday 22 June 2025 20:09:27 +0000 (0:00:02.332) 0:01:16.430 *********** 2025-06-22 20:10:18.253988 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:10:18.253999 | orchestrator | 2025-06-22 20:10:18.254010 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-06-22 20:10:18.254127 | orchestrator | Sunday 22 June 2025 20:09:42 +0000 (0:00:15.421) 0:01:31.851 *********** 2025-06-22 20:10:18.254140 | orchestrator | 2025-06-22 20:10:18.254151 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-06-22 20:10:18.254161 | orchestrator | Sunday 22 June 2025 20:09:43 +0000 (0:00:00.076) 0:01:31.927 *********** 2025-06-22 20:10:18.254172 | orchestrator | 2025-06-22 20:10:18.254207 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-06-22 20:10:18.254219 | orchestrator | Sunday 22 June 2025 20:09:43 +0000 (0:00:00.059) 0:01:31.986 *********** 2025-06-22 20:10:18.254230 | orchestrator | 2025-06-22 20:10:18.254241 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2025-06-22 20:10:18.254252 | orchestrator | Sunday 22 June 2025 20:09:43 +0000 (0:00:00.060) 0:01:32.047 *********** 2025-06-22 20:10:18.254263 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:10:18.254273 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:10:18.254284 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:10:18.254295 | orchestrator | 2025-06-22 20:10:18.254306 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2025-06-22 20:10:18.254317 | orchestrator | Sunday 22 June 2025 20:10:00 +0000 (0:00:17.368) 0:01:49.415 *********** 2025-06-22 20:10:18.254328 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:10:18.254339 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:10:18.254349 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:10:18.254360 | orchestrator | 2025-06-22 20:10:18.254379 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 20:10:18.254392 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-06-22 20:10:18.254409 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-06-22 20:10:18.254421 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 
skipped=5  rescued=0 ignored=0 2025-06-22 20:10:18.254432 | orchestrator | 2025-06-22 20:10:18.254442 | orchestrator | 2025-06-22 20:10:18.254453 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 20:10:18.254464 | orchestrator | Sunday 22 June 2025 20:10:16 +0000 (0:00:16.309) 0:02:05.725 *********** 2025-06-22 20:10:18.254475 | orchestrator | =============================================================================== 2025-06-22 20:10:18.254494 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 17.37s 2025-06-22 20:10:18.254505 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 16.31s 2025-06-22 20:10:18.254516 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 15.42s 2025-06-22 20:10:18.254526 | orchestrator | magnum : Copying over magnum.conf -------------------------------------- 11.69s 2025-06-22 20:10:18.254537 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.34s 2025-06-22 20:10:18.254548 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 4.10s 2025-06-22 20:10:18.254559 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.88s 2025-06-22 20:10:18.254569 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.78s 2025-06-22 20:10:18.254580 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.55s 2025-06-22 20:10:18.254591 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.54s 2025-06-22 20:10:18.254602 | orchestrator | magnum : Copying over config.json files for services -------------------- 3.50s 2025-06-22 20:10:18.254612 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 3.50s 2025-06-22 20:10:18.254623 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 3.38s 2025-06-22 20:10:18.254634 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.36s 2025-06-22 20:10:18.254645 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.26s 2025-06-22 20:10:18.254655 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 3.15s 2025-06-22 20:10:18.254666 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.76s 2025-06-22 20:10:18.254677 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.33s 2025-06-22 20:10:18.254687 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.11s 2025-06-22 20:10:18.254698 | orchestrator | magnum : Copying over existing policy file ------------------------------ 2.11s 2025-06-22 20:10:18.254709 | orchestrator | 2025-06-22 20:10:18 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:10:18.254720 | orchestrator | 2025-06-22 20:10:18 | INFO  | Task b2c6a072-e0d1-4613-992c-f570f82e626a is in state STARTED 2025-06-22 20:10:18.254824 | orchestrator | 2025-06-22 20:10:18 | INFO  | Task 9ad8b6cf-4278-45ca-8d5b-2f3dea7e4b25 is in state STARTED 2025-06-22 20:10:18.255764 | orchestrator | 2025-06-22 20:10:18 | INFO  | Task 4136a3fe-1dd6-4ce5-bf47-592f07e29278 is in state STARTED 2025-06-22 20:10:18.255864 | orchestrator | 
2025-06-22 20:10:18 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:10:21.288363 | orchestrator | 2025-06-22 20:10:21 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:10:21.290391 | orchestrator | 2025-06-22 20:10:21 | INFO  | Task b2c6a072-e0d1-4613-992c-f570f82e626a is in state STARTED 2025-06-22 20:10:21.292888 | orchestrator | 2025-06-22 20:10:21 | INFO  | Task 9ad8b6cf-4278-45ca-8d5b-2f3dea7e4b25 is in state STARTED 2025-06-22 20:10:21.293530 | orchestrator | 2025-06-22 20:10:21 | INFO  | Task 4136a3fe-1dd6-4ce5-bf47-592f07e29278 is in state STARTED 2025-06-22 20:10:21.294095 | orchestrator | 2025-06-22 20:10:21 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:10:24.339712 | orchestrator | 2025-06-22 20:10:24 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:10:24.341133 | orchestrator | 2025-06-22 20:10:24 | INFO  | Task b2c6a072-e0d1-4613-992c-f570f82e626a is in state STARTED 2025-06-22 20:10:24.341834 | orchestrator | 2025-06-22 20:10:24 | INFO  | Task 9ad8b6cf-4278-45ca-8d5b-2f3dea7e4b25 is in state STARTED 2025-06-22 20:10:24.343339 | orchestrator | 2025-06-22 20:10:24 | INFO  | Task 4136a3fe-1dd6-4ce5-bf47-592f07e29278 is in state STARTED 2025-06-22 20:10:24.343458 | orchestrator | 2025-06-22 20:10:24 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:10:27.380495 | orchestrator | 2025-06-22 20:10:27 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:10:27.380871 | orchestrator | 2025-06-22 20:10:27 | INFO  | Task b2c6a072-e0d1-4613-992c-f570f82e626a is in state STARTED 2025-06-22 20:10:27.380919 | orchestrator | 2025-06-22 20:10:27 | INFO  | Task 9ad8b6cf-4278-45ca-8d5b-2f3dea7e4b25 is in state STARTED 2025-06-22 20:10:27.383218 | orchestrator | 2025-06-22 20:10:27 | INFO  | Task 4136a3fe-1dd6-4ce5-bf47-592f07e29278 is in state STARTED 2025-06-22 20:10:27.383261 | orchestrator | 2025-06-22 20:10:27 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:10:30.413882 | orchestrator | 2025-06-22 20:10:30 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:10:30.413969 | orchestrator | 2025-06-22 20:10:30 | INFO  | Task b2c6a072-e0d1-4613-992c-f570f82e626a is in state STARTED 2025-06-22 20:10:30.413984 | orchestrator | 2025-06-22 20:10:30 | INFO  | Task 9ad8b6cf-4278-45ca-8d5b-2f3dea7e4b25 is in state STARTED 2025-06-22 20:10:30.417552 | orchestrator | 2025-06-22 20:10:30 | INFO  | Task 4136a3fe-1dd6-4ce5-bf47-592f07e29278 is in state STARTED 2025-06-22 20:10:30.417586 | orchestrator | 2025-06-22 20:10:30 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:10:33.440270 | orchestrator | 2025-06-22 20:10:33 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:10:33.440737 | orchestrator | 2025-06-22 20:10:33 | INFO  | Task b2c6a072-e0d1-4613-992c-f570f82e626a is in state STARTED 2025-06-22 20:10:33.441347 | orchestrator | 2025-06-22 20:10:33 | INFO  | Task 9ad8b6cf-4278-45ca-8d5b-2f3dea7e4b25 is in state STARTED 2025-06-22 20:10:33.442239 | orchestrator | 2025-06-22 20:10:33 | INFO  | Task 4136a3fe-1dd6-4ce5-bf47-592f07e29278 is in state STARTED 2025-06-22 20:10:33.442469 | orchestrator | 2025-06-22 20:10:33 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:10:36.486835 | orchestrator | 2025-06-22 20:10:36 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:10:36.491629 | orchestrator | 2025-06-22 20:10:36 | INFO 
 | Task b2c6a072-e0d1-4613-992c-f570f82e626a is in state STARTED 2025-06-22 20:10:36.497780 | orchestrator | 2025-06-22 20:10:36 | INFO  | Task 9ad8b6cf-4278-45ca-8d5b-2f3dea7e4b25 is in state STARTED 2025-06-22 20:10:36.504127 | orchestrator | 2025-06-22 20:10:36 | INFO  | Task 4136a3fe-1dd6-4ce5-bf47-592f07e29278 is in state STARTED 2025-06-22 20:10:36.504173 | orchestrator | 2025-06-22 20:10:36 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:10:39.552309 | orchestrator | 2025-06-22 20:10:39 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:10:39.552395 | orchestrator | 2025-06-22 20:10:39 | INFO  | Task b2c6a072-e0d1-4613-992c-f570f82e626a is in state STARTED 2025-06-22 20:10:39.556489 | orchestrator | 2025-06-22 20:10:39 | INFO  | Task 9ad8b6cf-4278-45ca-8d5b-2f3dea7e4b25 is in state STARTED 2025-06-22 20:10:39.559188 | orchestrator | 2025-06-22 20:10:39 | INFO  | Task 4136a3fe-1dd6-4ce5-bf47-592f07e29278 is in state STARTED 2025-06-22 20:10:39.559345 | orchestrator | 2025-06-22 20:10:39 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:10:42.603136 | orchestrator | 2025-06-22 20:10:42 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:10:42.604175 | orchestrator | 2025-06-22 20:10:42 | INFO  | Task b2c6a072-e0d1-4613-992c-f570f82e626a is in state STARTED 2025-06-22 20:10:42.604631 | orchestrator | 2025-06-22 20:10:42 | INFO  | Task 9ad8b6cf-4278-45ca-8d5b-2f3dea7e4b25 is in state STARTED 2025-06-22 20:10:42.605744 | orchestrator | 2025-06-22 20:10:42 | INFO  | Task 4136a3fe-1dd6-4ce5-bf47-592f07e29278 is in state STARTED 2025-06-22 20:10:42.605766 | orchestrator | 2025-06-22 20:10:42 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:10:45.637889 | orchestrator | 2025-06-22 20:10:45 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:10:45.638231 | orchestrator | 2025-06-22 20:10:45 | INFO  | Task b2c6a072-e0d1-4613-992c-f570f82e626a is in state STARTED 2025-06-22 20:10:45.638910 | orchestrator | 2025-06-22 20:10:45 | INFO  | Task 9ad8b6cf-4278-45ca-8d5b-2f3dea7e4b25 is in state STARTED 2025-06-22 20:10:45.639538 | orchestrator | 2025-06-22 20:10:45 | INFO  | Task 4136a3fe-1dd6-4ce5-bf47-592f07e29278 is in state STARTED 2025-06-22 20:10:45.639612 | orchestrator | 2025-06-22 20:10:45 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:10:48.670885 | orchestrator | 2025-06-22 20:10:48 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:10:48.671885 | orchestrator | 2025-06-22 20:10:48 | INFO  | Task b2c6a072-e0d1-4613-992c-f570f82e626a is in state STARTED 2025-06-22 20:10:48.672283 | orchestrator | 2025-06-22 20:10:48 | INFO  | Task 9ad8b6cf-4278-45ca-8d5b-2f3dea7e4b25 is in state STARTED 2025-06-22 20:10:48.672917 | orchestrator | 2025-06-22 20:10:48 | INFO  | Task 4136a3fe-1dd6-4ce5-bf47-592f07e29278 is in state STARTED 2025-06-22 20:10:48.672931 | orchestrator | 2025-06-22 20:10:48 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:10:51.707327 | orchestrator | 2025-06-22 20:10:51 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:10:51.707530 | orchestrator | 2025-06-22 20:10:51 | INFO  | Task b2c6a072-e0d1-4613-992c-f570f82e626a is in state STARTED 2025-06-22 20:10:51.708099 | orchestrator | 2025-06-22 20:10:51 | INFO  | Task 9ad8b6cf-4278-45ca-8d5b-2f3dea7e4b25 is in state STARTED 2025-06-22 20:10:51.708621 | orchestrator | 2025-06-22 20:10:51 | INFO  | 
Task 4136a3fe-1dd6-4ce5-bf47-592f07e29278 is in state STARTED 2025-06-22 20:10:51.708649 | orchestrator | 2025-06-22 20:10:51 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:10:54.730475 | orchestrator | 2025-06-22 20:10:54 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:10:54.730763 | orchestrator | 2025-06-22 20:10:54 | INFO  | Task b2c6a072-e0d1-4613-992c-f570f82e626a is in state STARTED 2025-06-22 20:10:54.731439 | orchestrator | 2025-06-22 20:10:54 | INFO  | Task 9ad8b6cf-4278-45ca-8d5b-2f3dea7e4b25 is in state STARTED 2025-06-22 20:10:54.731939 | orchestrator | 2025-06-22 20:10:54 | INFO  | Task 4136a3fe-1dd6-4ce5-bf47-592f07e29278 is in state STARTED 2025-06-22 20:10:54.731962 | orchestrator | 2025-06-22 20:10:54 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:10:57.767206 | orchestrator | 2025-06-22 20:10:57 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:10:57.767691 | orchestrator | 2025-06-22 20:10:57 | INFO  | Task b2c6a072-e0d1-4613-992c-f570f82e626a is in state STARTED 2025-06-22 20:10:57.768404 | orchestrator | 2025-06-22 20:10:57 | INFO  | Task 9ad8b6cf-4278-45ca-8d5b-2f3dea7e4b25 is in state STARTED 2025-06-22 20:10:57.769152 | orchestrator | 2025-06-22 20:10:57 | INFO  | Task 4136a3fe-1dd6-4ce5-bf47-592f07e29278 is in state STARTED 2025-06-22 20:10:57.770272 | orchestrator | 2025-06-22 20:10:57 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:11:00.802478 | orchestrator | 2025-06-22 20:11:00 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:11:00.802566 | orchestrator | 2025-06-22 20:11:00 | INFO  | Task b2c6a072-e0d1-4613-992c-f570f82e626a is in state STARTED 2025-06-22 20:11:00.805555 | orchestrator | 2025-06-22 20:11:00 | INFO  | Task 9ad8b6cf-4278-45ca-8d5b-2f3dea7e4b25 is in state STARTED 2025-06-22 20:11:00.806150 | orchestrator | 2025-06-22 20:11:00 | INFO  | Task 4136a3fe-1dd6-4ce5-bf47-592f07e29278 is in state STARTED 2025-06-22 20:11:00.806176 | orchestrator | 2025-06-22 20:11:00 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:11:03.838307 | orchestrator | 2025-06-22 20:11:03 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:11:03.838836 | orchestrator | 2025-06-22 20:11:03 | INFO  | Task b2c6a072-e0d1-4613-992c-f570f82e626a is in state STARTED 2025-06-22 20:11:03.844678 | orchestrator | 2025-06-22 20:11:03 | INFO  | Task 9ad8b6cf-4278-45ca-8d5b-2f3dea7e4b25 is in state STARTED 2025-06-22 20:11:03.845146 | orchestrator | 2025-06-22 20:11:03 | INFO  | Task 4136a3fe-1dd6-4ce5-bf47-592f07e29278 is in state STARTED 2025-06-22 20:11:03.846124 | orchestrator | 2025-06-22 20:11:03 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:11:06.866737 | orchestrator | 2025-06-22 20:11:06 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:11:06.867748 | orchestrator | 2025-06-22 20:11:06 | INFO  | Task b2c6a072-e0d1-4613-992c-f570f82e626a is in state STARTED 2025-06-22 20:11:06.868348 | orchestrator | 2025-06-22 20:11:06 | INFO  | Task 9ad8b6cf-4278-45ca-8d5b-2f3dea7e4b25 is in state STARTED 2025-06-22 20:11:06.870648 | orchestrator | 2025-06-22 20:11:06 | INFO  | Task 4136a3fe-1dd6-4ce5-bf47-592f07e29278 is in state STARTED 2025-06-22 20:11:06.870720 | orchestrator | 2025-06-22 20:11:06 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:11:09.917493 | orchestrator | 2025-06-22 20:11:09 | INFO  | Task 
c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:11:09.918154 | orchestrator | 2025-06-22 20:11:09 | INFO  | Task b2c6a072-e0d1-4613-992c-f570f82e626a is in state STARTED 2025-06-22 20:11:09.919141 | orchestrator | 2025-06-22 20:11:09 | INFO  | Task 9ad8b6cf-4278-45ca-8d5b-2f3dea7e4b25 is in state STARTED 2025-06-22 20:11:09.921772 | orchestrator | 2025-06-22 20:11:09 | INFO  | Task 4136a3fe-1dd6-4ce5-bf47-592f07e29278 is in state STARTED 2025-06-22 20:11:09.922004 | orchestrator | 2025-06-22 20:11:09 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:11:12.948494 | orchestrator | 2025-06-22 20:11:12 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:11:12.948772 | orchestrator | 2025-06-22 20:11:12 | INFO  | Task b2c6a072-e0d1-4613-992c-f570f82e626a is in state STARTED 2025-06-22 20:11:12.949488 | orchestrator | 2025-06-22 20:11:12 | INFO  | Task 9ad8b6cf-4278-45ca-8d5b-2f3dea7e4b25 is in state STARTED 2025-06-22 20:11:12.950176 | orchestrator | 2025-06-22 20:11:12 | INFO  | Task 4136a3fe-1dd6-4ce5-bf47-592f07e29278 is in state STARTED 2025-06-22 20:11:12.950199 | orchestrator | 2025-06-22 20:11:12 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:11:15.993188 | orchestrator | 2025-06-22 20:11:15 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:11:15.994891 | orchestrator | 2025-06-22 20:11:15 | INFO  | Task b2c6a072-e0d1-4613-992c-f570f82e626a is in state STARTED 2025-06-22 20:11:15.997352 | orchestrator | 2025-06-22 20:11:15 | INFO  | Task 9ad8b6cf-4278-45ca-8d5b-2f3dea7e4b25 is in state STARTED 2025-06-22 20:11:16.001671 | orchestrator | 2025-06-22 20:11:15 | INFO  | Task 4136a3fe-1dd6-4ce5-bf47-592f07e29278 is in state STARTED 2025-06-22 20:11:16.001783 | orchestrator | 2025-06-22 20:11:15 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:11:19.051537 | orchestrator | 2025-06-22 20:11:19 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:11:19.053766 | orchestrator | 2025-06-22 20:11:19 | INFO  | Task b2c6a072-e0d1-4613-992c-f570f82e626a is in state STARTED 2025-06-22 20:11:19.056396 | orchestrator | 2025-06-22 20:11:19 | INFO  | Task 9ad8b6cf-4278-45ca-8d5b-2f3dea7e4b25 is in state STARTED 2025-06-22 20:11:19.058247 | orchestrator | 2025-06-22 20:11:19 | INFO  | Task 4136a3fe-1dd6-4ce5-bf47-592f07e29278 is in state STARTED 2025-06-22 20:11:19.058288 | orchestrator | 2025-06-22 20:11:19 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:11:22.102206 | orchestrator | 2025-06-22 20:11:22 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:11:22.105778 | orchestrator | 2025-06-22 20:11:22 | INFO  | Task b2c6a072-e0d1-4613-992c-f570f82e626a is in state STARTED 2025-06-22 20:11:22.107538 | orchestrator | 2025-06-22 20:11:22 | INFO  | Task 9ad8b6cf-4278-45ca-8d5b-2f3dea7e4b25 is in state STARTED 2025-06-22 20:11:22.110266 | orchestrator | 2025-06-22 20:11:22 | INFO  | Task 4136a3fe-1dd6-4ce5-bf47-592f07e29278 is in state STARTED 2025-06-22 20:11:22.110341 | orchestrator | 2025-06-22 20:11:22 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:11:25.166385 | orchestrator | 2025-06-22 20:11:25 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:11:25.167897 | orchestrator | 2025-06-22 20:11:25 | INFO  | Task b2c6a072-e0d1-4613-992c-f570f82e626a is in state STARTED 2025-06-22 20:11:25.170254 | orchestrator | 2025-06-22 20:11:25 | INFO  | Task 
9ad8b6cf-4278-45ca-8d5b-2f3dea7e4b25 is in state STARTED
2025-06-22 20:11:25.172830 | orchestrator | 2025-06-22 20:11:25 | INFO  | Task 4136a3fe-1dd6-4ce5-bf47-592f07e29278 is in state STARTED
2025-06-22 20:11:25.174129 | orchestrator | 2025-06-22 20:11:25 | INFO  | Wait 1 second(s) until the next check
2025-06-22 20:11:28.224459 | orchestrator | 2025-06-22 20:11:28 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED
2025-06-22 20:11:28.224564 | orchestrator | 2025-06-22 20:11:28 | INFO  | Task b2c6a072-e0d1-4613-992c-f570f82e626a is in state STARTED
2025-06-22 20:11:28.224579 | orchestrator | 2025-06-22 20:11:28 | INFO  | Task 9ad8b6cf-4278-45ca-8d5b-2f3dea7e4b25 is in state STARTED
2025-06-22 20:11:28.226398 | orchestrator | 2025-06-22 20:11:28 | INFO  | Task 4136a3fe-1dd6-4ce5-bf47-592f07e29278 is in state STARTED
2025-06-22 20:11:28.226448 | orchestrator | 2025-06-22 20:11:28 | INFO  | Wait 1 second(s) until the next check
2025-06-22 20:11:31.272469 | orchestrator | 2025-06-22 20:11:31 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED
2025-06-22 20:11:31.273748 | orchestrator | 2025-06-22 20:11:31 | INFO  | Task b2c6a072-e0d1-4613-992c-f570f82e626a is in state STARTED
2025-06-22 20:11:31.275690 | orchestrator | 2025-06-22 20:11:31 | INFO  | Task 9ad8b6cf-4278-45ca-8d5b-2f3dea7e4b25 is in state STARTED
2025-06-22 20:11:31.277131 | orchestrator | 2025-06-22 20:11:31 | INFO  | Task 4136a3fe-1dd6-4ce5-bf47-592f07e29278 is in state STARTED
2025-06-22 20:11:31.277196 | orchestrator | 2025-06-22 20:11:31 | INFO  | Wait 1 second(s) until the next check
2025-06-22 20:11:34.318990 | orchestrator | 2025-06-22 20:11:34 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED
2025-06-22 20:11:34.320496 | orchestrator | 2025-06-22 20:11:34 | INFO  | Task b2c6a072-e0d1-4613-992c-f570f82e626a is in state STARTED
2025-06-22 20:11:34.321978 | orchestrator | 2025-06-22 20:11:34 | INFO  | Task 9ad8b6cf-4278-45ca-8d5b-2f3dea7e4b25 is in state STARTED
2025-06-22 20:11:34.323414 | orchestrator | 2025-06-22 20:11:34 | INFO  | Task 4136a3fe-1dd6-4ce5-bf47-592f07e29278 is in state STARTED
2025-06-22 20:11:34.323441 | orchestrator | 2025-06-22 20:11:34 | INFO  | Wait 1 second(s) until the next check
2025-06-22 20:11:37.377260 | orchestrator | 2025-06-22 20:11:37 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED
2025-06-22 20:11:37.378936 | orchestrator | 2025-06-22 20:11:37 | INFO  | Task b2c6a072-e0d1-4613-992c-f570f82e626a is in state STARTED
2025-06-22 20:11:37.381826 | orchestrator | 2025-06-22 20:11:37 | INFO  | Task 9ad8b6cf-4278-45ca-8d5b-2f3dea7e4b25 is in state STARTED
2025-06-22 20:11:37.384287 | orchestrator | 2025-06-22 20:11:37 | INFO  | Task 4136a3fe-1dd6-4ce5-bf47-592f07e29278 is in state STARTED
2025-06-22 20:11:37.384347 | orchestrator | 2025-06-22 20:11:37 | INFO  | Wait 1 second(s) until the next check
2025-06-22 20:11:40.432584 | orchestrator | 2025-06-22 20:11:40 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED
2025-06-22 20:11:40.432659 | orchestrator | 2025-06-22 20:11:40 | INFO  | Task b2c6a072-e0d1-4613-992c-f570f82e626a is in state STARTED
2025-06-22 20:11:40.432670 | orchestrator | 2025-06-22 20:11:40 | INFO  | Task 9ad8b6cf-4278-45ca-8d5b-2f3dea7e4b25 is in state STARTED
2025-06-22 20:11:40.432680 | orchestrator | 2025-06-22 20:11:40 | INFO  | Task 4136a3fe-1dd6-4ce5-bf47-592f07e29278 is in state STARTED
2025-06-22 20:11:40.432741 | orchestrator | 2025-06-22 20:11:40 | INFO  | Wait 1 second(s) until the next check
2025-06-22 20:11:43.475949 | orchestrator | 2025-06-22 20:11:43 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED
2025-06-22 20:11:43.476490 | orchestrator | 2025-06-22 20:11:43 | INFO  | Task b2c6a072-e0d1-4613-992c-f570f82e626a is in state STARTED
2025-06-22 20:11:43.477463 | orchestrator | 2025-06-22 20:11:43 | INFO  | Task 9ad8b6cf-4278-45ca-8d5b-2f3dea7e4b25 is in state STARTED
2025-06-22 20:11:43.478737 | orchestrator | 2025-06-22 20:11:43 | INFO  | Task 4136a3fe-1dd6-4ce5-bf47-592f07e29278 is in state STARTED
2025-06-22 20:11:43.478794 | orchestrator | 2025-06-22 20:11:43 | INFO  | Wait 1 second(s) until the next check
2025-06-22 20:11:46.528711 | orchestrator | 2025-06-22 20:11:46 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED
2025-06-22 20:11:46.529810 | orchestrator | 2025-06-22 20:11:46 | INFO  | Task b2c6a072-e0d1-4613-992c-f570f82e626a is in state STARTED
2025-06-22 20:11:46.531446 | orchestrator | 2025-06-22 20:11:46 | INFO  | Task 9ad8b6cf-4278-45ca-8d5b-2f3dea7e4b25 is in state STARTED
2025-06-22 20:11:46.533000 | orchestrator | 2025-06-22 20:11:46 | INFO  | Task 4136a3fe-1dd6-4ce5-bf47-592f07e29278 is in state STARTED
2025-06-22 20:11:46.533359 | orchestrator | 2025-06-22 20:11:46 | INFO  | Wait 1 second(s) until the next check
2025-06-22 20:11:49.580668 | orchestrator | 2025-06-22 20:11:49 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED
2025-06-22 20:11:49.582172 | orchestrator | 2025-06-22 20:11:49 | INFO  | Task b2c6a072-e0d1-4613-992c-f570f82e626a is in state STARTED
2025-06-22 20:11:49.583762 | orchestrator | 2025-06-22 20:11:49 | INFO  | Task 9ad8b6cf-4278-45ca-8d5b-2f3dea7e4b25 is in state STARTED
2025-06-22 20:11:49.585382 | orchestrator | 2025-06-22 20:11:49 | INFO  | Task 4136a3fe-1dd6-4ce5-bf47-592f07e29278 is in state STARTED
2025-06-22 20:11:49.585433 | orchestrator | 2025-06-22 20:11:49 | INFO  | Wait 1 second(s) until the next check
2025-06-22 20:11:52.628644 | orchestrator | 2025-06-22 20:11:52 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED
2025-06-22 20:11:52.631230 | orchestrator | 2025-06-22 20:11:52 | INFO  | Task b2c6a072-e0d1-4613-992c-f570f82e626a is in state STARTED
2025-06-22 20:11:52.631288 | orchestrator | 2025-06-22 20:11:52 | INFO  | Task 9ad8b6cf-4278-45ca-8d5b-2f3dea7e4b25 is in state STARTED
2025-06-22 20:11:52.632877 | orchestrator | 2025-06-22 20:11:52 | INFO  | Task 4136a3fe-1dd6-4ce5-bf47-592f07e29278 is in state STARTED
2025-06-22 20:11:52.632956 | orchestrator | 2025-06-22 20:11:52 | INFO  | Wait 1 second(s) until the next check
2025-06-22 20:11:55.677090 | orchestrator | 2025-06-22 20:11:55 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED
2025-06-22 20:11:55.678097 | orchestrator | 2025-06-22 20:11:55 | INFO  | Task b2c6a072-e0d1-4613-992c-f570f82e626a is in state STARTED
2025-06-22 20:11:55.679164 | orchestrator | 2025-06-22 20:11:55 | INFO  | Task 9ad8b6cf-4278-45ca-8d5b-2f3dea7e4b25 is in state STARTED
2025-06-22 20:11:55.680877 | orchestrator | 2025-06-22 20:11:55 | INFO  | Task 4136a3fe-1dd6-4ce5-bf47-592f07e29278 is in state STARTED
2025-06-22 20:11:55.680928 | orchestrator | 2025-06-22 20:11:55 | INFO  | Wait 1 second(s) until the next check
2025-06-22 20:11:58.740936 | orchestrator | 2025-06-22 20:11:58 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED
2025-06-22 20:11:58.742921 | orchestrator | 2025-06-22 20:11:58 | INFO  | Task b2c6a072-e0d1-4613-992c-f570f82e626a is in state STARTED
2025-06-22 20:11:58.744461 | orchestrator | 2025-06-22 20:11:58 | INFO  | Task 9ad8b6cf-4278-45ca-8d5b-2f3dea7e4b25 is in state STARTED
2025-06-22 20:11:58.746161 | orchestrator | 2025-06-22 20:11:58 | INFO  | Task 4136a3fe-1dd6-4ce5-bf47-592f07e29278 is in state STARTED
2025-06-22 20:11:58.746262 | orchestrator | 2025-06-22 20:11:58 | INFO  | Wait 1 second(s) until the next check
2025-06-22 20:12:01.775167 | orchestrator | 2025-06-22 20:12:01 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED
2025-06-22 20:12:01.775385 | orchestrator | 2025-06-22 20:12:01 | INFO  | Task b2c6a072-e0d1-4613-992c-f570f82e626a is in state STARTED
2025-06-22 20:12:01.776485 | orchestrator | 2025-06-22 20:12:01 | INFO  | Task 9ad8b6cf-4278-45ca-8d5b-2f3dea7e4b25 is in state STARTED
2025-06-22 20:12:01.777885 | orchestrator | 2025-06-22 20:12:01 | INFO  | Task 4136a3fe-1dd6-4ce5-bf47-592f07e29278 is in state STARTED
2025-06-22 20:12:01.777945 | orchestrator | 2025-06-22 20:12:01 | INFO  | Wait 1 second(s) until the next check
2025-06-22 20:12:04.809569 | orchestrator | 2025-06-22 20:12:04 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED
2025-06-22 20:12:04.814474 | orchestrator | 2025-06-22 20:12:04 | INFO  | Task b2c6a072-e0d1-4613-992c-f570f82e626a is in state STARTED
2025-06-22 20:12:04.815717 | orchestrator |
2025-06-22 20:12:04.815759 | orchestrator |
2025-06-22 20:12:04.815773 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-06-22 20:12:04.815786 | orchestrator |
2025-06-22 20:12:04.815798 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-06-22 20:12:04.815810 | orchestrator | Sunday 22 June 2025 20:08:36 +0000 (0:00:00.248) 0:00:00.248 ***********
2025-06-22 20:12:04.815823 | orchestrator | ok: [testbed-manager]
2025-06-22 20:12:04.815836 | orchestrator | ok: [testbed-node-0]
2025-06-22 20:12:04.815848 | orchestrator | ok: [testbed-node-1]
2025-06-22 20:12:04.815860 | orchestrator | ok: [testbed-node-2]
2025-06-22 20:12:04.815872 | orchestrator | ok: [testbed-node-3]
2025-06-22 20:12:04.815884 | orchestrator | ok: [testbed-node-4]
2025-06-22 20:12:04.815895 | orchestrator | ok: [testbed-node-5]
2025-06-22 20:12:04.815907 | orchestrator |
2025-06-22 20:12:04.815919 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-06-22 20:12:04.815955 | orchestrator | Sunday 22 June 2025 20:08:37 +0000 (0:00:00.709) 0:00:00.958 ***********
2025-06-22 20:12:04.815969 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True)
2025-06-22 20:12:04.815980 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True)
2025-06-22 20:12:04.816052 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True)
2025-06-22 20:12:04.816068 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True)
2025-06-22 20:12:04.816079 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True)
2025-06-22 20:12:04.816090 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True)
2025-06-22 20:12:04.816171 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True)
2025-06-22 20:12:04.816185 | orchestrator |
2025-06-22 20:12:04.816196 | orchestrator | PLAY [Apply role prometheus] ***************************************************
2025-06-22 20:12:04.816207 | orchestrator |
2025-06-22 20:12:04.816218 | orchestrator |
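The repeated status lines above come from the deployment tooling polling its queued tasks: it checks the state of each task roughly once per second and keeps waiting while any of them is still STARTED. The sketch below shows that polling pattern in a minimal, self-contained form; the function name wait_for_tasks and the get_state callback are illustrative assumptions, not the actual osism client API.

```python
import logging
import time
from typing import Callable, Iterable

logging.basicConfig(format="%(asctime)s | %(levelname)s | %(message)s",
                    level=logging.INFO)
log = logging.getLogger("poller")


def wait_for_tasks(task_ids: Iterable[str],
                   get_state: Callable[[str], str],
                   interval: float = 1.0) -> None:
    """Poll task states until none of them is PENDING or STARTED.

    `get_state` is a stand-in for however the real client looks up a task
    (for example a Celery AsyncResult); it must return a state string such
    as "STARTED" or "SUCCESS".
    """
    pending = list(task_ids)
    while pending:
        still_running = []
        for task_id in pending:
            state = get_state(task_id)
            # Mirrors the "Task <uuid> is in state <STATE>" lines in the log.
            log.info("Task %s is in state %s", task_id, state)
            if state in ("PENDING", "STARTED"):
                still_running.append(task_id)
        pending = still_running
        if pending:
            # Mirrors the "Wait 1 second(s) until the next check" lines.
            log.info("Wait %d second(s) until the next check", int(interval))
            time.sleep(interval)
```

Fed with the four task UUIDs seen above and a suitable state lookup, a loop of this shape produces output of the same form as the excerpt, ending once the queued plays (such as the prometheus deployment that follows) report a final state.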
TASK [prometheus : include_tasks] ********************************************** 2025-06-22 20:12:04.816229 | orchestrator | Sunday 22 June 2025 20:08:37 +0000 (0:00:00.634) 0:00:01.593 *********** 2025-06-22 20:12:04.816242 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 20:12:04.816255 | orchestrator | 2025-06-22 20:12:04.816266 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2025-06-22 20:12:04.816277 | orchestrator | Sunday 22 June 2025 20:08:39 +0000 (0:00:01.292) 0:00:02.885 *********** 2025-06-22 20:12:04.816305 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 20:12:04.816323 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-22 20:12:04.816338 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 20:12:04.816352 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 20:12:04.816393 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 20:12:04.816408 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:12:04.816423 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:12:04.816436 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 20:12:04.816455 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 20:12:04.816469 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:12:04.816482 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 20:12:04.816501 | orchestrator | changed: [testbed-manager] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 20:12:04.816640 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:12:04.816654 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:12:04.816665 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 20:12:04.816685 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-22 20:12:04.816699 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 
'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 20:12:04.816711 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:12:04.816739 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 20:12:04.816752 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 20:12:04.816763 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 20:12:04.816775 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-22 20:12:04.816791 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:12:04.816803 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-22 20:12:04.816815 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-22 20:12:04.816826 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 20:12:04.816852 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:12:04.816864 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:12:04.816876 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:12:04.816887 | orchestrator | 2025-06-22 20:12:04.816899 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-06-22 20:12:04.816911 | orchestrator | Sunday 22 June 2025 20:08:41 +0000 (0:00:02.402) 0:00:05.288 *********** 2025-06-22 20:12:04.816922 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 20:12:04.816933 | orchestrator | 2025-06-22 20:12:04.816945 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2025-06-22 20:12:04.816969 | orchestrator | Sunday 22 June 2025 20:08:42 +0000 (0:00:01.193) 0:00:06.481 *********** 2025-06-22 20:12:04.816986 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-22 20:12:04.817029 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 20:12:04.817093 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 20:12:04.817117 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 20:12:04.817129 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 20:12:04.817141 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 20:12:04.817152 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 20:12:04.817170 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 20:12:04.817182 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:12:04.817219 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:12:04.817231 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:12:04.817250 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 20:12:04.817273 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 20:12:04.817285 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 20:12:04.817297 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 20:12:04.817313 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:12:04.817357 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:12:04.817382 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:12:04.817402 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-22 20:12:04.817415 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-22 20:12:04.817427 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-22 20:12:04.817438 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-22 20:12:04.817456 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 
'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:12:04.817475 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 20:12:04.817486 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 20:12:04.818245 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 20:12:04.818277 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:12:04.818289 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:12:04.818301 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 
'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:12:04.818313 | orchestrator | 2025-06-22 20:12:04.818325 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2025-06-22 20:12:04.818337 | orchestrator | Sunday 22 June 2025 20:08:48 +0000 (0:00:05.523) 0:00:12.005 *********** 2025-06-22 20:12:04.818356 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-06-22 20:12:04.818380 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-22 20:12:04.818393 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-22 20:12:04.818416 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-06-22 20:12:04.818489 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 20:12:04.818522 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-22 20:12:04.818550 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 20:12:04.818570 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 20:12:04.818582 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-22 20:12:04.818594 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 20:12:04.818606 | 
orchestrator | skipping: [testbed-manager] 2025-06-22 20:12:04.818626 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-22 20:12:04.818638 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 20:12:04.818649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 20:12:04.818661 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-22 20:12:04.818684 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-22 20:12:04.818696 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 20:12:04.818708 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': 
{'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 20:12:04.818720 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 20:12:04.818738 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-22 20:12:04.818750 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 20:12:04.818925 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:12:04.818940 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:12:04.818953 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:12:04.818967 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-22 20:12:04.819059 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-22 20:12:04.819076 | orchestrator 
| skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-22 20:12:04.819088 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:12:04.819099 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-22 20:12:04.819111 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-22 20:12:04.819131 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-22 20:12:04.819143 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:12:04.819154 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-22 20:12:04.819165 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-22 
20:12:04.819184 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-22 20:12:04.819196 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:12:04.819207 | orchestrator | 2025-06-22 20:12:04.819218 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2025-06-22 20:12:04.819234 | orchestrator | Sunday 22 June 2025 20:08:51 +0000 (0:00:02.670) 0:00:14.675 *********** 2025-06-22 20:12:04.819246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-22 20:12:04.819258 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 20:12:04.819269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 20:12:04.819287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-22 20:12:04.819299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 20:12:04.819311 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-06-22 20:12:04.819328 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-22 20:12:04.819345 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-22 20:12:04.819358 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-06-22 20:12:04.819377 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 20:12:04.819389 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:12:04.819400 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-22 20:12:04.819412 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 20:12:04.819434 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 20:12:04.819451 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-22 20:12:04.819463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 20:12:04.819474 | orchestrator | skipping: [testbed-manager] 2025-06-22 20:12:04.819485 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-22 20:12:04.819497 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 20:12:04.819514 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 20:12:04.819526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-22 20:12:04.819544 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-06-22 20:12:04.819555 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:12:04.819566 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:12:04.819577 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-22 20:12:04.819593 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-22 20:12:04.819603 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-22 20:12:04.819613 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:12:04.819623 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-22 20:12:04.819638 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-22 20:12:04.819649 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-22 20:12:04.819664 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:12:04.819674 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-06-22 20:12:04.819684 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-06-22 20:12:04.819699 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-06-22 20:12:04.819709 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:12:04.819719 | orchestrator | 2025-06-22 20:12:04.819729 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2025-06-22 20:12:04.819739 | orchestrator | Sunday 22 June 2025 20:08:54 +0000 (0:00:03.115) 0:00:17.791 *********** 2025-06-22 20:12:04.819749 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-22 20:12:04.819760 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 20:12:04.819775 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 20:12:04.819791 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 20:12:04.819802 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 20:12:04.819812 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 20:12:04.819826 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 20:12:04.819837 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:12:04.819847 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 20:12:04.819857 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:12:04.819878 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:12:04.819889 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 20:12:04.819900 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 20:12:04.819910 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 20:12:04.819924 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 20:12:04.819935 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-22 20:12:04.819952 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:12:04.819969 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-22 20:12:04.819979 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:12:04.819989 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:12:04.820023 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-22 20:12:04.820038 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 20:12:04.820048 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-22 20:12:04.820059 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:12:04.820081 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 20:12:04.820091 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 20:12:04.820101 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:12:04.820112 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:12:04.820127 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:12:04.820137 | orchestrator | 2025-06-22 20:12:04.820147 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2025-06-22 20:12:04.820157 | orchestrator | Sunday 22 June 2025 20:09:01 +0000 (0:00:07.565) 0:00:25.356 *********** 2025-06-22 20:12:04.820167 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-22 20:12:04.820177 | orchestrator | 2025-06-22 20:12:04.820187 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2025-06-22 20:12:04.820197 | orchestrator | Sunday 22 June 2025 20:09:02 +0000 (0:00:00.989) 0:00:26.345 *********** 2025-06-22 20:12:04.820207 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1088892, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.354946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.820223 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1088892, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.354946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.820240 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1088892, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.354946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.820250 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 
3, 'inode': 1088892, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.354946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 20:12:04.820261 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1088869, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3519459, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.820271 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1088869, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3519459, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.820288 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1088892, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.354946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.820299 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1088892, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.354946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.820315 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1088838, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3469458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.820330 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1088869, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3519459, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.820340 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1088838, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3469458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.820350 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1088892, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.354946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.820360 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1088869, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3519459, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.820375 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1088869, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3519459, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.820385 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1088838, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3469458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 
20:12:04.820404 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1088839, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3469458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.820421 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1088839, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3469458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.820432 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1088869, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3519459, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 20:12:04.820442 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1088838, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3469458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.820452 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1088869, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3519459, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.820467 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1088839, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3469458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.820483 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1088851, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.350946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.820493 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1088838, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3469458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.820946 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1088851, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.350946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04 | INFO  | Task 9ad8b6cf-4278-45ca-8d5b-2f3dea7e4b25 is in state SUCCESS 2025-06-22 20:12:04.820977 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1088839, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3469458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.820988 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1088851, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.350946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.821028 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False,
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1088838, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3469458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.821053 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1088841, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3489459, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.821077 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1088838, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3469458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 20:12:04.821088 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1088851, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.350946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.821105 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1088839, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3469458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.821115 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1088849, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3499458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.821126 | orchestrator | skipping: [testbed-node-1] => (item={'path': 
'/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1088841, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3489459, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.821136 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1088841, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3489459, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.821157 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1088839, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3469458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.821180 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1088851, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.350946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.821191 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1088841, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3489459, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.821207 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1088851, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.350946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': 
False, 'isgid': False})  2025-06-22 20:12:04.821217 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1088870, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.352946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.821236 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1088849, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3499458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.821247 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1088839, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3469458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 20:12:04.821272 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1088849, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3499458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.821283 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1088849, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3499458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.821293 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1088841, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3489459, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.821310 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1088841, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3489459, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.821320 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1088889, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.354946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.821331 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1088849, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3499458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.821341 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1088870, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.352946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.821362 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1088870, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.352946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.821372 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1088849, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 
'mtime': 1748870577.0, 'ctime': 1750620312.3499458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.821383 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1088870, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.352946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.821397 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1088917, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.356946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.821432 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1088851, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.350946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 20:12:04.821443 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1088889, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.354946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.821453 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1088889, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.354946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.821478 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1088889, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.354946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.821488 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1088870, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.352946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.821498 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1088870, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.352946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.821514 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1088917, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.356946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.821524 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1088879, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.353946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.821535 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1088917, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.356946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.821553 | orchestrator | skipping: [testbed-node-0] => (item={'path': 
'/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1088917, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.356946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.821569 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1088840, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.347946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.821581 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1088889, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.354946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.821593 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1088889, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.354946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.821609 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1088841, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3489459, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 20:12:04.821620 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1088879, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.353946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': 
False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.821632 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1088879, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.353946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.821649 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1088879, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.353946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.821665 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1088847, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3499458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.821677 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1088840, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.347946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.821688 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1088917, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.356946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.821700 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1088840, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.347946, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.821717 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1088917, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.356946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.821729 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1088847, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3499458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.821747 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1088840, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.347946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.821763 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1088837, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3469458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.821774 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1088847, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3499458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.821786 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 
1088879, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.353946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.821797 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1088837, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3469458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.821814 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1088849, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3499458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 20:12:04.821825 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1088847, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3499458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.821842 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1088879, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.353946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.821858 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1088856, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.350946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.821870 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': 
False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1088840, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.347946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.821882 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1088856, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.350946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.821894 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1088837, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3469458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.821910 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1088837, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3469458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.821927 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1088840, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.347946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.821937 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1088916, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.356946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.821951 | orchestrator | 
skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1088847, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3499458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.821962 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1088916, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.356946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.821972 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1088870, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.352946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 20:12:04.821982 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1088856, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.350946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.822100 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1088856, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.350946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.822124 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1088837, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3469458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.822135 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1088845, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3499458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.822150 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1088847, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3499458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.822161 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1088845, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3499458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.822172 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1088916, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.356946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.822182 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1088837, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3469458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.822198 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1088895, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 
1748870577.0, 'ctime': 1750620312.355946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.822215 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:12:04.822226 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1088856, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.350946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.822236 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1088916, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.356946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.822250 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1088895, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.355946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.822261 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:12:04.822271 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1088889, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.354946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 20:12:04.822281 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1088845, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3499458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.822291 | orchestrator | skipping: [testbed-node-5] => (item={'path': 
'/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1088856, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.350946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.822316 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1088845, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3499458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.822327 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1088916, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.356946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.822337 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1088895, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.355946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.822348 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:12:04.822362 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1088895, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.355946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.822372 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:12:04.822383 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1088916, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.356946, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.822393 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1088845, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3499458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.822403 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1088845, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3499458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.822425 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1088895, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.355946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.822435 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:12:04.822445 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1088895, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.355946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-06-22 20:12:04.822455 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:12:04.822465 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1088917, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.356946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 20:12:04.822480 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1088879, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.353946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 20:12:04.822490 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1088840, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.347946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 20:12:04.822500 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1088847, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3499458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 20:12:04.822521 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1088837, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3469458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 20:12:04.822532 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1088856, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.350946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 20:12:04.822542 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1088916, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.356946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-06-22 
20:12:04.822552 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1088845, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3499458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-22 20:12:04.822566 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1088895, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.355946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-06-22 20:12:04.822574 | orchestrator |
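
The per-item listing above is the tail of a single loop task: the prometheus role collects every *.rules file from the configuration overlay on the deployment host and copies it into the prometheus-server config directory, so only testbed-manager (the host running the prometheus_server service in this testbed) reports changed while every other node skips each item. A minimal sketch of that find-and-copy pattern, with illustrative paths and variable names rather than the actual kolla-ansible role source:

  # Sketch only: not the real role code; node_custom_config and
  # node_config_directory are assumed kolla-style variables.
  - name: Find prometheus alert rule files
    ansible.builtin.find:
      paths: "{{ node_custom_config }}/prometheus"
      patterns: "*.rules"
    delegate_to: localhost
    run_once: true
    register: prometheus_alert_rules

  - name: Copy prometheus alert rule files to the prometheus-server config directory
    ansible.builtin.copy:
      src: "{{ item.path }}"
      dest: "{{ node_config_directory }}/prometheus-server/{{ item.path | basename }}"
      mode: "0660"
    loop: "{{ prometheus_alert_rules.files }}"
    # hosts outside the prometheus group skip every item, which is what the
    # long "skipping: [testbed-node-N]" output above reflects
    when: inventory_hostname in groups['prometheus']
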
2025-06-22 20:12:04.822583 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ********************
2025-06-22 20:12:04.822591 | orchestrator | Sunday 22 June 2025 20:09:30 +0000 (0:00:27.759) 0:00:54.105 ***********
2025-06-22 20:12:04.822600 | orchestrator | ok: [testbed-manager -> localhost]
2025-06-22 20:12:04.822608 | orchestrator |
2025-06-22 20:12:04.822616 | orchestrator | TASK [prometheus : Find prometheus host config overrides] **********************
2025-06-22 20:12:04.822624 | orchestrator | Sunday 22 June 2025 20:09:31 +0000 (0:00:00.650) 0:00:54.756 ***********
2025-06-22 20:12:04.822632 | orchestrator | [WARNING]: Skipped
2025-06-22 20:12:04.822640 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-06-22 20:12:04.822648 | orchestrator | manager/prometheus.yml.d' path due to this access issue:
2025-06-22 20:12:04.822661 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-06-22 20:12:04.822669 | orchestrator | manager/prometheus.yml.d' is not a directory
2025-06-22 20:12:04.822677 | orchestrator | ok: [testbed-manager -> localhost]
2025-06-22 20:12:04.822685 | orchestrator | [WARNING]: Skipped
2025-06-22 20:12:04.822693 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-06-22 20:12:04.822701 | orchestrator | node-0/prometheus.yml.d' path due to this access issue:
2025-06-22 20:12:04.822709 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-06-22 20:12:04.822717 | orchestrator | node-0/prometheus.yml.d' is not a directory
2025-06-22 20:12:04.822725 | orchestrator | [WARNING]: Skipped
2025-06-22 20:12:04.822733 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-06-22 20:12:04.822741 | orchestrator | node-2/prometheus.yml.d' path due to this access issue:
2025-06-22 20:12:04.822749 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-06-22 20:12:04.822757 | orchestrator | node-2/prometheus.yml.d' is not a directory
2025-06-22 20:12:04.822765 | orchestrator | [WARNING]: Skipped
2025-06-22 20:12:04.822773 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-06-22 20:12:04.822781 | orchestrator | node-3/prometheus.yml.d' path due to this access issue:
2025-06-22 20:12:04.822789 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-06-22 20:12:04.822797 | orchestrator | node-3/prometheus.yml.d' is not a directory
2025-06-22 20:12:04.822805 | orchestrator | [WARNING]: Skipped
2025-06-22 20:12:04.822818 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-06-22 20:12:04.822826 | orchestrator | node-1/prometheus.yml.d' path due to this access issue:
2025-06-22 20:12:04.822834 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-06-22 20:12:04.822842 | orchestrator | node-1/prometheus.yml.d' is not a directory
2025-06-22 20:12:04.822850 | orchestrator | [WARNING]: Skipped
2025-06-22 20:12:04.822858 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-06-22 20:12:04.822866 | orchestrator | node-4/prometheus.yml.d' path due to this access issue:
2025-06-22 20:12:04.822874 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-06-22 20:12:04.822882 | orchestrator | node-4/prometheus.yml.d' is not a directory
2025-06-22 20:12:04.822890 | orchestrator | [WARNING]: Skipped
2025-06-22 20:12:04.822898 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-06-22 20:12:04.822905 | orchestrator | node-5/prometheus.yml.d' path due to this access issue:
2025-06-22 20:12:04.822913 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-06-22 20:12:04.822921 | orchestrator | node-5/prometheus.yml.d' is not a directory
2025-06-22 20:12:04.822929 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-06-22 20:12:04.822937 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-06-22 20:12:04.822945 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-06-22 20:12:04.822953 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-06-22 20:12:04.822961 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-06-22 20:12:04.822968 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-06-22 20:12:04.822976 | orchestrator |
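
The [WARNING] lines emitted by the override-discovery tasks above are harmless: for every host the role probes an optional prometheus.yml.d overlay directory with the find module, and Ansible prints "Skipped ... is not a directory" whenever that optional directory simply does not exist under /opt/configuration. A hedged sketch of such a probe (the register name and exact path layout are assumptions):

  # Sketch only: a missing optional override directory merely produces the
  # "[WARNING]: Skipped ... is not a directory" message seen above.
  - name: Find prometheus host config overrides
    ansible.builtin.find:
      paths: "{{ node_custom_config }}/prometheus/{{ inventory_hostname }}/prometheus.yml.d"
      patterns: "*.yml"
    delegate_to: localhost
    register: prometheus_host_config_overrides_files
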
2025-06-22 20:12:04.822984 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************
2025-06-22 20:12:04.823013 | orchestrator | Sunday 22 June 2025 20:09:32 +0000 (0:00:01.782) 0:00:56.538 ***********
2025-06-22 20:12:04.823022 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-06-22 20:12:04.823031 | orchestrator | skipping: [testbed-node-0]
2025-06-22 20:12:04.823039 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-06-22 20:12:04.823053 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-06-22 20:12:04.823061 | orchestrator | skipping: [testbed-node-1]
2025-06-22 20:12:04.823069 | orchestrator | skipping: [testbed-node-2]
2025-06-22 20:12:04.823077 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-06-22 20:12:04.823085 | orchestrator | skipping: [testbed-node-3]
2025-06-22 20:12:04.823093 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-06-22 20:12:04.823101 | orchestrator | skipping: [testbed-node-4]
2025-06-22 20:12:04.823113 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-06-22 20:12:04.823121 | orchestrator | skipping: [testbed-node-5]
2025-06-22 20:12:04.823129 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-06-22 20:12:04.823137 | orchestrator |
2025-06-22 20:12:04.823145 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ********************
2025-06-22 20:12:04.823154 | orchestrator | Sunday 22 June 2025 20:09:46 +0000 (0:00:14.064) 0:01:10.603 ***********
2025-06-22 20:12:04.823162 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-06-22 20:12:04.823170 | orchestrator | skipping: [testbed-node-0]
2025-06-22 20:12:04.823178 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-06-22 20:12:04.823186 | orchestrator | skipping: [testbed-node-2]
2025-06-22 20:12:04.823194 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-06-22 20:12:04.823201 | orchestrator | skipping: [testbed-node-1]
2025-06-22 20:12:04.823209 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-06-22 20:12:04.823217 | orchestrator | skipping: [testbed-node-3]
2025-06-22 20:12:04.823225 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-06-22 20:12:04.823233 | orchestrator | skipping: [testbed-node-5]
2025-06-22 20:12:04.823241 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-06-22 20:12:04.823249 | orchestrator | skipping: [testbed-node-4]
2025-06-22 20:12:04.823257 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-06-22 20:12:04.823265 | orchestrator |
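
The two template tasks above follow the same placement rule as the rule files: prometheus.yml.j2 and prometheus-web.yml.j2 are rendered only on the host that runs the prometheus_server service, here testbed-manager, and every other node skips the item. A sketch of the pattern, assuming kolla-style variables and lookup order (not the literal role code):

  # Sketch only: template search order and variable names are assumptions.
  - name: Copy over prometheus config file
    ansible.builtin.template:
      src: "{{ item }}"
      dest: "{{ node_config_directory }}/prometheus-server/prometheus.yml"
      mode: "0660"
    with_first_found:
      - "{{ node_custom_config }}/prometheus/{{ inventory_hostname }}/prometheus.yml"
      - "{{ node_custom_config }}/prometheus/prometheus.yml"
      - "prometheus.yml.j2"
    # only hosts that actually run the prometheus_server container render the
    # template; everyone else reports "skipping"
    when: inventory_hostname in groups['prometheus']
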
(item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2025-06-22 20:12:04.823367 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-06-22 20:12:04.823380 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:12:04.823389 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-06-22 20:12:04.823397 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:12:04.823405 | orchestrator | 2025-06-22 20:12:04.823413 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2025-06-22 20:12:04.823421 | orchestrator | Sunday 22 June 2025 20:09:51 +0000 (0:00:01.341) 0:01:14.894 *********** 2025-06-22 20:12:04.823429 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-22 20:12:04.823437 | orchestrator | 2025-06-22 20:12:04.823445 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2025-06-22 20:12:04.823453 | orchestrator | Sunday 22 June 2025 20:09:51 +0000 (0:00:00.673) 0:01:15.568 *********** 2025-06-22 20:12:04.823461 | orchestrator | skipping: [testbed-manager] 2025-06-22 20:12:04.823469 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:12:04.823477 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:12:04.823485 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:12:04.823493 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:12:04.823501 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:12:04.823508 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:12:04.823516 | orchestrator | 2025-06-22 20:12:04.823524 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2025-06-22 20:12:04.823532 | orchestrator | Sunday 22 June 2025 20:09:52 +0000 (0:00:00.684) 0:01:16.252 *********** 2025-06-22 20:12:04.823540 | orchestrator | skipping: [testbed-manager] 2025-06-22 20:12:04.823548 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:12:04.823556 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:12:04.823564 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:12:04.823572 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:12:04.823580 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:12:04.823588 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:12:04.823596 | orchestrator | 2025-06-22 20:12:04.823604 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2025-06-22 20:12:04.823613 | orchestrator | Sunday 22 June 2025 20:09:54 +0000 (0:00:01.765) 0:01:18.018 *********** 2025-06-22 20:12:04.823621 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-22 20:12:04.823632 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-22 20:12:04.823640 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-22 20:12:04.823648 | orchestrator | skipping: [testbed-manager] 2025-06-22 20:12:04.823656 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:12:04.823664 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:12:04.823673 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  
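The [WARNING] "Skipped ... prometheus.yml.d ... is not a directory" messages earlier in this play are benign: the prometheus role looks for optional per-host override directories at environments/kolla/files/overlays/prometheus/<inventory_hostname>/prometheus.yml.d/ and simply skips any host for which no such directory exists. As a rough sketch only (the file name, job name and target below are hypothetical and not part of this deployment), such an override fragment is plain Prometheus scrape configuration in YAML that is merged into the prometheus.yml rendered for that host (exact merge behaviour depends on the kolla-ansible release):

  # environments/kolla/files/overlays/prometheus/testbed-manager/prometheus.yml.d/99-extra.yml
  # Hypothetical override fragment; everything below is illustrative, not taken from this job.
  scrape_configs:
    - job_name: extra_exporter          # placeholder job name
      scrape_interval: 60s
      static_configs:
        - targets:
            - "192.168.16.5:9100"       # placeholder target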
2025-06-22 20:12:04.823680 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:12:04.823688 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-22 20:12:04.823697 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:12:04.823704 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-22 20:12:04.823713 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:12:04.823721 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-06-22 20:12:04.823729 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:12:04.823737 | orchestrator | 2025-06-22 20:12:04.823745 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2025-06-22 20:12:04.823753 | orchestrator | Sunday 22 June 2025 20:09:55 +0000 (0:00:01.239) 0:01:19.257 *********** 2025-06-22 20:12:04.823761 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-06-22 20:12:04.823774 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:12:04.823782 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-06-22 20:12:04.823790 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:12:04.823798 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-06-22 20:12:04.823806 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:12:04.823814 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-06-22 20:12:04.823822 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:12:04.823830 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-06-22 20:12:04.823838 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:12:04.823846 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2025-06-22 20:12:04.823859 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-06-22 20:12:04.823867 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:12:04.823875 | orchestrator | 2025-06-22 20:12:04.823883 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2025-06-22 20:12:04.823891 | orchestrator | Sunday 22 June 2025 20:09:56 +0000 (0:00:01.218) 0:01:20.476 *********** 2025-06-22 20:12:04.823899 | orchestrator | [WARNING]: Skipped 2025-06-22 20:12:04.823907 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2025-06-22 20:12:04.823915 | orchestrator | due to this access issue: 2025-06-22 20:12:04.823923 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2025-06-22 20:12:04.823931 | orchestrator | not a directory 2025-06-22 20:12:04.823939 | orchestrator | ok: [testbed-manager -> localhost] 2025-06-22 20:12:04.823947 | orchestrator | 2025-06-22 20:12:04.823955 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2025-06-22 20:12:04.823963 | orchestrator | Sunday 22 June 2025 20:09:57 +0000 (0:00:01.143) 0:01:21.619 
*********** 2025-06-22 20:12:04.823971 | orchestrator | skipping: [testbed-manager] 2025-06-22 20:12:04.823979 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:12:04.823987 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:12:04.824014 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:12:04.824023 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:12:04.824030 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:12:04.824038 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:12:04.824046 | orchestrator | 2025-06-22 20:12:04.824054 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2025-06-22 20:12:04.824062 | orchestrator | Sunday 22 June 2025 20:09:58 +0000 (0:00:00.936) 0:01:22.556 *********** 2025-06-22 20:12:04.824070 | orchestrator | skipping: [testbed-manager] 2025-06-22 20:12:04.824078 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:12:04.824086 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:12:04.824093 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:12:04.824101 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:12:04.824109 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:12:04.824117 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:12:04.824125 | orchestrator | 2025-06-22 20:12:04.824133 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2025-06-22 20:12:04.824141 | orchestrator | Sunday 22 June 2025 20:09:59 +0000 (0:00:00.959) 0:01:23.515 *********** 2025-06-22 20:12:04.824154 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-06-22 20:12:04.824169 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 20:12:04.824177 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 20:12:04.824186 
| orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 20:12:04.824200 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:12:04.824209 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 20:12:04.824217 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 20:12:04.824226 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 20:12:04.824247 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 20:12:04.824256 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 
'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:12:04.824265 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-06-22 20:12:04.824280 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-06-22 20:12:04.824289 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:12:04.824297 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:12:04.824306 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 20:12:04.824324 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:12:04.824332 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 20:12:04.824341 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:12:04.824354 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 20:12:04.824362 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:12:04.824370 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 
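The loop items in this task are entries of the role's container definition dictionary; flattened into single log lines they are hard to read. For readability only, the prometheus-node-exporter entry that appears repeatedly above corresponds to the following structure (values copied from the log output, no new configuration introduced):

  prometheus-node-exporter:
    container_name: prometheus_node_exporter
    group: prometheus-node-exporter
    enabled: true
    image: registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530
    pid_mode: host
    volumes:
      - /etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro
      - /etc/localtime:/etc/localtime:ro
      - /etc/timezone:/etc/timezone:ro
      - kolla_logs:/var/log/kolla/
      - /:/host:ro,rslave
    dimensions: {}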
2025-06-22 20:12:04.824379 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-22 20:12:04.824396 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 20:12:04.824405 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-22 20:12:04.824414 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-06-22 20:12:04.824422 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-06-22 20:12:04.824437 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:12:04.824446 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': 
{'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:12:04.824454 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-06-22 20:12:04.824468 | orchestrator | 2025-06-22 20:12:04.824477 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2025-06-22 20:12:04.824485 | orchestrator | Sunday 22 June 2025 20:10:04 +0000 (0:00:05.113) 0:01:28.629 *********** 2025-06-22 20:12:04.824493 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-06-22 20:12:04.824501 | orchestrator | skipping: [testbed-manager] 2025-06-22 20:12:04.824510 | orchestrator | 2025-06-22 20:12:04.824518 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-22 20:12:04.824526 | orchestrator | Sunday 22 June 2025 20:10:06 +0000 (0:00:01.446) 0:01:30.076 *********** 2025-06-22 20:12:04.824534 | orchestrator | 2025-06-22 20:12:04.824542 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-22 20:12:04.824550 | orchestrator | Sunday 22 June 2025 20:10:06 +0000 (0:00:00.351) 0:01:30.427 *********** 2025-06-22 20:12:04.824558 | orchestrator | 2025-06-22 20:12:04.824566 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-22 20:12:04.824574 | orchestrator | Sunday 22 June 2025 20:10:06 +0000 (0:00:00.100) 0:01:30.527 *********** 2025-06-22 20:12:04.824582 | orchestrator | 2025-06-22 20:12:04.824593 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-22 20:12:04.824602 | orchestrator | Sunday 22 June 2025 20:10:07 +0000 (0:00:00.119) 0:01:30.647 *********** 2025-06-22 20:12:04.824610 | orchestrator | 2025-06-22 20:12:04.824618 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-22 20:12:04.824626 | orchestrator | Sunday 22 June 2025 20:10:07 +0000 (0:00:00.076) 0:01:30.723 *********** 2025-06-22 20:12:04.824633 | orchestrator | 2025-06-22 20:12:04.824641 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-22 20:12:04.824649 | orchestrator | Sunday 22 June 2025 20:10:07 +0000 (0:00:00.058) 0:01:30.782 *********** 2025-06-22 20:12:04.824657 | orchestrator | 2025-06-22 20:12:04.824665 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-06-22 20:12:04.824673 | orchestrator | Sunday 22 June 2025 20:10:07 +0000 (0:00:00.063) 0:01:30.845 *********** 2025-06-22 20:12:04.824681 | orchestrator | 2025-06-22 20:12:04.824689 | orchestrator | RUNNING HANDLER 
[prometheus : Restart prometheus-server container] ************* 2025-06-22 20:12:04.824697 | orchestrator | Sunday 22 June 2025 20:10:07 +0000 (0:00:00.086) 0:01:30.931 *********** 2025-06-22 20:12:04.824705 | orchestrator | changed: [testbed-manager] 2025-06-22 20:12:04.824713 | orchestrator | 2025-06-22 20:12:04.824721 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2025-06-22 20:12:04.824729 | orchestrator | Sunday 22 June 2025 20:10:28 +0000 (0:00:20.889) 0:01:51.821 *********** 2025-06-22 20:12:04.824737 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:12:04.824745 | orchestrator | changed: [testbed-manager] 2025-06-22 20:12:04.824753 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:12:04.824761 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:12:04.824769 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:12:04.824777 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:12:04.824785 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:12:04.824793 | orchestrator | 2025-06-22 20:12:04.824801 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2025-06-22 20:12:04.824809 | orchestrator | Sunday 22 June 2025 20:10:44 +0000 (0:00:16.232) 0:02:08.054 *********** 2025-06-22 20:12:04.824817 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:12:04.824824 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:12:04.824832 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:12:04.824840 | orchestrator | 2025-06-22 20:12:04.824848 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2025-06-22 20:12:04.824861 | orchestrator | Sunday 22 June 2025 20:10:52 +0000 (0:00:07.661) 0:02:15.715 *********** 2025-06-22 20:12:04.824869 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:12:04.824877 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:12:04.824885 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:12:04.824892 | orchestrator | 2025-06-22 20:12:04.824900 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2025-06-22 20:12:04.824909 | orchestrator | Sunday 22 June 2025 20:11:03 +0000 (0:00:11.455) 0:02:27.170 *********** 2025-06-22 20:12:04.824921 | orchestrator | changed: [testbed-manager] 2025-06-22 20:12:04.824930 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:12:04.824937 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:12:04.824945 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:12:04.824953 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:12:04.824961 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:12:04.824969 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:12:04.824977 | orchestrator | 2025-06-22 20:12:04.824985 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2025-06-22 20:12:04.825073 | orchestrator | Sunday 22 June 2025 20:11:19 +0000 (0:00:15.546) 0:02:42.717 *********** 2025-06-22 20:12:04.825084 | orchestrator | changed: [testbed-manager] 2025-06-22 20:12:04.825092 | orchestrator | 2025-06-22 20:12:04.825100 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2025-06-22 20:12:04.825108 | orchestrator | Sunday 22 June 2025 20:11:32 +0000 (0:00:13.108) 0:02:55.826 *********** 2025-06-22 20:12:04.825116 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:12:04.825124 | 
orchestrator | changed: [testbed-node-0] 2025-06-22 20:12:04.825132 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:12:04.825140 | orchestrator | 2025-06-22 20:12:04.825148 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2025-06-22 20:12:04.825156 | orchestrator | Sunday 22 June 2025 20:11:42 +0000 (0:00:10.674) 0:03:06.501 *********** 2025-06-22 20:12:04.825164 | orchestrator | changed: [testbed-manager] 2025-06-22 20:12:04.825172 | orchestrator | 2025-06-22 20:12:04.825179 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2025-06-22 20:12:04.825188 | orchestrator | Sunday 22 June 2025 20:11:52 +0000 (0:00:10.029) 0:03:16.530 *********** 2025-06-22 20:12:04.825196 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:12:04.825203 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:12:04.825211 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:12:04.825219 | orchestrator | 2025-06-22 20:12:04.825227 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 20:12:04.825235 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-06-22 20:12:04.825244 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-06-22 20:12:04.825252 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-06-22 20:12:04.825260 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-06-22 20:12:04.825269 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-06-22 20:12:04.825282 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-06-22 20:12:04.825290 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-06-22 20:12:04.825298 | orchestrator | 2025-06-22 20:12:04.825306 | orchestrator | 2025-06-22 20:12:04.825321 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 20:12:04.825330 | orchestrator | Sunday 22 June 2025 20:12:03 +0000 (0:00:10.781) 0:03:27.312 *********** 2025-06-22 20:12:04.825338 | orchestrator | =============================================================================== 2025-06-22 20:12:04.825346 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 27.76s 2025-06-22 20:12:04.825354 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 20.89s 2025-06-22 20:12:04.825361 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 16.23s 2025-06-22 20:12:04.825369 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 15.55s 2025-06-22 20:12:04.825377 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 14.06s 2025-06-22 20:12:04.825385 | orchestrator | prometheus : Restart prometheus-alertmanager container ----------------- 13.11s 2025-06-22 20:12:04.825393 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 11.46s 2025-06-22 20:12:04.825401 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 10.78s 2025-06-22 20:12:04.825409 | 
orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 10.67s 2025-06-22 20:12:04.825416 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------ 10.03s 2025-06-22 20:12:04.825424 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container --------------- 7.66s 2025-06-22 20:12:04.825432 | orchestrator | prometheus : Copying over config.json files ----------------------------- 7.57s 2025-06-22 20:12:04.825440 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.52s 2025-06-22 20:12:04.825446 | orchestrator | prometheus : Check prometheus containers -------------------------------- 5.11s 2025-06-22 20:12:04.825453 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 3.12s 2025-06-22 20:12:04.825460 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 2.95s 2025-06-22 20:12:04.825467 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS certificate --- 2.67s 2025-06-22 20:12:04.825479 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 2.40s 2025-06-22 20:12:04.825486 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 1.78s 2025-06-22 20:12:04.825493 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 1.77s 2025-06-22 20:12:04.825500 | orchestrator | 2025-06-22 20:12:04 | INFO  | Task 4136a3fe-1dd6-4ce5-bf47-592f07e29278 is in state STARTED 2025-06-22 20:12:04.825506 | orchestrator | 2025-06-22 20:12:04 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:12:07.847456 | orchestrator | 2025-06-22 20:12:07 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:12:07.847874 | orchestrator | 2025-06-22 20:12:07 | INFO  | Task bdf0a33e-3cf8-4e8a-908e-c467b0db543b is in state STARTED 2025-06-22 20:12:07.848659 | orchestrator | 2025-06-22 20:12:07 | INFO  | Task b2c6a072-e0d1-4613-992c-f570f82e626a is in state STARTED 2025-06-22 20:12:07.849518 | orchestrator | 2025-06-22 20:12:07 | INFO  | Task 4136a3fe-1dd6-4ce5-bf47-592f07e29278 is in state STARTED 2025-06-22 20:12:07.849729 | orchestrator | 2025-06-22 20:12:07 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:12:10.882460 | orchestrator | 2025-06-22 20:12:10 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:12:10.884689 | orchestrator | 2025-06-22 20:12:10 | INFO  | Task bdf0a33e-3cf8-4e8a-908e-c467b0db543b is in state STARTED 2025-06-22 20:12:10.886936 | orchestrator | 2025-06-22 20:12:10 | INFO  | Task b2c6a072-e0d1-4613-992c-f570f82e626a is in state STARTED 2025-06-22 20:12:10.888229 | orchestrator | 2025-06-22 20:12:10 | INFO  | Task 4136a3fe-1dd6-4ce5-bf47-592f07e29278 is in state STARTED 2025-06-22 20:12:10.888283 | orchestrator | 2025-06-22 20:12:10 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:12:13.932452 | orchestrator | 2025-06-22 20:12:13 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:12:13.937434 | orchestrator | 2025-06-22 20:12:13 | INFO  | Task bdf0a33e-3cf8-4e8a-908e-c467b0db543b is in state STARTED 2025-06-22 20:12:13.941396 | orchestrator | 2025-06-22 20:12:13 | INFO  | Task b2c6a072-e0d1-4613-992c-f570f82e626a is in state STARTED 2025-06-22 20:12:13.944534 | orchestrator | 2025-06-22 20:12:13 | INFO  | Task 4136a3fe-1dd6-4ce5-bf47-592f07e29278 
is in state STARTED 2025-06-22 20:12:13.944564 | orchestrator | 2025-06-22 20:12:13 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:12:17.000526 | orchestrator | 2025-06-22 20:12:16 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:12:17.003830 | orchestrator | 2025-06-22 20:12:17 | INFO  | Task bdf0a33e-3cf8-4e8a-908e-c467b0db543b is in state STARTED 2025-06-22 20:12:17.005782 | orchestrator | 2025-06-22 20:12:17 | INFO  | Task b2c6a072-e0d1-4613-992c-f570f82e626a is in state STARTED 2025-06-22 20:12:17.007248 | orchestrator | 2025-06-22 20:12:17 | INFO  | Task 4136a3fe-1dd6-4ce5-bf47-592f07e29278 is in state STARTED 2025-06-22 20:12:17.007310 | orchestrator | 2025-06-22 20:12:17 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:12:20.048133 | orchestrator | 2025-06-22 20:12:20 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:12:20.050352 | orchestrator | 2025-06-22 20:12:20 | INFO  | Task bdf0a33e-3cf8-4e8a-908e-c467b0db543b is in state STARTED 2025-06-22 20:12:20.051548 | orchestrator | 2025-06-22 20:12:20 | INFO  | Task b2c6a072-e0d1-4613-992c-f570f82e626a is in state STARTED 2025-06-22 20:12:20.054658 | orchestrator | 2025-06-22 20:12:20 | INFO  | Task 4136a3fe-1dd6-4ce5-bf47-592f07e29278 is in state STARTED 2025-06-22 20:12:20.054709 | orchestrator | 2025-06-22 20:12:20 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:12:23.104412 | orchestrator | 2025-06-22 20:12:23 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:12:23.105906 | orchestrator | 2025-06-22 20:12:23 | INFO  | Task bdf0a33e-3cf8-4e8a-908e-c467b0db543b is in state STARTED 2025-06-22 20:12:23.107854 | orchestrator | 2025-06-22 20:12:23 | INFO  | Task b4f7c02c-ce38-41e1-98b8-c3ec8dd85194 is in state STARTED 2025-06-22 20:12:23.110678 | orchestrator | 2025-06-22 20:12:23 | INFO  | Task b2c6a072-e0d1-4613-992c-f570f82e626a is in state SUCCESS 2025-06-22 20:12:23.111999 | orchestrator | 2025-06-22 20:12:23.112040 | orchestrator | 2025-06-22 20:12:23.112054 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-22 20:12:23.112066 | orchestrator | 2025-06-22 20:12:23.112078 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-22 20:12:23.112089 | orchestrator | Sunday 22 June 2025 20:09:31 +0000 (0:00:00.244) 0:00:00.244 *********** 2025-06-22 20:12:23.112101 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:12:23.112113 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:12:23.112164 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:12:23.112177 | orchestrator | 2025-06-22 20:12:23.112188 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-22 20:12:23.112199 | orchestrator | Sunday 22 June 2025 20:09:31 +0000 (0:00:00.334) 0:00:00.579 *********** 2025-06-22 20:12:23.112211 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2025-06-22 20:12:23.112223 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2025-06-22 20:12:23.112234 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2025-06-22 20:12:23.112245 | orchestrator | 2025-06-22 20:12:23.112256 | orchestrator | PLAY [Apply role glance] ******************************************************* 2025-06-22 20:12:23.112295 | orchestrator | 2025-06-22 20:12:23.112307 | orchestrator | TASK [glance : include_tasks] 
************************************************** 2025-06-22 20:12:23.112318 | orchestrator | Sunday 22 June 2025 20:09:32 +0000 (0:00:00.641) 0:00:01.221 *********** 2025-06-22 20:12:23.112329 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:12:23.112341 | orchestrator | 2025-06-22 20:12:23.112352 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2025-06-22 20:12:23.112363 | orchestrator | Sunday 22 June 2025 20:09:32 +0000 (0:00:00.589) 0:00:01.811 *********** 2025-06-22 20:12:23.112374 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2025-06-22 20:12:23.112385 | orchestrator | 2025-06-22 20:12:23.112397 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2025-06-22 20:12:23.112408 | orchestrator | Sunday 22 June 2025 20:09:36 +0000 (0:00:03.709) 0:00:05.520 *********** 2025-06-22 20:12:23.112419 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2025-06-22 20:12:23.112430 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2025-06-22 20:12:23.112441 | orchestrator | 2025-06-22 20:12:23.112452 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2025-06-22 20:12:23.112463 | orchestrator | Sunday 22 June 2025 20:09:43 +0000 (0:00:06.830) 0:00:12.351 *********** 2025-06-22 20:12:23.112474 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-22 20:12:23.112486 | orchestrator | 2025-06-22 20:12:23.112497 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2025-06-22 20:12:23.112508 | orchestrator | Sunday 22 June 2025 20:09:46 +0000 (0:00:02.816) 0:00:15.167 *********** 2025-06-22 20:12:23.112519 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-22 20:12:23.112530 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2025-06-22 20:12:23.112542 | orchestrator | 2025-06-22 20:12:23.112552 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2025-06-22 20:12:23.112563 | orchestrator | Sunday 22 June 2025 20:09:50 +0000 (0:00:04.289) 0:00:19.456 *********** 2025-06-22 20:12:23.112574 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-22 20:12:23.112585 | orchestrator | 2025-06-22 20:12:23.112596 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2025-06-22 20:12:23.112621 | orchestrator | Sunday 22 June 2025 20:09:54 +0000 (0:00:03.633) 0:00:23.090 *********** 2025-06-22 20:12:23.112633 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2025-06-22 20:12:23.112644 | orchestrator | 2025-06-22 20:12:23.112655 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2025-06-22 20:12:23.112666 | orchestrator | Sunday 22 June 2025 20:09:57 +0000 (0:00:03.655) 0:00:26.746 *********** 2025-06-22 20:12:23.112698 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 
'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-22 20:12:23.112725 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-22 20:12:23.112744 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-22 20:12:23.112763 | orchestrator | 2025-06-22 20:12:23.112775 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-06-22 20:12:23.112786 | orchestrator | Sunday 22 June 2025 20:10:01 +0000 (0:00:03.832) 0:00:30.578 *********** 2025-06-22 20:12:23.112803 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:12:23.112816 | orchestrator | 2025-06-22 20:12:23.112827 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2025-06-22 20:12:23.112838 | orchestrator | Sunday 22 June 2025 20:10:02 +0000 (0:00:00.997) 0:00:31.575 *********** 2025-06-22 20:12:23.112849 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:12:23.112860 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:12:23.112871 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:12:23.112882 | orchestrator | 2025-06-22 20:12:23.112893 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2025-06-22 20:12:23.112904 | orchestrator | Sunday 22 June 2025 20:10:07 +0000 (0:00:04.809) 0:00:36.385 *********** 2025-06-22 20:12:23.112915 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-06-22 20:12:23.112926 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-06-22 20:12:23.112937 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-06-22 20:12:23.112947 | orchestrator | 2025-06-22 20:12:23.112984 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2025-06-22 20:12:23.112996 | orchestrator | Sunday 22 June 2025 20:10:08 +0000 (0:00:01.520) 0:00:37.906 *********** 2025-06-22 20:12:23.113006 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-06-22 20:12:23.113052 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 
'enabled': True}) 2025-06-22 20:12:23.113064 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-06-22 20:12:23.113075 | orchestrator | 2025-06-22 20:12:23.113086 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2025-06-22 20:12:23.113097 | orchestrator | Sunday 22 June 2025 20:10:10 +0000 (0:00:01.120) 0:00:39.026 *********** 2025-06-22 20:12:23.113108 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:12:23.113119 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:12:23.113131 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:12:23.113141 | orchestrator | 2025-06-22 20:12:23.113152 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2025-06-22 20:12:23.113163 | orchestrator | Sunday 22 June 2025 20:10:10 +0000 (0:00:00.780) 0:00:39.807 *********** 2025-06-22 20:12:23.113174 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:12:23.113184 | orchestrator | 2025-06-22 20:12:23.113196 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2025-06-22 20:12:23.113207 | orchestrator | Sunday 22 June 2025 20:10:10 +0000 (0:00:00.127) 0:00:39.935 *********** 2025-06-22 20:12:23.113218 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:12:23.113255 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:12:23.113266 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:12:23.113277 | orchestrator | 2025-06-22 20:12:23.113288 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-06-22 20:12:23.113299 | orchestrator | Sunday 22 June 2025 20:10:11 +0000 (0:00:00.280) 0:00:40.215 *********** 2025-06-22 20:12:23.113310 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:12:23.113321 | orchestrator | 2025-06-22 20:12:23.113332 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2025-06-22 20:12:23.113349 | orchestrator | Sunday 22 June 2025 20:10:11 +0000 (0:00:00.478) 0:00:40.694 *********** 2025-06-22 20:12:23.113378 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-22 20:12:23.113393 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-22 20:12:23.113411 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-22 20:12:23.113430 | orchestrator | 2025-06-22 20:12:23.113441 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2025-06-22 20:12:23.113452 | orchestrator | Sunday 22 June 2025 20:10:15 +0000 (0:00:03.433) 0:00:44.128 *********** 2025-06-22 20:12:23.113474 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-22 20:12:23.113487 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:12:23.113504 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': 
['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-22 20:12:23.113523 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:12:23.113543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-22 20:12:23.113556 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:12:23.113567 | orchestrator | 2025-06-22 20:12:23.113578 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2025-06-22 20:12:23.113589 | orchestrator | Sunday 22 June 2025 20:10:18 +0000 (0:00:03.364) 0:00:47.493 *********** 2025-06-22 20:12:23.113606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', 
'/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-22 20:12:23.113624 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:12:23.113643 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-22 20:12:23.113655 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:12:23.113667 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-06-22 20:12:23.113685 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:12:23.113696 | orchestrator | 2025-06-22 20:12:23.113707 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2025-06-22 20:12:23.113718 | orchestrator | Sunday 22 June 2025 20:10:23 +0000 (0:00:04.681) 0:00:52.174 *********** 2025-06-22 20:12:23.113729 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:12:23.113740 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:12:23.113752 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:12:23.113762 | orchestrator | 2025-06-22 20:12:23.113785 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2025-06-22 20:12:23.113796 | orchestrator | Sunday 22 June 2025 20:10:30 +0000 (0:00:07.295) 0:00:59.470 *********** 2025-06-22 20:12:23.113816 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-22 20:12:23.113829 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-22 20:12:23.113854 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 
6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-22 20:12:23.113867 | orchestrator | 2025-06-22 20:12:23.113878 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2025-06-22 20:12:23.113889 | orchestrator | Sunday 22 June 2025 20:10:37 +0000 (0:00:07.408) 0:01:06.879 *********** 2025-06-22 20:12:23.113900 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:12:23.113911 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:12:23.113922 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:12:23.113933 | orchestrator | 2025-06-22 20:12:23.113944 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2025-06-22 20:12:23.114186 | orchestrator | Sunday 22 June 2025 20:10:43 +0000 (0:00:05.590) 0:01:12.470 *********** 2025-06-22 20:12:23.114206 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:12:23.114218 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:12:23.114229 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:12:23.114240 | orchestrator | 2025-06-22 20:12:23.114252 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2025-06-22 20:12:23.114263 | orchestrator | Sunday 22 June 2025 20:10:49 +0000 (0:00:05.594) 0:01:18.064 *********** 2025-06-22 20:12:23.114274 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:12:23.114284 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:12:23.114295 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:12:23.114306 | orchestrator | 2025-06-22 20:12:23.114317 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2025-06-22 20:12:23.114328 | orchestrator | Sunday 22 June 2025 20:10:53 +0000 (0:00:04.002) 0:01:22.067 *********** 2025-06-22 20:12:23.114339 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:12:23.114350 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:12:23.114361 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:12:23.114372 | orchestrator | 2025-06-22 20:12:23.114383 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2025-06-22 20:12:23.114405 | orchestrator | Sunday 22 June 2025 20:10:59 +0000 (0:00:06.071) 0:01:28.138 *********** 2025-06-22 20:12:23.114416 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:12:23.114426 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:12:23.114437 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:12:23.114448 | orchestrator | 2025-06-22 20:12:23.114459 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2025-06-22 20:12:23.114471 | orchestrator | Sunday 22 June 2025 20:11:05 +0000 (0:00:05.953) 0:01:34.092 *********** 2025-06-22 20:12:23.114481 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:12:23.114492 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:12:23.114503 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:12:23.114514 | orchestrator | 2025-06-22 20:12:23.114525 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2025-06-22 20:12:23.114536 | orchestrator | Sunday 22 June 2025 20:11:05 +0000 (0:00:00.844) 0:01:34.937 *********** 2025-06-22 
20:12:23.114547 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-06-22 20:12:23.114558 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:12:23.114569 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-06-22 20:12:23.114580 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:12:23.114591 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2025-06-22 20:12:23.114602 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:12:23.114613 | orchestrator | 2025-06-22 20:12:23.114624 | orchestrator | TASK [glance : Check glance containers] **************************************** 2025-06-22 20:12:23.114635 | orchestrator | Sunday 22 June 2025 20:11:12 +0000 (0:00:06.234) 0:01:41.171 *********** 2025-06-22 20:12:23.114654 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-22 20:12:23.114677 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-22 20:12:23.114739 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250530', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-06-22 20:12:23.114753 | orchestrator | 2025-06-22 20:12:23.114764 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-06-22 20:12:23.114775 | orchestrator | Sunday 22 June 2025 20:11:16 +0000 (0:00:04.421) 0:01:45.592 *********** 2025-06-22 20:12:23.114786 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:12:23.114797 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:12:23.114808 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:12:23.114821 | orchestrator | 2025-06-22 20:12:23.114833 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2025-06-22 20:12:23.114845 | orchestrator | Sunday 22 June 2025 20:11:16 +0000 (0:00:00.295) 0:01:45.888 *********** 2025-06-22 20:12:23.114857 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:12:23.114869 | orchestrator | 2025-06-22 20:12:23.114881 | 
orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2025-06-22 20:12:23.114894 | orchestrator | Sunday 22 June 2025 20:11:18 +0000 (0:00:01.850) 0:01:47.739 *********** 2025-06-22 20:12:23.114913 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:12:23.114925 | orchestrator | 2025-06-22 20:12:23.114938 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2025-06-22 20:12:23.114976 | orchestrator | Sunday 22 June 2025 20:11:20 +0000 (0:00:02.179) 0:01:49.918 *********** 2025-06-22 20:12:23.114990 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:12:23.115002 | orchestrator | 2025-06-22 20:12:23.115015 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2025-06-22 20:12:23.115033 | orchestrator | Sunday 22 June 2025 20:11:22 +0000 (0:00:02.072) 0:01:51.991 *********** 2025-06-22 20:12:23.115046 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:12:23.115058 | orchestrator | 2025-06-22 20:12:23.115071 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2025-06-22 20:12:23.115083 | orchestrator | Sunday 22 June 2025 20:11:48 +0000 (0:00:25.467) 0:02:17.458 *********** 2025-06-22 20:12:23.115095 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:12:23.115107 | orchestrator | 2025-06-22 20:12:23.115119 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-06-22 20:12:23.115131 | orchestrator | Sunday 22 June 2025 20:11:50 +0000 (0:00:02.374) 0:02:19.832 *********** 2025-06-22 20:12:23.115143 | orchestrator | 2025-06-22 20:12:23.115155 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-06-22 20:12:23.115168 | orchestrator | Sunday 22 June 2025 20:11:50 +0000 (0:00:00.055) 0:02:19.888 *********** 2025-06-22 20:12:23.115180 | orchestrator | 2025-06-22 20:12:23.115191 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-06-22 20:12:23.115202 | orchestrator | Sunday 22 June 2025 20:11:50 +0000 (0:00:00.056) 0:02:19.945 *********** 2025-06-22 20:12:23.115212 | orchestrator | 2025-06-22 20:12:23.115223 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2025-06-22 20:12:23.115234 | orchestrator | Sunday 22 June 2025 20:11:50 +0000 (0:00:00.059) 0:02:20.004 *********** 2025-06-22 20:12:23.115245 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:12:23.115256 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:12:23.115267 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:12:23.115277 | orchestrator | 2025-06-22 20:12:23.115288 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 20:12:23.115300 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-06-22 20:12:23.115313 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-06-22 20:12:23.115324 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-06-22 20:12:23.115335 | orchestrator | 2025-06-22 20:12:23.115345 | orchestrator | 2025-06-22 20:12:23.115356 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 20:12:23.115367 | orchestrator | Sunday 22 June 2025 
20:12:21 +0000 (0:00:30.046) 0:02:50.050 *********** 2025-06-22 20:12:23.115377 | orchestrator | =============================================================================== 2025-06-22 20:12:23.115388 | orchestrator | glance : Restart glance-api container ---------------------------------- 30.05s 2025-06-22 20:12:23.115399 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 25.47s 2025-06-22 20:12:23.115410 | orchestrator | glance : Copying over config.json files for services -------------------- 7.41s 2025-06-22 20:12:23.115420 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 7.30s 2025-06-22 20:12:23.115431 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.83s 2025-06-22 20:12:23.115442 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 6.23s 2025-06-22 20:12:23.115453 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 6.07s 2025-06-22 20:12:23.115468 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 5.95s 2025-06-22 20:12:23.115487 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 5.59s 2025-06-22 20:12:23.115498 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 5.59s 2025-06-22 20:12:23.115508 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 4.81s 2025-06-22 20:12:23.115519 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 4.68s 2025-06-22 20:12:23.115530 | orchestrator | glance : Check glance containers ---------------------------------------- 4.42s 2025-06-22 20:12:23.115540 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.29s 2025-06-22 20:12:23.115551 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 4.00s 2025-06-22 20:12:23.115562 | orchestrator | glance : Ensuring config directories exist ------------------------------ 3.83s 2025-06-22 20:12:23.115572 | orchestrator | service-ks-register : glance | Creating services ------------------------ 3.71s 2025-06-22 20:12:23.115583 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 3.66s 2025-06-22 20:12:23.115594 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 3.63s 2025-06-22 20:12:23.115604 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 3.43s 2025-06-22 20:12:23.115616 | orchestrator | 2025-06-22 20:12:23 | INFO  | Task 4136a3fe-1dd6-4ce5-bf47-592f07e29278 is in state STARTED 2025-06-22 20:12:23.115627 | orchestrator | 2025-06-22 20:12:23 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:12:26.158634 | orchestrator | 2025-06-22 20:12:26 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:12:26.159548 | orchestrator | 2025-06-22 20:12:26 | INFO  | Task bdf0a33e-3cf8-4e8a-908e-c467b0db543b is in state STARTED 2025-06-22 20:12:26.162479 | orchestrator | 2025-06-22 20:12:26 | INFO  | Task b4f7c02c-ce38-41e1-98b8-c3ec8dd85194 is in state STARTED 2025-06-22 20:12:26.163316 | orchestrator | 2025-06-22 20:12:26 | INFO  | Task 4136a3fe-1dd6-4ce5-bf47-592f07e29278 is in state STARTED 2025-06-22 20:12:26.163370 | orchestrator | 2025-06-22 20:12:26 | INFO  | Wait 1 second(s) until the next 
check 2025-06-22 20:12:29.203506 | orchestrator | 2025-06-22 20:12:29 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:12:29.204807 | orchestrator | 2025-06-22 20:12:29 | INFO  | Task bdf0a33e-3cf8-4e8a-908e-c467b0db543b is in state STARTED 2025-06-22 20:12:29.207078 | orchestrator | 2025-06-22 20:12:29 | INFO  | Task b4f7c02c-ce38-41e1-98b8-c3ec8dd85194 is in state STARTED 2025-06-22 20:12:29.210192 | orchestrator | 2025-06-22 20:12:29 | INFO  | Task 4136a3fe-1dd6-4ce5-bf47-592f07e29278 is in state STARTED 2025-06-22 20:12:29.210247 | orchestrator | 2025-06-22 20:12:29 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:12:32.255151 | orchestrator | 2025-06-22 20:12:32 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:12:32.256986 | orchestrator | 2025-06-22 20:12:32 | INFO  | Task bdf0a33e-3cf8-4e8a-908e-c467b0db543b is in state STARTED 2025-06-22 20:12:32.259604 | orchestrator | 2025-06-22 20:12:32 | INFO  | Task b4f7c02c-ce38-41e1-98b8-c3ec8dd85194 is in state STARTED 2025-06-22 20:12:32.262094 | orchestrator | 2025-06-22 20:12:32 | INFO  | Task 4136a3fe-1dd6-4ce5-bf47-592f07e29278 is in state STARTED 2025-06-22 20:12:32.262134 | orchestrator | 2025-06-22 20:12:32 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:12:35.311678 | orchestrator | 2025-06-22 20:12:35 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:12:35.313145 | orchestrator | 2025-06-22 20:12:35 | INFO  | Task bdf0a33e-3cf8-4e8a-908e-c467b0db543b is in state STARTED 2025-06-22 20:12:35.315227 | orchestrator | 2025-06-22 20:12:35 | INFO  | Task b4f7c02c-ce38-41e1-98b8-c3ec8dd85194 is in state STARTED 2025-06-22 20:12:35.317039 | orchestrator | 2025-06-22 20:12:35 | INFO  | Task 4136a3fe-1dd6-4ce5-bf47-592f07e29278 is in state STARTED 2025-06-22 20:12:35.317095 | orchestrator | 2025-06-22 20:12:35 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:12:38.362963 | orchestrator | 2025-06-22 20:12:38 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:12:38.364062 | orchestrator | 2025-06-22 20:12:38 | INFO  | Task bdf0a33e-3cf8-4e8a-908e-c467b0db543b is in state STARTED 2025-06-22 20:12:38.365504 | orchestrator | 2025-06-22 20:12:38 | INFO  | Task b4f7c02c-ce38-41e1-98b8-c3ec8dd85194 is in state STARTED 2025-06-22 20:12:38.367109 | orchestrator | 2025-06-22 20:12:38 | INFO  | Task 4136a3fe-1dd6-4ce5-bf47-592f07e29278 is in state STARTED 2025-06-22 20:12:38.367132 | orchestrator | 2025-06-22 20:12:38 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:12:41.411865 | orchestrator | 2025-06-22 20:12:41 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:12:41.413431 | orchestrator | 2025-06-22 20:12:41 | INFO  | Task bdf0a33e-3cf8-4e8a-908e-c467b0db543b is in state STARTED 2025-06-22 20:12:41.416461 | orchestrator | 2025-06-22 20:12:41 | INFO  | Task b4f7c02c-ce38-41e1-98b8-c3ec8dd85194 is in state STARTED 2025-06-22 20:12:41.418659 | orchestrator | 2025-06-22 20:12:41 | INFO  | Task 4136a3fe-1dd6-4ce5-bf47-592f07e29278 is in state STARTED 2025-06-22 20:12:41.418692 | orchestrator | 2025-06-22 20:12:41 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:12:44.464216 | orchestrator | 2025-06-22 20:12:44 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:12:44.466330 | orchestrator | 2025-06-22 20:12:44 | INFO  | Task bdf0a33e-3cf8-4e8a-908e-c467b0db543b is in state 
STARTED 2025-06-22 20:12:44.468496 | orchestrator | 2025-06-22 20:12:44 | INFO  | Task b4f7c02c-ce38-41e1-98b8-c3ec8dd85194 is in state STARTED 2025-06-22 20:12:44.470351 | orchestrator | 2025-06-22 20:12:44 | INFO  | Task 4136a3fe-1dd6-4ce5-bf47-592f07e29278 is in state STARTED 2025-06-22 20:12:44.470457 | orchestrator | 2025-06-22 20:12:44 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:12:47.513607 | orchestrator | 2025-06-22 20:12:47 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:12:47.513778 | orchestrator | 2025-06-22 20:12:47 | INFO  | Task bdf0a33e-3cf8-4e8a-908e-c467b0db543b is in state STARTED 2025-06-22 20:12:47.513874 | orchestrator | 2025-06-22 20:12:47 | INFO  | Task b4f7c02c-ce38-41e1-98b8-c3ec8dd85194 is in state STARTED 2025-06-22 20:12:47.516600 | orchestrator | 2025-06-22 20:12:47 | INFO  | Task 4136a3fe-1dd6-4ce5-bf47-592f07e29278 is in state STARTED 2025-06-22 20:12:47.516627 | orchestrator | 2025-06-22 20:12:47 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:12:50.557635 | orchestrator | 2025-06-22 20:12:50 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:12:50.560375 | orchestrator | 2025-06-22 20:12:50 | INFO  | Task bdf0a33e-3cf8-4e8a-908e-c467b0db543b is in state STARTED 2025-06-22 20:12:50.562678 | orchestrator | 2025-06-22 20:12:50 | INFO  | Task b4f7c02c-ce38-41e1-98b8-c3ec8dd85194 is in state STARTED 2025-06-22 20:12:50.563382 | orchestrator | 2025-06-22 20:12:50 | INFO  | Task 4136a3fe-1dd6-4ce5-bf47-592f07e29278 is in state STARTED 2025-06-22 20:12:50.563415 | orchestrator | 2025-06-22 20:12:50 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:12:53.608755 | orchestrator | 2025-06-22 20:12:53 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:12:53.611115 | orchestrator | 2025-06-22 20:12:53 | INFO  | Task bdf0a33e-3cf8-4e8a-908e-c467b0db543b is in state STARTED 2025-06-22 20:12:53.613443 | orchestrator | 2025-06-22 20:12:53 | INFO  | Task b4f7c02c-ce38-41e1-98b8-c3ec8dd85194 is in state STARTED 2025-06-22 20:12:53.615323 | orchestrator | 2025-06-22 20:12:53 | INFO  | Task 4136a3fe-1dd6-4ce5-bf47-592f07e29278 is in state STARTED 2025-06-22 20:12:53.615685 | orchestrator | 2025-06-22 20:12:53 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:12:56.658712 | orchestrator | 2025-06-22 20:12:56 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:12:56.660367 | orchestrator | 2025-06-22 20:12:56 | INFO  | Task bdf0a33e-3cf8-4e8a-908e-c467b0db543b is in state STARTED 2025-06-22 20:12:56.662145 | orchestrator | 2025-06-22 20:12:56 | INFO  | Task b4f7c02c-ce38-41e1-98b8-c3ec8dd85194 is in state STARTED 2025-06-22 20:12:56.666167 | orchestrator | 2025-06-22 20:12:56 | INFO  | Task 4136a3fe-1dd6-4ce5-bf47-592f07e29278 is in state SUCCESS 2025-06-22 20:12:56.668146 | orchestrator | 2025-06-22 20:12:56.668198 | orchestrator | 2025-06-22 20:12:56.668212 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-22 20:12:56.668224 | orchestrator | 2025-06-22 20:12:56.668236 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-22 20:12:56.668248 | orchestrator | Sunday 22 June 2025 20:09:45 +0000 (0:00:00.321) 0:00:00.321 *********** 2025-06-22 20:12:56.668259 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:12:56.668284 | orchestrator | ok: [testbed-node-1] 2025-06-22 
20:12:56.668296 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:12:56.668307 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:12:56.668318 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:12:56.668329 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:12:56.668340 | orchestrator | 2025-06-22 20:12:56.668351 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-22 20:12:56.668363 | orchestrator | Sunday 22 June 2025 20:09:45 +0000 (0:00:00.743) 0:00:01.064 *********** 2025-06-22 20:12:56.668374 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2025-06-22 20:12:56.668386 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2025-06-22 20:12:56.668398 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2025-06-22 20:12:56.668409 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True) 2025-06-22 20:12:56.668420 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True) 2025-06-22 20:12:56.668432 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True) 2025-06-22 20:12:56.668443 | orchestrator | 2025-06-22 20:12:56.668454 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2025-06-22 20:12:56.668465 | orchestrator | 2025-06-22 20:12:56.668476 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-06-22 20:12:56.668488 | orchestrator | Sunday 22 June 2025 20:09:46 +0000 (0:00:00.667) 0:00:01.731 *********** 2025-06-22 20:12:56.668500 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 20:12:56.668514 | orchestrator | 2025-06-22 20:12:56.668525 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2025-06-22 20:12:56.668536 | orchestrator | Sunday 22 June 2025 20:09:48 +0000 (0:00:01.630) 0:00:03.362 *********** 2025-06-22 20:12:56.668549 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2025-06-22 20:12:56.668560 | orchestrator | 2025-06-22 20:12:56.668571 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2025-06-22 20:12:56.668582 | orchestrator | Sunday 22 June 2025 20:09:52 +0000 (0:00:03.776) 0:00:07.138 *********** 2025-06-22 20:12:56.668593 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2025-06-22 20:12:56.668634 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2025-06-22 20:12:56.668646 | orchestrator | 2025-06-22 20:12:56.668658 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2025-06-22 20:12:56.668671 | orchestrator | Sunday 22 June 2025 20:09:58 +0000 (0:00:06.564) 0:00:13.703 *********** 2025-06-22 20:12:56.668683 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-22 20:12:56.668696 | orchestrator | 2025-06-22 20:12:56.668708 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2025-06-22 20:12:56.668720 | orchestrator | Sunday 22 June 2025 20:10:02 +0000 (0:00:03.524) 0:00:17.227 *********** 2025-06-22 20:12:56.668733 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-22 20:12:56.668745 | orchestrator | changed: 
[testbed-node-0] => (item=cinder -> service) 2025-06-22 20:12:56.668757 | orchestrator | 2025-06-22 20:12:56.668770 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2025-06-22 20:12:56.668782 | orchestrator | Sunday 22 June 2025 20:10:06 +0000 (0:00:04.152) 0:00:21.380 *********** 2025-06-22 20:12:56.668794 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-22 20:12:56.668806 | orchestrator | 2025-06-22 20:12:56.669090 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2025-06-22 20:12:56.669105 | orchestrator | Sunday 22 June 2025 20:10:09 +0000 (0:00:03.459) 0:00:24.840 *********** 2025-06-22 20:12:56.669117 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2025-06-22 20:12:56.669128 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2025-06-22 20:12:56.669139 | orchestrator | 2025-06-22 20:12:56.669150 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2025-06-22 20:12:56.669162 | orchestrator | Sunday 22 June 2025 20:10:17 +0000 (0:00:07.677) 0:00:32.518 *********** 2025-06-22 20:12:56.669214 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-22 20:12:56.669232 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-22 20:12:56.669244 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-22 20:12:56.669267 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-22 20:12:56.669280 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 20:12:56.669294 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 20:12:56.669315 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 20:12:56.669327 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 
'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-22 20:12:56.669346 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-22 20:12:56.669359 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-22 20:12:56.669370 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-22 20:12:56.669422 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-22 20:12:56.669435 | orchestrator | 2025-06-22 20:12:56.669446 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-06-22 20:12:56.669458 | orchestrator | Sunday 22 June 2025 20:10:20 +0000 (0:00:02.951) 0:00:35.469 *********** 2025-06-22 20:12:56.669469 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:12:56.669511 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:12:56.669525 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:12:56.669536 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:12:56.669547 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:12:56.669558 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:12:56.669577 | orchestrator | 2025-06-22 20:12:56.669588 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-06-22 20:12:56.669599 | orchestrator | Sunday 22 June 2025 20:10:21 +0000 (0:00:01.149) 0:00:36.619 *********** 2025-06-22 20:12:56.669611 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:12:56.669622 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:12:56.669633 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:12:56.669644 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 20:12:56.669655 | orchestrator | 2025-06-22 20:12:56.669667 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2025-06-22 20:12:56.669708 | orchestrator | Sunday 22 June 2025 20:10:22 +0000 (0:00:01.508) 0:00:38.127 *********** 2025-06-22 20:12:56.669719 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume) 2025-06-22 20:12:56.669731 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume) 2025-06-22 20:12:56.669742 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume) 2025-06-22 20:12:56.669753 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup) 2025-06-22 20:12:56.669764 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup) 2025-06-22 20:12:56.669775 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup) 2025-06-22 20:12:56.669786 | orchestrator | 2025-06-22 20:12:56.669797 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2025-06-22 20:12:56.669808 | orchestrator | Sunday 22 June 2025 20:10:25 +0000 (0:00:02.948) 0:00:41.076 *********** 2025-06-22 20:12:56.669821 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-06-22 20:12:56.669834 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-06-22 20:12:56.669854 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-06-22 20:12:56.669874 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-06-22 20:12:56.669887 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  
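A note on the loop results above: the "Copying over multiple ceph.conf" task iterates over pairs of (cinder service, Ceph backend), with a single backend {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True} configured in this testbed, and only the Ceph-consuming services (cinder-volume, cinder-backup) come back "changed" while cinder-api and cinder-scheduler are skipped. The following is a minimal Python sketch of that filtering, built only from the item structure visible in the log; the CEPH_SERVICES set and the rendering condition are assumptions for illustration, not the actual kolla-ansible role logic.

    from itertools import product

    # Ceph backends exactly as they appear in the loop items above (single RBD backend).
    ceph_backends = [{"name": "rbd-1", "cluster": "ceph", "enabled": True}]

    # Reduced view of the cinder service map from the log (volumes/healthcheck omitted).
    services = {
        "cinder-api": {"group": "cinder-api", "enabled": True},
        "cinder-scheduler": {"group": "cinder-scheduler", "enabled": True},
        "cinder-volume": {"group": "cinder-volume", "enabled": True},
        "cinder-backup": {"group": "cinder-backup", "enabled": True},
    }

    # Assumption: only volume/backup services receive a per-backend ceph.conf.
    CEPH_SERVICES = {"cinder-volume", "cinder-backup"}

    for (name, svc), backend in product(services.items(), ceph_backends):
        if svc["enabled"] and backend["enabled"] and name in CEPH_SERVICES:
            print(f"render /etc/kolla/{name}/{backend['cluster']}.conf")  # -> changed
        else:
            print(f"skip {name} for backend {backend['name']}")           # -> skipping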
2025-06-22 20:12:56.669899 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-06-22 20:12:56.669929 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-06-22 20:12:56.669948 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-06-22 20:12:56.669967 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 
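The empty strings visible in the 'volumes' and 'tmpfs' lists of these container definitions (and the empty 'dimensions' dict) are presumably placeholders left by conditional template entries that are not enabled in this testbed; such empties would need to be dropped before the definition is handed to the container runtime. A small, assumed Python sketch of that cleanup, using an abridged copy of the cinder_volume definition printed above; the prune helper is illustrative, not taken from the deployment code.

    # Container definition as printed in the loop item above (volumes abridged).
    cinder_volume = {
        "container_name": "cinder_volume",
        "image": "registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530",
        "tmpfs": [""],
        "volumes": [
            "/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro",
            "/dev/:/dev/",
            "",                        # placeholder from a disabled conditional mount
            "kolla_logs:/var/log/kolla/",
            "",
        ],
    }

    def prune(entries):
        """Drop empty placeholder entries before passing the list to the runtime."""
        return [e for e in entries if e]

    cinder_volume["volumes"] = prune(cinder_volume["volumes"])
    cinder_volume["tmpfs"] = prune(cinder_volume["tmpfs"])
    print(cinder_volume["volumes"])    # no empty strings remain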
2025-06-22 20:12:56.669980 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-06-22 20:12:56.669993 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-06-22 20:12:56.670005 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-06-22 20:12:56.670068 | orchestrator | 2025-06-22 20:12:56.670084 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2025-06-22 20:12:56.670096 | orchestrator | Sunday 22 June 2025 20:10:32 +0000 (0:00:06.280) 0:00:47.357 *********** 2025-06-22 20:12:56.670107 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-06-22 20:12:56.670136 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-06-22 20:12:56.670148 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-06-22 20:12:56.670159 | orchestrator | 2025-06-22 20:12:56.670170 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2025-06-22 20:12:56.670181 | orchestrator | Sunday 22 June 2025 20:10:35 +0000 (0:00:03.297) 0:00:50.655 *********** 2025-06-22 20:12:56.670200 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring) 2025-06-22 20:12:56.670212 | 
orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring) 2025-06-22 20:12:56.670223 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder.keyring) 2025-06-22 20:12:56.670234 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring) 2025-06-22 20:12:56.670245 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring) 2025-06-22 20:12:56.670256 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring) 2025-06-22 20:12:56.670267 | orchestrator | 2025-06-22 20:12:56.670278 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2025-06-22 20:12:56.670289 | orchestrator | Sunday 22 June 2025 20:10:38 +0000 (0:00:03.237) 0:00:53.892 *********** 2025-06-22 20:12:56.670300 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume) 2025-06-22 20:12:56.670311 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume) 2025-06-22 20:12:56.670322 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume) 2025-06-22 20:12:56.670333 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup) 2025-06-22 20:12:56.670344 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup) 2025-06-22 20:12:56.670355 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup) 2025-06-22 20:12:56.670366 | orchestrator | 2025-06-22 20:12:56.670377 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2025-06-22 20:12:56.670388 | orchestrator | Sunday 22 June 2025 20:10:39 +0000 (0:00:01.069) 0:00:54.962 *********** 2025-06-22 20:12:56.670399 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:12:56.670411 | orchestrator | 2025-06-22 20:12:56.670422 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2025-06-22 20:12:56.670433 | orchestrator | Sunday 22 June 2025 20:10:39 +0000 (0:00:00.155) 0:00:55.118 *********** 2025-06-22 20:12:56.670444 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:12:56.670455 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:12:56.670465 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:12:56.670476 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:12:56.670487 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:12:56.670498 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:12:56.670509 | orchestrator | 2025-06-22 20:12:56.670520 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-06-22 20:12:56.670531 | orchestrator | Sunday 22 June 2025 20:10:40 +0000 (0:00:00.679) 0:00:55.797 *********** 2025-06-22 20:12:56.670543 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 20:12:56.670556 | orchestrator | 2025-06-22 20:12:56.670567 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2025-06-22 20:12:56.670578 | orchestrator | Sunday 22 June 2025 20:10:41 +0000 (0:00:01.104) 0:00:56.902 *********** 2025-06-22 20:12:56.670590 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-22 20:12:56.670610 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-22 20:12:56.670632 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-22 20:12:56.670645 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-22 20:12:56.670657 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-22 20:12:56.670676 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-22 20:12:56.670688 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 20:12:56.670717 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 20:12:56.670730 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 20:12:56.670741 | orchestrator | 
changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-22 20:12:56.670753 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-22 20:12:56.670772 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-22 20:12:56.670783 | orchestrator | 2025-06-22 20:12:56.670794 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2025-06-22 20:12:56.670806 | orchestrator | Sunday 22 June 2025 20:10:44 +0000 (0:00:03.122) 0:01:00.024 *********** 2025-06-22 20:12:56.670824 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-22 20:12:56.670836 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 
'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-22 20:12:56.670848 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 20:12:56.670860 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 20:12:56.670878 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:12:56.670890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-22 20:12:56.670902 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 20:12:56.670972 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:12:56.670984 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:12:56.671003 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-22 20:12:56.671015 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-22 20:12:56.671027 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:12:56.671039 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-22 20:12:56.671057 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-22 20:12:56.671069 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:12:56.671081 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-22 20:12:56.671102 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-22 20:12:56.671113 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:12:56.671125 | orchestrator | 2025-06-22 20:12:56.671136 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2025-06-22 20:12:56.671148 | orchestrator | Sunday 22 June 2025 20:10:47 +0000 (0:00:02.291) 0:01:02.316 *********** 2025-06-22 20:12:56.671159 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-22 20:12:56.671178 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 20:12:56.671188 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:12:56.671198 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-22 20:12:56.671209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 20:12:56.671219 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:12:56.671236 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-22 20:12:56.671246 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 20:12:56.671257 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:12:56.671273 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-22 20:12:56.671283 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-22 20:12:56.671294 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:12:56.671304 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-22 20:12:56.671320 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-22 
20:12:56.671330 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:12:56.671341 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-22 20:12:56.671357 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-22 20:12:56.671368 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:12:56.671378 | orchestrator | 2025-06-22 20:12:56.671388 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2025-06-22 20:12:56.671398 | orchestrator | Sunday 22 June 2025 20:10:49 +0000 (0:00:02.022) 0:01:04.338 *********** 2025-06-22 20:12:56.671408 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-22 20:12:56.671419 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-22 20:12:56.671436 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-22 20:12:56.671456 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-22 20:12:56.671467 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-22 20:12:56.671478 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-22 20:12:56.671488 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 20:12:56.671505 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 20:12:56.671516 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 20:12:56.671533 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-22 20:12:56.671544 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-22 20:12:56.671554 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-22 20:12:56.671564 | orchestrator | 2025-06-22 20:12:56.671574 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2025-06-22 20:12:56.671584 | orchestrator | Sunday 22 June 2025 20:10:52 +0000 (0:00:03.379) 0:01:07.718 *********** 2025-06-22 20:12:56.671594 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-06-22 20:12:56.671604 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-06-22 20:12:56.671614 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-06-22 20:12:56.671624 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:12:56.671635 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-06-22 20:12:56.671645 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:12:56.671655 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-06-22 20:12:56.671665 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:12:56.671680 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-06-22 20:12:56.671690 | orchestrator | 2025-06-22 20:12:56.671700 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2025-06-22 20:12:56.671716 | orchestrator | Sunday 22 June 2025 20:10:55 +0000 (0:00:02.695) 0:01:10.414 *********** 2025-06-22 20:12:56.671727 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-22 20:12:56.671737 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-22 20:12:56.671748 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-22 20:12:56.671759 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-22 20:12:56.671776 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-22 20:12:56.671793 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-22 20:12:56.671804 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 20:12:56.671814 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 20:12:56.671825 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 20:12:56.671835 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-22 20:12:56.671857 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-22 20:12:56.671868 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-22 20:12:56.671878 | orchestrator | 2025-06-22 20:12:56.671889 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2025-06-22 20:12:56.671899 | orchestrator | Sunday 22 June 2025 20:11:05 +0000 (0:00:09.889) 0:01:20.304 *********** 2025-06-22 20:12:56.671925 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:12:56.671936 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:12:56.671946 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:12:56.671956 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:12:56.671966 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:12:56.671975 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:12:56.671985 | orchestrator | 2025-06-22 20:12:56.671995 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2025-06-22 20:12:56.672005 | orchestrator | Sunday 22 June 2025 20:11:09 +0000 (0:00:04.327) 0:01:24.631 *********** 2025-06-22 20:12:56.672015 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 
'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-22 20:12:56.672026 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 20:12:56.672036 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:12:56.672059 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-22 20:12:56.672071 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 20:12:56.672081 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:12:56.672092 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-22 20:12:56.672103 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-22 20:12:56.672114 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-22 20:12:56.672133 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-22 20:12:56.672144 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:12:56.672154 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:12:56.672170 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-06-22 20:12:56.672181 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 
'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 20:12:56.672192 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:12:56.672202 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-06-22 20:12:56.672213 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-06-22 20:12:56.672229 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:12:56.672239 | orchestrator | 2025-06-22 20:12:56.672249 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2025-06-22 20:12:56.672259 | orchestrator | Sunday 22 June 2025 20:11:11 +0000 (0:00:01.942) 0:01:26.573 *********** 2025-06-22 20:12:56.672269 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:12:56.672279 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:12:56.672289 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:12:56.672298 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:12:56.672308 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:12:56.672318 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:12:56.672327 | orchestrator | 2025-06-22 20:12:56.672337 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2025-06-22 20:12:56.672347 | orchestrator | Sunday 22 June 2025 20:11:12 +0000 (0:00:00.919) 0:01:27.492 *********** 2025-06-22 20:12:56.672364 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 
'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-22 20:12:56.672375 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-22 20:12:56.672386 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-22 20:12:56.672397 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-22 20:12:56.672413 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 
'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-06-22 20:12:56.672430 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-06-22 20:12:56.672441 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 20:12:56.672451 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 20:12:56.672462 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-22 20:12:56.672482 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-22 20:12:56.672498 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-06-22 20:12:56.672509 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 20:12:56.672519 | orchestrator | 2025-06-22 20:12:56.672529 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-06-22 20:12:56.672539 | orchestrator | Sunday 22 June 2025 20:11:15 +0000 (0:00:02.817) 0:01:30.310 *********** 2025-06-22 20:12:56.672550 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:12:56.672560 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:12:56.672569 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:12:56.672579 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:12:56.672589 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:12:56.672598 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:12:56.672608 | orchestrator | 2025-06-22 20:12:56.672618 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2025-06-22 20:12:56.672628 | orchestrator | Sunday 22 June 2025 20:11:16 +0000 (0:00:00.937) 0:01:31.247 *********** 2025-06-22 20:12:56.672638 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:12:56.672647 | orchestrator | 2025-06-22 20:12:56.672657 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2025-06-22 
20:12:56.672667 | orchestrator | Sunday 22 June 2025 20:11:18 +0000 (0:00:01.922) 0:01:33.170 *********** 2025-06-22 20:12:56.672677 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:12:56.672686 | orchestrator | 2025-06-22 20:12:56.672696 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2025-06-22 20:12:56.672706 | orchestrator | Sunday 22 June 2025 20:11:20 +0000 (0:00:02.051) 0:01:35.222 *********** 2025-06-22 20:12:56.672716 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:12:56.672732 | orchestrator | 2025-06-22 20:12:56.672742 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-06-22 20:12:56.672771 | orchestrator | Sunday 22 June 2025 20:11:39 +0000 (0:00:19.130) 0:01:54.353 *********** 2025-06-22 20:12:56.672788 | orchestrator | 2025-06-22 20:12:56.672808 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-06-22 20:12:56.672832 | orchestrator | Sunday 22 June 2025 20:11:39 +0000 (0:00:00.066) 0:01:54.420 *********** 2025-06-22 20:12:56.672849 | orchestrator | 2025-06-22 20:12:56.672864 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-06-22 20:12:56.672879 | orchestrator | Sunday 22 June 2025 20:11:39 +0000 (0:00:00.064) 0:01:54.484 *********** 2025-06-22 20:12:56.672895 | orchestrator | 2025-06-22 20:12:56.672973 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-06-22 20:12:56.672992 | orchestrator | Sunday 22 June 2025 20:11:39 +0000 (0:00:00.068) 0:01:54.553 *********** 2025-06-22 20:12:56.673006 | orchestrator | 2025-06-22 20:12:56.673016 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-06-22 20:12:56.673026 | orchestrator | Sunday 22 June 2025 20:11:39 +0000 (0:00:00.064) 0:01:54.617 *********** 2025-06-22 20:12:56.673035 | orchestrator | 2025-06-22 20:12:56.673045 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-06-22 20:12:56.673055 | orchestrator | Sunday 22 June 2025 20:11:39 +0000 (0:00:00.081) 0:01:54.699 *********** 2025-06-22 20:12:56.673065 | orchestrator | 2025-06-22 20:12:56.673074 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2025-06-22 20:12:56.673084 | orchestrator | Sunday 22 June 2025 20:11:39 +0000 (0:00:00.061) 0:01:54.760 *********** 2025-06-22 20:12:56.673094 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:12:56.673103 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:12:56.673113 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:12:56.673123 | orchestrator | 2025-06-22 20:12:56.673132 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2025-06-22 20:12:56.673142 | orchestrator | Sunday 22 June 2025 20:12:02 +0000 (0:00:22.628) 0:02:17.388 *********** 2025-06-22 20:12:56.673152 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:12:56.673162 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:12:56.673172 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:12:56.673181 | orchestrator | 2025-06-22 20:12:56.673191 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2025-06-22 20:12:56.673201 | orchestrator | Sunday 22 June 2025 20:12:13 +0000 (0:00:11.515) 0:02:28.904 *********** 2025-06-22 20:12:56.673210 | 
orchestrator | changed: [testbed-node-3] 2025-06-22 20:12:56.673220 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:12:56.673229 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:12:56.673239 | orchestrator | 2025-06-22 20:12:56.673249 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2025-06-22 20:12:56.673259 | orchestrator | Sunday 22 June 2025 20:12:50 +0000 (0:00:36.463) 0:03:05.367 *********** 2025-06-22 20:12:56.673269 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:12:56.673279 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:12:56.673289 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:12:56.673298 | orchestrator | 2025-06-22 20:12:56.673308 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2025-06-22 20:12:56.673319 | orchestrator | Sunday 22 June 2025 20:12:55 +0000 (0:00:05.393) 0:03:10.760 *********** 2025-06-22 20:12:56.673329 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:12:56.673338 | orchestrator | 2025-06-22 20:12:56.673349 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 20:12:56.673434 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-06-22 20:12:56.673448 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-06-22 20:12:56.673468 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-06-22 20:12:56.673478 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-06-22 20:12:56.673488 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-06-22 20:12:56.673498 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-06-22 20:12:56.673506 | orchestrator | 2025-06-22 20:12:56.673514 | orchestrator | 2025-06-22 20:12:56.673522 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 20:12:56.673530 | orchestrator | Sunday 22 June 2025 20:12:56 +0000 (0:00:00.594) 0:03:11.355 *********** 2025-06-22 20:12:56.673538 | orchestrator | =============================================================================== 2025-06-22 20:12:56.673546 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 36.46s 2025-06-22 20:12:56.673554 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 22.63s 2025-06-22 20:12:56.673562 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 19.13s 2025-06-22 20:12:56.673570 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 11.52s 2025-06-22 20:12:56.673578 | orchestrator | cinder : Copying over cinder.conf --------------------------------------- 9.89s 2025-06-22 20:12:56.673586 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 7.68s 2025-06-22 20:12:56.673593 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.56s 2025-06-22 20:12:56.673602 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 6.28s 2025-06-22 20:12:56.673609 | orchestrator | cinder : Restart cinder-backup container 
-------------------------------- 5.39s 2025-06-22 20:12:56.673617 | orchestrator | cinder : Generating 'hostnqn' file for cinder_volume -------------------- 4.33s 2025-06-22 20:12:56.673625 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.15s 2025-06-22 20:12:56.673633 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.78s 2025-06-22 20:12:56.673641 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.52s 2025-06-22 20:12:56.673649 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.46s 2025-06-22 20:12:56.673657 | orchestrator | cinder : Copying over config.json files for services -------------------- 3.38s 2025-06-22 20:12:56.673665 | orchestrator | cinder : Copy over Ceph keyring files for cinder-volume ----------------- 3.30s 2025-06-22 20:12:56.673672 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.24s 2025-06-22 20:12:56.673680 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 3.12s 2025-06-22 20:12:56.673688 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 2.95s 2025-06-22 20:12:56.673696 | orchestrator | cinder : Ensuring cinder service ceph config subdirs exists ------------- 2.95s 2025-06-22 20:12:56.673704 | orchestrator | 2025-06-22 20:12:56 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:12:59.717620 | orchestrator | 2025-06-22 20:12:59 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:12:59.718300 | orchestrator | 2025-06-22 20:12:59 | INFO  | Task bdf0a33e-3cf8-4e8a-908e-c467b0db543b is in state STARTED 2025-06-22 20:12:59.719243 | orchestrator | 2025-06-22 20:12:59 | INFO  | Task b4f7c02c-ce38-41e1-98b8-c3ec8dd85194 is in state STARTED 2025-06-22 20:12:59.720209 | orchestrator | 2025-06-22 20:12:59 | INFO  | Task 37d1cf51-3dc9-45cc-a41b-c6b2b5845911 is in state STARTED 2025-06-22 20:12:59.720266 | orchestrator | 2025-06-22 20:12:59 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:13:02.793706 | orchestrator | 2025-06-22 20:13:02 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:13:02.796768 | orchestrator | 2025-06-22 20:13:02 | INFO  | Task bdf0a33e-3cf8-4e8a-908e-c467b0db543b is in state STARTED 2025-06-22 20:13:02.798144 | orchestrator | 2025-06-22 20:13:02 | INFO  | Task b4f7c02c-ce38-41e1-98b8-c3ec8dd85194 is in state STARTED 2025-06-22 20:13:02.800543 | orchestrator | 2025-06-22 20:13:02 | INFO  | Task 37d1cf51-3dc9-45cc-a41b-c6b2b5845911 is in state STARTED 2025-06-22 20:13:02.800592 | orchestrator | 2025-06-22 20:13:02 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:13:05.848271 | orchestrator | 2025-06-22 20:13:05 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:13:05.849127 | orchestrator | 2025-06-22 20:13:05 | INFO  | Task bdf0a33e-3cf8-4e8a-908e-c467b0db543b is in state STARTED 2025-06-22 20:13:05.850542 | orchestrator | 2025-06-22 20:13:05 | INFO  | Task b4f7c02c-ce38-41e1-98b8-c3ec8dd85194 is in state STARTED 2025-06-22 20:13:05.851373 | orchestrator | 2025-06-22 20:13:05 | INFO  | Task 37d1cf51-3dc9-45cc-a41b-c6b2b5845911 is in state STARTED 2025-06-22 20:13:05.851397 | orchestrator | 2025-06-22 20:13:05 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:13:08.901363 | orchestrator | 2025-06-22 20:13:08 | INFO  
| Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:13:08.903441 | orchestrator | 2025-06-22 20:13:08 | INFO  | Task bdf0a33e-3cf8-4e8a-908e-c467b0db543b is in state STARTED 2025-06-22 20:13:08.905519 | orchestrator | 2025-06-22 20:13:08 | INFO  | Task b4f7c02c-ce38-41e1-98b8-c3ec8dd85194 is in state STARTED 2025-06-22 20:13:08.907338 | orchestrator | 2025-06-22 20:13:08 | INFO  | Task 37d1cf51-3dc9-45cc-a41b-c6b2b5845911 is in state STARTED 2025-06-22 20:13:08.907573 | orchestrator | 2025-06-22 20:13:08 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:13:11.943116 | orchestrator | 2025-06-22 20:13:11 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:13:11.944017 | orchestrator | 2025-06-22 20:13:11 | INFO  | Task bdf0a33e-3cf8-4e8a-908e-c467b0db543b is in state STARTED 2025-06-22 20:13:11.945320 | orchestrator | 2025-06-22 20:13:11 | INFO  | Task b4f7c02c-ce38-41e1-98b8-c3ec8dd85194 is in state STARTED 2025-06-22 20:13:11.948351 | orchestrator | 2025-06-22 20:13:11 | INFO  | Task 37d1cf51-3dc9-45cc-a41b-c6b2b5845911 is in state STARTED 2025-06-22 20:13:11.949695 | orchestrator | 2025-06-22 20:13:11 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:13:14.993601 | orchestrator | 2025-06-22 20:13:14 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:13:14.994798 | orchestrator | 2025-06-22 20:13:14 | INFO  | Task bdf0a33e-3cf8-4e8a-908e-c467b0db543b is in state STARTED 2025-06-22 20:13:14.994832 | orchestrator | 2025-06-22 20:13:14 | INFO  | Task b4f7c02c-ce38-41e1-98b8-c3ec8dd85194 is in state STARTED 2025-06-22 20:13:14.994844 | orchestrator | 2025-06-22 20:13:14 | INFO  | Task 37d1cf51-3dc9-45cc-a41b-c6b2b5845911 is in state STARTED 2025-06-22 20:13:14.994856 | orchestrator | 2025-06-22 20:13:14 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:13:18.044803 | orchestrator | 2025-06-22 20:13:18 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:13:18.047544 | orchestrator | 2025-06-22 20:13:18 | INFO  | Task bdf0a33e-3cf8-4e8a-908e-c467b0db543b is in state STARTED 2025-06-22 20:13:18.050562 | orchestrator | 2025-06-22 20:13:18.050651 | orchestrator | 2025-06-22 20:13:18.050674 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-22 20:13:18.050692 | orchestrator | 2025-06-22 20:13:18.050707 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-22 20:13:18.050722 | orchestrator | Sunday 22 June 2025 20:12:25 +0000 (0:00:00.260) 0:00:00.260 *********** 2025-06-22 20:13:18.050737 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:13:18.050753 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:13:18.050768 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:13:18.050783 | orchestrator | 2025-06-22 20:13:18.050797 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-22 20:13:18.050813 | orchestrator | Sunday 22 June 2025 20:12:25 +0000 (0:00:00.287) 0:00:00.547 *********** 2025-06-22 20:13:18.050827 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2025-06-22 20:13:18.050842 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2025-06-22 20:13:18.050856 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2025-06-22 20:13:18.050872 | orchestrator | 2025-06-22 20:13:18.050887 | orchestrator | PLAY [Apply role octavia] 
****************************************************** 2025-06-22 20:13:18.050941 | orchestrator | 2025-06-22 20:13:18.050956 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-06-22 20:13:18.050970 | orchestrator | Sunday 22 June 2025 20:12:26 +0000 (0:00:00.438) 0:00:00.985 *********** 2025-06-22 20:13:18.050984 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:13:18.051000 | orchestrator | 2025-06-22 20:13:18.051014 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2025-06-22 20:13:18.051029 | orchestrator | Sunday 22 June 2025 20:12:26 +0000 (0:00:00.560) 0:00:01.546 *********** 2025-06-22 20:13:18.051045 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2025-06-22 20:13:18.051060 | orchestrator | 2025-06-22 20:13:18.051074 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2025-06-22 20:13:18.051089 | orchestrator | Sunday 22 June 2025 20:12:30 +0000 (0:00:03.356) 0:00:04.903 *********** 2025-06-22 20:13:18.051103 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2025-06-22 20:13:18.051118 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2025-06-22 20:13:18.051133 | orchestrator | 2025-06-22 20:13:18.051148 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2025-06-22 20:13:18.051163 | orchestrator | Sunday 22 June 2025 20:12:36 +0000 (0:00:06.657) 0:00:11.560 *********** 2025-06-22 20:13:18.051180 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-22 20:13:18.051195 | orchestrator | 2025-06-22 20:13:18.051211 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2025-06-22 20:13:18.051227 | orchestrator | Sunday 22 June 2025 20:12:40 +0000 (0:00:03.253) 0:00:14.814 *********** 2025-06-22 20:13:18.051243 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-22 20:13:18.051258 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-06-22 20:13:18.051274 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-06-22 20:13:18.051290 | orchestrator | 2025-06-22 20:13:18.051306 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2025-06-22 20:13:18.051321 | orchestrator | Sunday 22 June 2025 20:12:47 +0000 (0:00:07.124) 0:00:21.939 *********** 2025-06-22 20:13:18.051337 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-22 20:13:18.051351 | orchestrator | 2025-06-22 20:13:18.051365 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2025-06-22 20:13:18.051380 | orchestrator | Sunday 22 June 2025 20:12:50 +0000 (0:00:03.064) 0:00:25.003 *********** 2025-06-22 20:13:18.051396 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2025-06-22 20:13:18.051410 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2025-06-22 20:13:18.051443 | orchestrator | 2025-06-22 20:13:18.051460 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2025-06-22 20:13:18.051475 | orchestrator | Sunday 22 June 2025 20:12:57 +0000 (0:00:07.140) 0:00:32.143 *********** 2025-06-22 
20:13:18.051490 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2025-06-22 20:13:18.051506 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2025-06-22 20:13:18.051521 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2025-06-22 20:13:18.051535 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2025-06-22 20:13:18.051550 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2025-06-22 20:13:18.051565 | orchestrator | 2025-06-22 20:13:18.051580 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-06-22 20:13:18.051594 | orchestrator | Sunday 22 June 2025 20:13:10 +0000 (0:00:13.532) 0:00:45.676 *********** 2025-06-22 20:13:18.051608 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:13:18.051622 | orchestrator | 2025-06-22 20:13:18.051636 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2025-06-22 20:13:18.051650 | orchestrator | Sunday 22 June 2025 20:13:11 +0000 (0:00:00.574) 0:00:46.250 *********** 2025-06-22 20:13:18.051666 | orchestrator | An exception occurred during task execution. To see the full traceback, use -vvv. The error was: keystoneauth1.exceptions.catalog.EndpointNotFound: internal endpoint for compute service in RegionOne region not found 2025-06-22 20:13:18.051718 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"action": "os_nova_flavor", "changed": false, "module_stderr": "Traceback (most recent call last):\n File \"/tmp/ansible-tmp-1750623192.7225506-6441-157951059002329/AnsiballZ_compute_flavor.py\", line 107, in <module>\n _ansiballz_main()\n File \"/tmp/ansible-tmp-1750623192.7225506-6441-157951059002329/AnsiballZ_compute_flavor.py\", line 99, in _ansiballz_main\n invoke_module(zipped_mod, temp_path, ANSIBALLZ_PARAMS)\n File \"/tmp/ansible-tmp-1750623192.7225506-6441-157951059002329/AnsiballZ_compute_flavor.py\", line 47, in invoke_module\n runpy.run_module(mod_name='ansible_collections.openstack.cloud.plugins.modules.compute_flavor', init_globals=dict(_module_fqn='ansible_collections.openstack.cloud.plugins.modules.compute_flavor', _modlib_path=modlib_path),\n File \"<frozen runpy>\", line 226, in run_module\n File \"<frozen runpy>\", line 98, in _run_module_code\n File \"<frozen runpy>\", line 88, in _run_code\n File \"/tmp/ansible_os_nova_flavor_payload_4f4rhfm1/ansible_os_nova_flavor_payload.zip/ansible_collections/openstack/cloud/plugins/modules/compute_flavor.py\", line 367, in <module>\n File \"/tmp/ansible_os_nova_flavor_payload_4f4rhfm1/ansible_os_nova_flavor_payload.zip/ansible_collections/openstack/cloud/plugins/modules/compute_flavor.py\", line 363, in main\n File \"/tmp/ansible_os_nova_flavor_payload_4f4rhfm1/ansible_os_nova_flavor_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", line 417, in __call__\n File \"/tmp/ansible_os_nova_flavor_payload_4f4rhfm1/ansible_os_nova_flavor_payload.zip/ansible_collections/openstack/cloud/plugins/modules/compute_flavor.py\", line 220, in run\n File \"/opt/ansible/lib/python3.11/site-packages/openstack/service_description.py\", line 88, in __get__\n proxy = self._make_proxy(instance)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.11/site-packages/openstack/service_description.py\", line 286, in _make_proxy\n found_version = temp_adapter.get_api_major_version()\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File
\"/opt/ansible/lib/python3.11/site-packages/keystoneauth1/adapter.py\", line 352, in get_api_major_version\n return self.session.get_api_major_version(auth or self.auth, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.11/site-packages/keystoneauth1/session.py\", line 1289, in get_api_major_version\n return auth.get_api_major_version(self, **kwargs)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.11/site-packages/keystoneauth1/identity/base.py\", line 497, in get_api_major_version\n data = get_endpoint_data(discover_versions=discover_versions)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.11/site-packages/keystoneauth1/identity/base.py\", line 272, in get_endpoint_data\n endpoint_data = service_catalog.endpoint_data_for(\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/opt/ansible/lib/python3.11/site-packages/keystoneauth1/access/service_catalog.py\", line 459, in endpoint_data_for\n raise exceptions.EndpointNotFound(msg)\nkeystoneauth1.exceptions.catalog.EndpointNotFound: internal endpoint for compute service in RegionOne region not found\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1} 2025-06-22 20:13:18.051754 | orchestrator | 2025-06-22 20:13:18.051769 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 20:13:18.051786 | orchestrator | testbed-node-0 : ok=11  changed=5  unreachable=0 failed=1  skipped=0 rescued=0 ignored=0 2025-06-22 20:13:18.051801 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 20:13:18.051818 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 20:13:18.051834 | orchestrator | 2025-06-22 20:13:18.051849 | orchestrator | 2025-06-22 20:13:18.051864 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 20:13:18.051880 | orchestrator | Sunday 22 June 2025 20:13:14 +0000 (0:00:03.203) 0:00:49.453 *********** 2025-06-22 20:13:18.051934 | orchestrator | =============================================================================== 2025-06-22 20:13:18.051951 | orchestrator | octavia : Adding octavia related roles --------------------------------- 13.53s 2025-06-22 20:13:18.051966 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.14s 2025-06-22 20:13:18.051981 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 7.12s 2025-06-22 20:13:18.051997 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.66s 2025-06-22 20:13:18.052011 | orchestrator | service-ks-register : octavia | Creating services ----------------------- 3.36s 2025-06-22 20:13:18.052026 | orchestrator | service-ks-register : octavia | Creating projects ----------------------- 3.25s 2025-06-22 20:13:18.052040 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 3.20s 2025-06-22 20:13:18.052055 | orchestrator | service-ks-register : octavia | Creating roles -------------------------- 3.06s 2025-06-22 20:13:18.052070 | orchestrator | octavia : include_tasks ------------------------------------------------- 0.57s 2025-06-22 20:13:18.052084 | orchestrator | octavia : include_tasks ------------------------------------------------- 0.56s 
2025-06-22 20:13:18.052099 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.44s 2025-06-22 20:13:18.052113 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.29s 2025-06-22 20:13:18.052127 | orchestrator | 2025-06-22 20:13:18 | INFO  | Task b4f7c02c-ce38-41e1-98b8-c3ec8dd85194 is in state SUCCESS 2025-06-22 20:13:18.052545 | orchestrator | 2025-06-22 20:13:18 | INFO  | Task 37d1cf51-3dc9-45cc-a41b-c6b2b5845911 is in state STARTED 2025-06-22 20:13:18.052578 | orchestrator | 2025-06-22 20:13:18 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:13:21.103184 | orchestrator | 2025-06-22 20:13:21 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:13:21.103993 | orchestrator | 2025-06-22 20:13:21 | INFO  | Task bdf0a33e-3cf8-4e8a-908e-c467b0db543b is in state STARTED 2025-06-22 20:13:21.105129 | orchestrator | 2025-06-22 20:13:21 | INFO  | Task 37d1cf51-3dc9-45cc-a41b-c6b2b5845911 is in state STARTED 2025-06-22 20:13:21.105154 | orchestrator | 2025-06-22 20:13:21 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:13:24.143307 | orchestrator | 2025-06-22 20:13:24 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:13:24.143534 | orchestrator | 2025-06-22 20:13:24 | INFO  | Task bdf0a33e-3cf8-4e8a-908e-c467b0db543b is in state STARTED 2025-06-22 20:13:24.144332 | orchestrator | 2025-06-22 20:13:24 | INFO  | Task 37d1cf51-3dc9-45cc-a41b-c6b2b5845911 is in state STARTED 2025-06-22 20:13:24.144360 | orchestrator | 2025-06-22 20:13:24 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:13:27.182582 | orchestrator | 2025-06-22 20:13:27 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:13:27.184236 | orchestrator | 2025-06-22 20:13:27 | INFO  | Task bdf0a33e-3cf8-4e8a-908e-c467b0db543b is in state STARTED 2025-06-22 20:13:27.185791 | orchestrator | 2025-06-22 20:13:27 | INFO  | Task 37d1cf51-3dc9-45cc-a41b-c6b2b5845911 is in state STARTED 2025-06-22 20:13:27.185839 | orchestrator | 2025-06-22 20:13:27 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:13:30.234772 | orchestrator | 2025-06-22 20:13:30 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:13:30.235394 | orchestrator | 2025-06-22 20:13:30 | INFO  | Task bdf0a33e-3cf8-4e8a-908e-c467b0db543b is in state STARTED 2025-06-22 20:13:30.237121 | orchestrator | 2025-06-22 20:13:30 | INFO  | Task 37d1cf51-3dc9-45cc-a41b-c6b2b5845911 is in state STARTED 2025-06-22 20:13:30.237546 | orchestrator | 2025-06-22 20:13:30 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:13:33.273039 | orchestrator | 2025-06-22 20:13:33 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:13:33.275162 | orchestrator | 2025-06-22 20:13:33 | INFO  | Task bdf0a33e-3cf8-4e8a-908e-c467b0db543b is in state STARTED 2025-06-22 20:13:33.276798 | orchestrator | 2025-06-22 20:13:33 | INFO  | Task 37d1cf51-3dc9-45cc-a41b-c6b2b5845911 is in state STARTED 2025-06-22 20:13:33.276828 | orchestrator | 2025-06-22 20:13:33 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:13:36.323874 | orchestrator | 2025-06-22 20:13:36 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:13:36.325400 | orchestrator | 2025-06-22 20:13:36 | INFO  | Task bdf0a33e-3cf8-4e8a-908e-c467b0db543b is in state STARTED 2025-06-22 20:13:36.326702 | orchestrator | 
2025-06-22 20:13:36 | INFO  | Task 37d1cf51-3dc9-45cc-a41b-c6b2b5845911 is in state STARTED 2025-06-22 20:13:36.326743 | orchestrator | 2025-06-22 20:13:36 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:13:39.377745 | orchestrator | 2025-06-22 20:13:39 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:13:39.379370 | orchestrator | 2025-06-22 20:13:39 | INFO  | Task bdf0a33e-3cf8-4e8a-908e-c467b0db543b is in state STARTED 2025-06-22 20:13:39.380991 | orchestrator | 2025-06-22 20:13:39 | INFO  | Task 37d1cf51-3dc9-45cc-a41b-c6b2b5845911 is in state STARTED 2025-06-22 20:13:39.381026 | orchestrator | 2025-06-22 20:13:39 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:13:42.412538 | orchestrator | 2025-06-22 20:13:42 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:13:42.413574 | orchestrator | 2025-06-22 20:13:42 | INFO  | Task bdf0a33e-3cf8-4e8a-908e-c467b0db543b is in state STARTED 2025-06-22 20:13:42.416090 | orchestrator | 2025-06-22 20:13:42 | INFO  | Task 37d1cf51-3dc9-45cc-a41b-c6b2b5845911 is in state STARTED 2025-06-22 20:13:42.416202 | orchestrator | 2025-06-22 20:13:42 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:13:45.460521 | orchestrator | 2025-06-22 20:13:45 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:13:45.461935 | orchestrator | 2025-06-22 20:13:45 | INFO  | Task bdf0a33e-3cf8-4e8a-908e-c467b0db543b is in state STARTED 2025-06-22 20:13:45.463522 | orchestrator | 2025-06-22 20:13:45 | INFO  | Task 37d1cf51-3dc9-45cc-a41b-c6b2b5845911 is in state STARTED 2025-06-22 20:13:45.463549 | orchestrator | 2025-06-22 20:13:45 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:13:48.513253 | orchestrator | 2025-06-22 20:13:48 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:13:48.513713 | orchestrator | 2025-06-22 20:13:48 | INFO  | Task bdf0a33e-3cf8-4e8a-908e-c467b0db543b is in state STARTED 2025-06-22 20:13:48.515431 | orchestrator | 2025-06-22 20:13:48 | INFO  | Task 37d1cf51-3dc9-45cc-a41b-c6b2b5845911 is in state STARTED 2025-06-22 20:13:48.515469 | orchestrator | 2025-06-22 20:13:48 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:13:51.561617 | orchestrator | 2025-06-22 20:13:51 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:13:51.562803 | orchestrator | 2025-06-22 20:13:51 | INFO  | Task bdf0a33e-3cf8-4e8a-908e-c467b0db543b is in state STARTED 2025-06-22 20:13:51.565477 | orchestrator | 2025-06-22 20:13:51 | INFO  | Task 37d1cf51-3dc9-45cc-a41b-c6b2b5845911 is in state STARTED 2025-06-22 20:13:51.565503 | orchestrator | 2025-06-22 20:13:51 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:13:54.621205 | orchestrator | 2025-06-22 20:13:54 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:13:54.622129 | orchestrator | 2025-06-22 20:13:54 | INFO  | Task bdf0a33e-3cf8-4e8a-908e-c467b0db543b is in state STARTED 2025-06-22 20:13:54.622684 | orchestrator | 2025-06-22 20:13:54 | INFO  | Task 37d1cf51-3dc9-45cc-a41b-c6b2b5845911 is in state STARTED 2025-06-22 20:13:54.622708 | orchestrator | 2025-06-22 20:13:54 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:13:57.663629 | orchestrator | 2025-06-22 20:13:57 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:13:57.665859 | orchestrator | 2025-06-22 20:13:57 | INFO  | Task 
bdf0a33e-3cf8-4e8a-908e-c467b0db543b is in state STARTED 2025-06-22 20:13:57.668034 | orchestrator | 2025-06-22 20:13:57 | INFO  | Task 37d1cf51-3dc9-45cc-a41b-c6b2b5845911 is in state STARTED 2025-06-22 20:13:57.668077 | orchestrator | 2025-06-22 20:13:57 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:14:00.709828 | orchestrator | 2025-06-22 20:14:00 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:14:00.711638 | orchestrator | 2025-06-22 20:14:00 | INFO  | Task bdf0a33e-3cf8-4e8a-908e-c467b0db543b is in state STARTED 2025-06-22 20:14:00.714060 | orchestrator | 2025-06-22 20:14:00 | INFO  | Task 37d1cf51-3dc9-45cc-a41b-c6b2b5845911 is in state STARTED 2025-06-22 20:14:00.714364 | orchestrator | 2025-06-22 20:14:00 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:14:03.768426 | orchestrator | 2025-06-22 20:14:03 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:14:03.773827 | orchestrator | 2025-06-22 20:14:03 | INFO  | Task bdf0a33e-3cf8-4e8a-908e-c467b0db543b is in state STARTED 2025-06-22 20:14:03.776679 | orchestrator | 2025-06-22 20:14:03 | INFO  | Task 37d1cf51-3dc9-45cc-a41b-c6b2b5845911 is in state STARTED 2025-06-22 20:14:03.776756 | orchestrator | 2025-06-22 20:14:03 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:14:06.826236 | orchestrator | 2025-06-22 20:14:06 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:14:06.830939 | orchestrator | 2025-06-22 20:14:06 | INFO  | Task bdf0a33e-3cf8-4e8a-908e-c467b0db543b is in state STARTED 2025-06-22 20:14:06.833194 | orchestrator | 2025-06-22 20:14:06 | INFO  | Task 37d1cf51-3dc9-45cc-a41b-c6b2b5845911 is in state STARTED 2025-06-22 20:14:06.833281 | orchestrator | 2025-06-22 20:14:06 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:14:09.884827 | orchestrator | 2025-06-22 20:14:09 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:14:09.887054 | orchestrator | 2025-06-22 20:14:09 | INFO  | Task bdf0a33e-3cf8-4e8a-908e-c467b0db543b is in state STARTED 2025-06-22 20:14:09.888520 | orchestrator | 2025-06-22 20:14:09 | INFO  | Task 37d1cf51-3dc9-45cc-a41b-c6b2b5845911 is in state STARTED 2025-06-22 20:14:09.888556 | orchestrator | 2025-06-22 20:14:09 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:14:12.935730 | orchestrator | 2025-06-22 20:14:12 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:14:12.937224 | orchestrator | 2025-06-22 20:14:12 | INFO  | Task bdf0a33e-3cf8-4e8a-908e-c467b0db543b is in state STARTED 2025-06-22 20:14:12.941898 | orchestrator | 2025-06-22 20:14:12 | INFO  | Task 37d1cf51-3dc9-45cc-a41b-c6b2b5845911 is in state STARTED 2025-06-22 20:14:12.941968 | orchestrator | 2025-06-22 20:14:12 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:14:15.995907 | orchestrator | 2025-06-22 20:14:15 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:14:15.999297 | orchestrator | 2025-06-22 20:14:15 | INFO  | Task bdf0a33e-3cf8-4e8a-908e-c467b0db543b is in state STARTED 2025-06-22 20:14:16.000521 | orchestrator | 2025-06-22 20:14:15 | INFO  | Task 37d1cf51-3dc9-45cc-a41b-c6b2b5845911 is in state STARTED 2025-06-22 20:14:16.000705 | orchestrator | 2025-06-22 20:14:15 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:14:19.049310 | orchestrator | 2025-06-22 20:14:19 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state 
STARTED 2025-06-22 20:14:19.051455 | orchestrator | 2025-06-22 20:14:19 | INFO  | Task bdf0a33e-3cf8-4e8a-908e-c467b0db543b is in state STARTED 2025-06-22 20:14:19.052499 | orchestrator | 2025-06-22 20:14:19 | INFO  | Task 37d1cf51-3dc9-45cc-a41b-c6b2b5845911 is in state STARTED 2025-06-22 20:14:19.052527 | orchestrator | 2025-06-22 20:14:19 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:14:22.097626 | orchestrator | 2025-06-22 20:14:22 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:14:22.098585 | orchestrator | 2025-06-22 20:14:22 | INFO  | Task bdf0a33e-3cf8-4e8a-908e-c467b0db543b is in state STARTED 2025-06-22 20:14:22.101026 | orchestrator | 2025-06-22 20:14:22 | INFO  | Task 37d1cf51-3dc9-45cc-a41b-c6b2b5845911 is in state STARTED 2025-06-22 20:14:22.101195 | orchestrator | 2025-06-22 20:14:22 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:14:25.141389 | orchestrator | 2025-06-22 20:14:25 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:14:25.143201 | orchestrator | 2025-06-22 20:14:25 | INFO  | Task bdf0a33e-3cf8-4e8a-908e-c467b0db543b is in state STARTED 2025-06-22 20:14:25.144943 | orchestrator | 2025-06-22 20:14:25 | INFO  | Task 37d1cf51-3dc9-45cc-a41b-c6b2b5845911 is in state STARTED 2025-06-22 20:14:25.145027 | orchestrator | 2025-06-22 20:14:25 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:14:28.200395 | orchestrator | 2025-06-22 20:14:28 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:14:28.202141 | orchestrator | 2025-06-22 20:14:28 | INFO  | Task bdf0a33e-3cf8-4e8a-908e-c467b0db543b is in state STARTED 2025-06-22 20:14:28.204407 | orchestrator | 2025-06-22 20:14:28 | INFO  | Task 37d1cf51-3dc9-45cc-a41b-c6b2b5845911 is in state STARTED 2025-06-22 20:14:28.204517 | orchestrator | 2025-06-22 20:14:28 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:14:31.253124 | orchestrator | 2025-06-22 20:14:31 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:14:31.256956 | orchestrator | 2025-06-22 20:14:31 | INFO  | Task bdf0a33e-3cf8-4e8a-908e-c467b0db543b is in state STARTED 2025-06-22 20:14:31.260152 | orchestrator | 2025-06-22 20:14:31 | INFO  | Task 37d1cf51-3dc9-45cc-a41b-c6b2b5845911 is in state STARTED 2025-06-22 20:14:31.260206 | orchestrator | 2025-06-22 20:14:31 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:14:34.302356 | orchestrator | 2025-06-22 20:14:34 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:14:34.303336 | orchestrator | 2025-06-22 20:14:34 | INFO  | Task bdf0a33e-3cf8-4e8a-908e-c467b0db543b is in state SUCCESS 2025-06-22 20:14:34.305084 | orchestrator | 2025-06-22 20:14:34 | INFO  | Task 37d1cf51-3dc9-45cc-a41b-c6b2b5845911 is in state STARTED 2025-06-22 20:14:34.305168 | orchestrator | 2025-06-22 20:14:34 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:14:37.357930 | orchestrator | 2025-06-22 20:14:37 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:14:37.359169 | orchestrator | 2025-06-22 20:14:37 | INFO  | Task 37d1cf51-3dc9-45cc-a41b-c6b2b5845911 is in state STARTED 2025-06-22 20:14:37.359499 | orchestrator | 2025-06-22 20:14:37 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:14:40.406130 | orchestrator | 2025-06-22 20:14:40 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:14:40.407474 | orchestrator 
| 2025-06-22 20:14:40 | INFO  | Task 37d1cf51-3dc9-45cc-a41b-c6b2b5845911 is in state STARTED 2025-06-22 20:14:40.407528 | orchestrator | 2025-06-22 20:14:40 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:14:43.452229 | orchestrator | 2025-06-22 20:14:43 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:14:43.454280 | orchestrator | 2025-06-22 20:14:43 | INFO  | Task 37d1cf51-3dc9-45cc-a41b-c6b2b5845911 is in state STARTED 2025-06-22 20:14:43.454393 | orchestrator | 2025-06-22 20:14:43 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:14:46.505310 | orchestrator | 2025-06-22 20:14:46 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:14:46.506869 | orchestrator | 2025-06-22 20:14:46 | INFO  | Task 37d1cf51-3dc9-45cc-a41b-c6b2b5845911 is in state STARTED 2025-06-22 20:14:46.506927 | orchestrator | 2025-06-22 20:14:46 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:14:49.570393 | orchestrator | 2025-06-22 20:14:49 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:14:49.572656 | orchestrator | 2025-06-22 20:14:49 | INFO  | Task 37d1cf51-3dc9-45cc-a41b-c6b2b5845911 is in state STARTED 2025-06-22 20:14:49.573006 | orchestrator | 2025-06-22 20:14:49 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:14:52.619454 | orchestrator | 2025-06-22 20:14:52 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:14:52.619566 | orchestrator | 2025-06-22 20:14:52 | INFO  | Task 37d1cf51-3dc9-45cc-a41b-c6b2b5845911 is in state STARTED 2025-06-22 20:14:52.619576 | orchestrator | 2025-06-22 20:14:52 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:14:55.661020 | orchestrator | 2025-06-22 20:14:55 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:14:55.661493 | orchestrator | 2025-06-22 20:14:55 | INFO  | Task 37d1cf51-3dc9-45cc-a41b-c6b2b5845911 is in state STARTED 2025-06-22 20:14:55.661512 | orchestrator | 2025-06-22 20:14:55 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:14:58.732457 | orchestrator | 2025-06-22 20:14:58 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:14:58.734074 | orchestrator | 2025-06-22 20:14:58 | INFO  | Task 37d1cf51-3dc9-45cc-a41b-c6b2b5845911 is in state STARTED 2025-06-22 20:14:58.734245 | orchestrator | 2025-06-22 20:14:58 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:15:01.787656 | orchestrator | 2025-06-22 20:15:01 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:15:01.789345 | orchestrator | 2025-06-22 20:15:01 | INFO  | Task 37d1cf51-3dc9-45cc-a41b-c6b2b5845911 is in state STARTED 2025-06-22 20:15:01.789392 | orchestrator | 2025-06-22 20:15:01 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:15:04.846715 | orchestrator | 2025-06-22 20:15:04 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:15:04.848919 | orchestrator | 2025-06-22 20:15:04 | INFO  | Task 37d1cf51-3dc9-45cc-a41b-c6b2b5845911 is in state STARTED 2025-06-22 20:15:04.848961 | orchestrator | 2025-06-22 20:15:04 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:15:07.890000 | orchestrator | 2025-06-22 20:15:07 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:15:07.895916 | orchestrator | 2025-06-22 20:15:07.895985 | orchestrator | 2025-06-22 20:15:07.896000 | orchestrator | PLAY 
[Group hosts based on configuration] ************************************** 2025-06-22 20:15:07.896013 | orchestrator | 2025-06-22 20:15:07.896025 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-22 20:15:07.896037 | orchestrator | Sunday 22 June 2025 20:12:09 +0000 (0:00:00.169) 0:00:00.170 *********** 2025-06-22 20:15:07.896048 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:15:07.896061 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:15:07.896072 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:15:07.896083 | orchestrator | 2025-06-22 20:15:07.896095 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-22 20:15:07.896106 | orchestrator | Sunday 22 June 2025 20:12:10 +0000 (0:00:00.293) 0:00:00.463 *********** 2025-06-22 20:15:07.896117 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True) 2025-06-22 20:15:07.896230 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True) 2025-06-22 20:15:07.896247 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True) 2025-06-22 20:15:07.896258 | orchestrator | 2025-06-22 20:15:07.896270 | orchestrator | PLAY [Wait for the Nova service] *********************************************** 2025-06-22 20:15:07.896281 | orchestrator | 2025-06-22 20:15:07.896292 | orchestrator | TASK [Waiting for Nova public port to be UP] *********************************** 2025-06-22 20:15:07.896303 | orchestrator | Sunday 22 June 2025 20:12:10 +0000 (0:00:00.538) 0:00:01.001 *********** 2025-06-22 20:15:07.896314 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:15:07.896326 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:15:07.896337 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:15:07.896348 | orchestrator | 2025-06-22 20:15:07.896359 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 20:15:07.896371 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 20:15:07.896408 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 20:15:07.896420 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 20:15:07.896431 | orchestrator | 2025-06-22 20:15:07.896442 | orchestrator | 2025-06-22 20:15:07.896561 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 20:15:07.896574 | orchestrator | Sunday 22 June 2025 20:14:33 +0000 (0:02:22.738) 0:02:23.740 *********** 2025-06-22 20:15:07.896587 | orchestrator | =============================================================================== 2025-06-22 20:15:07.896599 | orchestrator | Waiting for Nova public port to be UP --------------------------------- 142.74s 2025-06-22 20:15:07.896612 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.54s 2025-06-22 20:15:07.896625 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.29s 2025-06-22 20:15:07.896637 | orchestrator | 2025-06-22 20:15:07.896649 | orchestrator | 2025-06-22 20:15:07.896662 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-22 20:15:07.896674 | orchestrator | 2025-06-22 20:15:07.896687 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-22 
20:15:07.896700 | orchestrator | Sunday 22 June 2025 20:13:00 +0000 (0:00:00.272) 0:00:00.272 *********** 2025-06-22 20:15:07.896712 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:15:07.896724 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:15:07.896737 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:15:07.896749 | orchestrator | 2025-06-22 20:15:07.896762 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-22 20:15:07.896775 | orchestrator | Sunday 22 June 2025 20:13:00 +0000 (0:00:00.302) 0:00:00.575 *********** 2025-06-22 20:15:07.896788 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2025-06-22 20:15:07.896801 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2025-06-22 20:15:07.896910 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2025-06-22 20:15:07.896929 | orchestrator | 2025-06-22 20:15:07.896940 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2025-06-22 20:15:07.896983 | orchestrator | 2025-06-22 20:15:07.896994 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-06-22 20:15:07.897006 | orchestrator | Sunday 22 June 2025 20:13:01 +0000 (0:00:00.390) 0:00:00.966 *********** 2025-06-22 20:15:07.897017 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:15:07.897028 | orchestrator | 2025-06-22 20:15:07.897039 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2025-06-22 20:15:07.897050 | orchestrator | Sunday 22 June 2025 20:13:01 +0000 (0:00:00.506) 0:00:01.472 *********** 2025-06-22 20:15:07.897065 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-22 20:15:07.897098 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-22 20:15:07.897123 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-22 20:15:07.897135 | orchestrator | 2025-06-22 20:15:07.897147 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2025-06-22 20:15:07.897158 | orchestrator | Sunday 22 June 2025 20:13:02 +0000 (0:00:00.705) 0:00:02.177 *********** 2025-06-22 20:15:07.897170 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2025-06-22 20:15:07.897181 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2025-06-22 20:15:07.897192 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-22 20:15:07.897203 | orchestrator | 2025-06-22 20:15:07.897214 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-06-22 20:15:07.897225 | orchestrator | Sunday 22 June 2025 20:13:03 +0000 (0:00:00.807) 0:00:02.985 *********** 2025-06-22 20:15:07.897236 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:15:07.897247 | orchestrator | 2025-06-22 20:15:07.897258 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2025-06-22 20:15:07.897269 | orchestrator | Sunday 22 June 2025 20:13:04 +0000 (0:00:00.656) 0:00:03.642 *********** 2025-06-22 20:15:07.897281 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-22 20:15:07.897293 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-22 20:15:07.897305 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-22 20:15:07.897325 | orchestrator | 2025-06-22 20:15:07.897342 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2025-06-22 20:15:07.897353 | orchestrator | Sunday 22 June 2025 20:13:05 +0000 (0:00:01.331) 0:00:04.974 *********** 2025-06-22 20:15:07.897365 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-22 20:15:07.897377 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-22 20:15:07.897389 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:15:07.897400 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:15:07.897412 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-22 20:15:07.897423 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:15:07.897434 | orchestrator | 2025-06-22 20:15:07.897445 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2025-06-22 20:15:07.897456 | orchestrator | Sunday 22 June 2025 20:13:05 +0000 (0:00:00.373) 0:00:05.347 *********** 2025-06-22 20:15:07.897468 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-22 20:15:07.897480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-22 20:15:07.897498 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:15:07.897509 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:15:07.897528 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-06-22 20:15:07.897540 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:15:07.897551 | orchestrator | 2025-06-22 20:15:07.897562 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2025-06-22 20:15:07.897574 | orchestrator | Sunday 22 June 2025 20:13:06 +0000 (0:00:00.776) 0:00:06.124 *********** 2025-06-22 20:15:07.897585 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-22 20:15:07.897597 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-22 20:15:07.897609 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-22 20:15:07.897621 | orchestrator | 2025-06-22 20:15:07.897631 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2025-06-22 20:15:07.897643 | orchestrator | Sunday 22 June 2025 20:13:07 +0000 (0:00:01.263) 0:00:07.387 *********** 2025-06-22 20:15:07.897654 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-22 20:15:07.897679 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-22 20:15:07.897691 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-22 20:15:07.897702 | orchestrator | 2025-06-22 20:15:07.897713 | orchestrator | TASK [grafana : Copying over extra configuration file] 
************************* 2025-06-22 20:15:07.897724 | orchestrator | Sunday 22 June 2025 20:13:09 +0000 (0:00:01.234) 0:00:08.621 *********** 2025-06-22 20:15:07.897736 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:15:07.897746 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:15:07.897757 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:15:07.897768 | orchestrator | 2025-06-22 20:15:07.897779 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2025-06-22 20:15:07.897790 | orchestrator | Sunday 22 June 2025 20:13:09 +0000 (0:00:00.463) 0:00:09.085 *********** 2025-06-22 20:15:07.897801 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-06-22 20:15:07.897812 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-06-22 20:15:07.897863 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-06-22 20:15:07.897875 | orchestrator | 2025-06-22 20:15:07.897886 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2025-06-22 20:15:07.897897 | orchestrator | Sunday 22 June 2025 20:13:10 +0000 (0:00:01.247) 0:00:10.332 *********** 2025-06-22 20:15:07.897908 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-06-22 20:15:07.897919 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-06-22 20:15:07.897930 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-06-22 20:15:07.897941 | orchestrator | 2025-06-22 20:15:07.897952 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2025-06-22 20:15:07.897963 | orchestrator | Sunday 22 June 2025 20:13:11 +0000 (0:00:01.111) 0:00:11.444 *********** 2025-06-22 20:15:07.897981 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-22 20:15:07.897992 | orchestrator | 2025-06-22 20:15:07.898003 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2025-06-22 20:15:07.898014 | orchestrator | Sunday 22 June 2025 20:13:12 +0000 (0:00:00.763) 0:00:12.207 *********** 2025-06-22 20:15:07.898077 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2025-06-22 20:15:07.898088 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2025-06-22 20:15:07.898099 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:15:07.898111 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:15:07.898122 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:15:07.898132 | orchestrator | 2025-06-22 20:15:07.898143 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2025-06-22 20:15:07.898155 | orchestrator | Sunday 22 June 2025 20:13:13 +0000 (0:00:00.732) 0:00:12.939 *********** 2025-06-22 20:15:07.898165 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:15:07.898176 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:15:07.898187 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:15:07.898198 | orchestrator | 2025-06-22 20:15:07.898209 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2025-06-22 
20:15:07.898220 | orchestrator | Sunday 22 June 2025 20:13:13 +0000 (0:00:00.504) 0:00:13.443 *********** 2025-06-22 20:15:07.898233 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1088797, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3219454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.898255 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1088797, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3219454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.898268 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1088797, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3219454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.898280 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1088791, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3169453, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.898299 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1088791, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3169453, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.898311 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1088791, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3169453, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.898323 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1088787, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3139453, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.898341 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1088787, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3139453, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.898353 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1088787, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3139453, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.898364 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1088795, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3179455, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.898376 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1088795, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3179455, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.898394 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1088795, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3179455, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.898406 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1088778, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3079453, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.898418 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1088778, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3079453, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.898436 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1088778, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3079453, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.898448 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1088788, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 
1748870577.0, 'ctime': 1750620312.3149452, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.898460 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1088788, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3149452, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.898485 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1088788, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3149452, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.898497 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1088794, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3179455, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.898508 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1088794, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3179455, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.898527 | orchestrator | 2025-06-22 20:15:07 | INFO  | Task 37d1cf51-3dc9-45cc-a41b-c6b2b5845911 is in state SUCCESS 2025-06-22 20:15:07.898759 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1088794, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 
1748870577.0, 'ctime': 1750620312.3179455, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.898890 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1088775, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3059452, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.898934 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1088775, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3059452, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.898947 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1088775, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3059452, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.898959 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1088769, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3029451, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.898972 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1088769, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3029451, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 
'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.899001 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1088769, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3029451, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.899013 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1088781, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3089452, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.899025 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1088781, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3089452, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.899043 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1088781, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3089452, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.899055 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1088772, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3049452, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.899067 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1088772, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3049452, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.899078 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1088772, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3049452, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.899099 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1088793, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3179455, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.899111 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1088793, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3179455, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.899129 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1088793, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3179455, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.899141 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1088784, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3109453, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.899153 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1088784, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3109453, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.899165 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1088784, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3109453, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.899185 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1088796, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3189454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.899197 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1088796, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3189454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.899214 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1088796, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 
1748870577.0, 'ctime': 1750620312.3189454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.899225 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1088774, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3059452, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.899237 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1088774, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3059452, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.899248 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1088774, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3059452, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.899260 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1088789, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3159454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.899277 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1088789, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3159454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.899298 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1088789, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3159454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.899311 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1088770, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.303945, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.899325 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1088770, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.303945, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.899338 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1088770, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.303945, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.899351 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1088773, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3049452, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.899371 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1088773, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3049452, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.899391 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1088773, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3049452, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.899404 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1088786, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3129454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.899417 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1088786, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3129454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.899431 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1088786, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3129454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.899444 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1088821, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3399458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.899463 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1088821, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3399458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.899483 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1088821, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3399458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.899496 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1088817, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3339455, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.899509 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1088817, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3339455, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.899522 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1088817, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3339455, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.899535 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1088801, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3229454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.899555 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1088801, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3229454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.899590 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1088801, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3229454, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.899601 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1088828, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3439457, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.899613 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1088828, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3439457, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.899625 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1088828, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3439457, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.899636 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1088802, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3239455, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.899653 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1088802, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3239455, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.899671 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1088802, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3239455, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.899682 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 
1088826, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3419456, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.899694 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1088826, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3419456, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.899706 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1088826, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3419456, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.899717 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1088829, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.344946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.899729 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1088829, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.344946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.899753 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1088829, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 
1748870577.0, 'ctime': 1750620312.344946, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.899765 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1088822, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3409457, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.899777 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1088822, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3409457, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.899788 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1088822, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3409457, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.899800 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1088825, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3419456, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.899811 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1088825, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 
'ctime': 1750620312.3419456, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.899873 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1088825, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3419456, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.899887 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1088804, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3259456, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.899898 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1088804, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3259456, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.899910 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1088804, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3259456, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.899921 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1088818, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3349457, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.899933 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1088818, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3349457, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.899959 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1088818, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3349457, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.899971 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1088830, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3459458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.899982 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1088830, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3459458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.899993 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1088830, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3459458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.900005 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1088827, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3429458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.900017 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1088827, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3429458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.900040 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1088827, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3429458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.900052 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1088808, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3289456, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.900063 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1088808, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3289456, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.900074 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 
'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1088808, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3289456, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.900086 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1088806, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3259456, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.900097 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1088806, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3259456, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.900120 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1088806, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3259456, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.900132 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1088812, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3289456, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.900143 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1088812, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3289456, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.900154 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1088812, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3289456, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.900166 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1088814, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3329456, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.900177 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1088814, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3329456, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.900195 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1088814, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3329456, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.900213 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 
16098, 'inode': 1088819, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3359456, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.900225 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1088819, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3359456, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.900236 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1088819, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3359456, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.900248 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1088824, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3409457, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.900265 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1088824, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3409457, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.900276 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1088824, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 
'ctime': 1750620312.3409457, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.900294 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1088820, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3359456, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.900306 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1088820, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3359456, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.900317 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1088820, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3359456, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.900329 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1088833, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3459458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.900347 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1088833, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3459458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.900359 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1088833, 'dev': 112, 'nlink': 1, 'atime': 1748870577.0, 'mtime': 1748870577.0, 'ctime': 1750620312.3459458, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-06-22 20:15:07.900371 | orchestrator | 2025-06-22 20:15:07.900383 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2025-06-22 20:15:07.900396 | orchestrator | Sunday 22 June 2025 20:13:50 +0000 (0:00:36.652) 0:00:50.096 *********** 2025-06-22 20:15:07.900413 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-22 20:15:07.900426 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-22 20:15:07.900438 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.1.20250530', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-06-22 20:15:07.900449 | orchestrator | 2025-06-22 20:15:07.900461 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2025-06-22 20:15:07.900472 | orchestrator | Sunday 22 June 2025 20:13:51 +0000 (0:00:01.205) 0:00:51.301 *********** 2025-06-22 20:15:07.900484 | 
orchestrator | changed: [testbed-node-0]
2025-06-22 20:15:07.900495 | orchestrator |
2025-06-22 20:15:07.900512 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ********
2025-06-22 20:15:07.900523 | orchestrator | Sunday 22 June 2025 20:13:53 +0000 (0:00:02.214) 0:00:53.515 ***********
2025-06-22 20:15:07.900534 | orchestrator | changed: [testbed-node-0]
2025-06-22 20:15:07.900545 | orchestrator |
2025-06-22 20:15:07.900556 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-06-22 20:15:07.900567 | orchestrator | Sunday 22 June 2025 20:13:56 +0000 (0:00:00.231) 0:00:55.704 ***********
2025-06-22 20:15:07.900578 | orchestrator |
2025-06-22 20:15:07.900589 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-06-22 20:15:07.900600 | orchestrator | Sunday 22 June 2025 20:13:56 +0000 (0:00:00.062) 0:00:55.936 ***********
2025-06-22 20:15:07.900611 | orchestrator |
2025-06-22 20:15:07.900622 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-06-22 20:15:07.900633 | orchestrator | Sunday 22 June 2025 20:13:56 +0000 (0:00:00.066) 0:00:55.998 ***********
2025-06-22 20:15:07.900644 | orchestrator |
2025-06-22 20:15:07.900655 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ********************
2025-06-22 20:15:07.900666 | orchestrator | Sunday 22 June 2025 20:13:56 +0000 (0:00:00.066) 0:00:56.065 ***********
2025-06-22 20:15:07.900677 | orchestrator | skipping: [testbed-node-1]
2025-06-22 20:15:07.900688 | orchestrator | skipping: [testbed-node-2]
2025-06-22 20:15:07.900699 | orchestrator | changed: [testbed-node-0]
2025-06-22 20:15:07.900709 | orchestrator |
2025-06-22 20:15:07.900720 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] *********
2025-06-22 20:15:07.900731 | orchestrator | Sunday 22 June 2025 20:13:58 +0000 (0:00:01.893) 0:00:57.958 ***********
2025-06-22 20:15:07.900742 | orchestrator | skipping: [testbed-node-1]
2025-06-22 20:15:07.900753 | orchestrator | skipping: [testbed-node-2]
2025-06-22 20:15:07.900764 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left).
2025-06-22 20:15:07.900776 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left).
2025-06-22 20:15:07.900787 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left).
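The handler above keeps retrying until Grafana on the first node answers; the log shows three FAILED - RETRYING attempts out of the 12 configured before the ok result that follows. A minimal sketch of an equivalent readiness probe is shown below; the endpoint URL, retry count and delay are illustrative assumptions, not the values the kolla-ansible handler actually uses.

import ssl
import time
import urllib.error
import urllib.request

def wait_for_grafana(url: str = "https://api-int.testbed.osism.xyz:3000/login",
                     retries: int = 12, delay: int = 10) -> bool:
    # Certificate verification is skipped for brevity; a real probe would
    # trust the testbed CA bundle instead.
    context = ssl._create_unverified_context()
    for attempt in range(1, retries + 1):
        try:
            with urllib.request.urlopen(url, timeout=10, context=context) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            pass  # Grafana is not listening yet
        print(f"FAILED - RETRYING ({retries - attempt} retries left)")
        time.sleep(delay)
    return False

The 38.22s attributed to this handler in the TASKS RECAP further down is essentially this wait for Grafana to come up after the first container restart.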
2025-06-22 20:15:07.900798 | orchestrator | ok: [testbed-node-0]
2025-06-22 20:15:07.900809 | orchestrator |
2025-06-22 20:15:07.900850 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] ***************
2025-06-22 20:15:07.900870 | orchestrator | Sunday 22 June 2025 20:14:36 +0000 (0:00:38.224) 0:01:36.183 ***********
2025-06-22 20:15:07.900890 | orchestrator | skipping: [testbed-node-0]
2025-06-22 20:15:07.900908 | orchestrator | changed: [testbed-node-1]
2025-06-22 20:15:07.900922 | orchestrator | changed: [testbed-node-2]
2025-06-22 20:15:07.900933 | orchestrator |
2025-06-22 20:15:07.900944 | orchestrator | TASK [grafana : Wait for grafana application ready] ****************************
2025-06-22 20:15:07.900955 | orchestrator | Sunday 22 June 2025 20:15:00 +0000 (0:00:23.587) 0:01:59.771 ***********
2025-06-22 20:15:07.900966 | orchestrator | ok: [testbed-node-0]
2025-06-22 20:15:07.900977 | orchestrator |
2025-06-22 20:15:07.900988 | orchestrator | TASK [grafana : Remove old grafana docker volume] ******************************
2025-06-22 20:15:07.901005 | orchestrator | Sunday 22 June 2025 20:15:02 +0000 (0:00:02.422) 0:02:02.193 ***********
2025-06-22 20:15:07.901017 | orchestrator | skipping: [testbed-node-0]
2025-06-22 20:15:07.901028 | orchestrator | skipping: [testbed-node-1]
2025-06-22 20:15:07.901038 | orchestrator | skipping: [testbed-node-2]
2025-06-22 20:15:07.901050 | orchestrator |
2025-06-22 20:15:07.901060 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************
2025-06-22 20:15:07.901071 | orchestrator | Sunday 22 June 2025 20:15:02 +0000 (0:00:00.314) 0:02:02.508 ***********
2025-06-22 20:15:07.901084 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})
2025-06-22 20:15:07.901107 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}})
2025-06-22 20:15:07.901119 | orchestrator |
2025-06-22 20:15:07.901130 | orchestrator | TASK [grafana : Disable Getting Started panel] *********************************
2025-06-22 20:15:07.901141 | orchestrator | Sunday 22 June 2025 20:15:05 +0000 (0:00:02.794) 0:02:05.302 ***********
2025-06-22 20:15:07.901152 | orchestrator | skipping: [testbed-node-0]
2025-06-22 20:15:07.901162 | orchestrator |
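The "Enable grafana datasources" task above registered the OpenSearch datasource shown in the changed item (type grafana-opensearch-datasource, URL https://api-int.testbed.osism.xyz:9200, index pattern flog-*), while the disabled InfluxDB entry was skipped. Grafana accepts the same definition through its HTTP API; the sketch below simply mirrors the dict from the log and assumes placeholder admin credentials and CA path, since the exact mechanism the role uses is not shown here.

import requests

GRAFANA_URL = "https://api.testbed.osism.xyz:3000"  # assumed external endpoint
AUTH = ("admin", "password")                        # placeholder credentials

# Payload mirrors the 'opensearch' item from the task output above.
datasource = {
    "name": "opensearch",
    "type": "grafana-opensearch-datasource",
    "access": "proxy",
    "url": "https://api-int.testbed.osism.xyz:9200",
    "jsonData": {
        "flavor": "OpenSearch",
        "database": "flog-*",
        "version": "2.11.1",
        "timeField": "@timestamp",
        "logLevelField": "log_level",
    },
}

resp = requests.post(f"{GRAFANA_URL}/api/datasources", json=datasource,
                     auth=AUTH, verify="/etc/ssl/certs/ca-certificates.crt")
resp.raise_for_status()
print(resp.json().get("message"))

Re-running the call with the same datasource name fails, which is why provisioning logic of this kind usually checks for an existing datasource first.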
2025-06-22 20:15:07.901173 | orchestrator | PLAY RECAP *********************************************************************
2025-06-22 20:15:07.901184 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-06-22 20:15:07.901196 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-06-22 20:15:07.901207 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-06-22 20:15:07.901218 | orchestrator |
2025-06-22 20:15:07.901228 | orchestrator |
2025-06-22 20:15:07.901239 | orchestrator | TASKS RECAP ********************************************************************
2025-06-22 20:15:07.901250 | orchestrator | Sunday 22 June 2025 20:15:05 +0000 (0:00:00.265) 0:02:05.567 ***********
2025-06-22 20:15:07.901261 | orchestrator | ===============================================================================
2025-06-22 20:15:07.901271 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 38.22s
2025-06-22 20:15:07.901282 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 36.65s
2025-06-22 20:15:07.901293 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 23.59s
2025-06-22 20:15:07.901304 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.79s
2025-06-22 20:15:07.901314 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.42s
2025-06-22 20:15:07.901326 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.21s
2025-06-22 20:15:07.901336 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.19s
2025-06-22 20:15:07.901347 | orchestrator | grafana : Restart first grafana container ------------------------------- 1.89s
2025-06-22 20:15:07.901358 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.33s
2025-06-22 20:15:07.901369 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.26s
2025-06-22 20:15:07.901380 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.25s
2025-06-22 20:15:07.901391 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.23s
2025-06-22 20:15:07.901401 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.21s
2025-06-22 20:15:07.901412 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.11s
2025-06-22 20:15:07.901423 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.81s
2025-06-22 20:15:07.901434 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.78s
2025-06-22 20:15:07.901444 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.76s
2025-06-22 20:15:07.901455 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.73s
2025-06-22 20:15:07.901466 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.71s
2025-06-22 20:15:07.901476 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.66s
2025-06-22 20:15:07.901487 | orchestrator | 2025-06-22 20:15:07 | INFO  | Wait 1 second(s) until the next check
2025-06-22 20:15:10.933596 | orchestrator | 2025-06-22 20:15:10 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED
2025-06-22 20:15:10.933685 | orchestrator | 2025-06-22 20:15:10 | INFO  | Wait 1 second(s) until the next check
2025-06-22 20:15:13.974690 | orchestrator | 2025-06-22 20:15:13 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED
2025-06-22 20:15:13.974790 | orchestrator | 2025-06-22 20:15:13 | INFO  | Wait 1 second(s) until the next check
2025-06-22 20:15:17.029969 | orchestrator | 2025-06-22 20:15:17 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22
20:15:17.030134 | orchestrator | 2025-06-22 20:15:17 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:15:20.067611 | orchestrator | 2025-06-22 20:15:20 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:15:20.067721 | orchestrator | 2025-06-22 20:15:20 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:15:23.096614 | orchestrator | 2025-06-22 20:15:23 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:15:23.096717 | orchestrator | 2025-06-22 20:15:23 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:15:26.151950 | orchestrator | 2025-06-22 20:15:26 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:15:26.152057 | orchestrator | 2025-06-22 20:15:26 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:15:29.203662 | orchestrator | 2025-06-22 20:15:29 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:15:29.203805 | orchestrator | 2025-06-22 20:15:29 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:15:32.251304 | orchestrator | 2025-06-22 20:15:32 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:15:32.251526 | orchestrator | 2025-06-22 20:15:32 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:15:35.286826 | orchestrator | 2025-06-22 20:15:35 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:15:35.286888 | orchestrator | 2025-06-22 20:15:35 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:15:38.324436 | orchestrator | 2025-06-22 20:15:38 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:15:38.324541 | orchestrator | 2025-06-22 20:15:38 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:15:41.371593 | orchestrator | 2025-06-22 20:15:41 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:15:41.371701 | orchestrator | 2025-06-22 20:15:41 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:15:44.409159 | orchestrator | 2025-06-22 20:15:44 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:15:44.409364 | orchestrator | 2025-06-22 20:15:44 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:15:47.458399 | orchestrator | 2025-06-22 20:15:47 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:15:47.458510 | orchestrator | 2025-06-22 20:15:47 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:15:50.499825 | orchestrator | 2025-06-22 20:15:50 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:15:50.499968 | orchestrator | 2025-06-22 20:15:50 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:15:53.539146 | orchestrator | 2025-06-22 20:15:53 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:15:53.539250 | orchestrator | 2025-06-22 20:15:53 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:15:56.584666 | orchestrator | 2025-06-22 20:15:56 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:15:56.584820 | orchestrator | 2025-06-22 20:15:56 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:15:59.629816 | orchestrator | 2025-06-22 20:15:59 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:15:59.629918 | orchestrator | 2025-06-22 20:15:59 | INFO  | Wait 1 second(s) 
until the next check 2025-06-22 20:16:02.668696 | orchestrator | 2025-06-22 20:16:02 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:16:02.668852 | orchestrator | 2025-06-22 20:16:02 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:16:05.709526 | orchestrator | 2025-06-22 20:16:05 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:16:05.709629 | orchestrator | 2025-06-22 20:16:05 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:16:08.752235 | orchestrator | 2025-06-22 20:16:08 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:16:08.754491 | orchestrator | 2025-06-22 20:16:08 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:16:11.803326 | orchestrator | 2025-06-22 20:16:11 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:16:11.803421 | orchestrator | 2025-06-22 20:16:11 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:16:14.846846 | orchestrator | 2025-06-22 20:16:14 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:16:14.846955 | orchestrator | 2025-06-22 20:16:14 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:16:17.891801 | orchestrator | 2025-06-22 20:16:17 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:16:17.891893 | orchestrator | 2025-06-22 20:16:17 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:16:20.933213 | orchestrator | 2025-06-22 20:16:20 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:16:20.933940 | orchestrator | 2025-06-22 20:16:20 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:16:23.991622 | orchestrator | 2025-06-22 20:16:23 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:16:23.991793 | orchestrator | 2025-06-22 20:16:23 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:16:27.042784 | orchestrator | 2025-06-22 20:16:27 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:16:27.042912 | orchestrator | 2025-06-22 20:16:27 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:16:30.083699 | orchestrator | 2025-06-22 20:16:30 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:16:30.083801 | orchestrator | 2025-06-22 20:16:30 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:16:33.132976 | orchestrator | 2025-06-22 20:16:33 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:16:33.134529 | orchestrator | 2025-06-22 20:16:33 | INFO  | Task 3842316a-cc7e-4437-a206-5584dd86e962 is in state STARTED 2025-06-22 20:16:33.134593 | orchestrator | 2025-06-22 20:16:33 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:16:36.181211 | orchestrator | 2025-06-22 20:16:36 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:16:36.182468 | orchestrator | 2025-06-22 20:16:36 | INFO  | Task 3842316a-cc7e-4437-a206-5584dd86e962 is in state STARTED 2025-06-22 20:16:36.183067 | orchestrator | 2025-06-22 20:16:36 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:16:39.235273 | orchestrator | 2025-06-22 20:16:39 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:16:39.236641 | orchestrator | 2025-06-22 20:16:39 | INFO  | Task 3842316a-cc7e-4437-a206-5584dd86e962 is in state STARTED 2025-06-22 
20:16:39.236699 | orchestrator | 2025-06-22 20:16:39 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:16:42.287928 | orchestrator | 2025-06-22 20:16:42 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:16:42.289764 | orchestrator | 2025-06-22 20:16:42 | INFO  | Task 3842316a-cc7e-4437-a206-5584dd86e962 is in state STARTED 2025-06-22 20:16:42.290169 | orchestrator | 2025-06-22 20:16:42 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:16:45.339606 | orchestrator | 2025-06-22 20:16:45 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:16:45.340384 | orchestrator | 2025-06-22 20:16:45 | INFO  | Task 3842316a-cc7e-4437-a206-5584dd86e962 is in state STARTED 2025-06-22 20:16:45.340606 | orchestrator | 2025-06-22 20:16:45 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:16:48.377409 | orchestrator | 2025-06-22 20:16:48 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:16:48.377811 | orchestrator | 2025-06-22 20:16:48 | INFO  | Task 3842316a-cc7e-4437-a206-5584dd86e962 is in state STARTED 2025-06-22 20:16:48.379105 | orchestrator | 2025-06-22 20:16:48 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:16:51.424592 | orchestrator | 2025-06-22 20:16:51 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:16:51.425913 | orchestrator | 2025-06-22 20:16:51 | INFO  | Task 3842316a-cc7e-4437-a206-5584dd86e962 is in state SUCCESS 2025-06-22 20:16:51.426555 | orchestrator | 2025-06-22 20:16:51 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:16:54.472956 | orchestrator | 2025-06-22 20:16:54 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:16:54.473059 | orchestrator | 2025-06-22 20:16:54 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:16:57.520666 | orchestrator | 2025-06-22 20:16:57 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:16:57.520776 | orchestrator | 2025-06-22 20:16:57 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:17:00.568189 | orchestrator | 2025-06-22 20:17:00 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:17:00.568291 | orchestrator | 2025-06-22 20:17:00 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:17:03.607050 | orchestrator | 2025-06-22 20:17:03 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:17:03.607160 | orchestrator | 2025-06-22 20:17:03 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:17:06.652036 | orchestrator | 2025-06-22 20:17:06 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:17:06.652138 | orchestrator | 2025-06-22 20:17:06 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:17:09.701961 | orchestrator | 2025-06-22 20:17:09 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:17:09.702182 | orchestrator | 2025-06-22 20:17:09 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:17:12.751007 | orchestrator | 2025-06-22 20:17:12 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:17:12.751112 | orchestrator | 2025-06-22 20:17:12 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:17:15.798880 | orchestrator | 2025-06-22 20:17:15 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:17:15.798987 | orchestrator | 
2025-06-22 20:17:15 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:17:18.843490 | orchestrator | 2025-06-22 20:17:18 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:17:18.843625 | orchestrator | 2025-06-22 20:17:18 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:17:21.887448 | orchestrator | 2025-06-22 20:17:21 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:17:21.887540 | orchestrator | 2025-06-22 20:17:21 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:17:24.934161 | orchestrator | 2025-06-22 20:17:24 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:17:24.934265 | orchestrator | 2025-06-22 20:17:24 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:17:27.979870 | orchestrator | 2025-06-22 20:17:27 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:17:27.979978 | orchestrator | 2025-06-22 20:17:27 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:17:31.023002 | orchestrator | 2025-06-22 20:17:31 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:17:31.023104 | orchestrator | 2025-06-22 20:17:31 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:17:34.070478 | orchestrator | 2025-06-22 20:17:34 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:17:34.070631 | orchestrator | 2025-06-22 20:17:34 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:17:37.119091 | orchestrator | 2025-06-22 20:17:37 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:17:37.119197 | orchestrator | 2025-06-22 20:17:37 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:17:40.164805 | orchestrator | 2025-06-22 20:17:40 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:17:40.164917 | orchestrator | 2025-06-22 20:17:40 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:17:43.209805 | orchestrator | 2025-06-22 20:17:43 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:17:43.209908 | orchestrator | 2025-06-22 20:17:43 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:17:46.258737 | orchestrator | 2025-06-22 20:17:46 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:17:46.258859 | orchestrator | 2025-06-22 20:17:46 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:17:49.300241 | orchestrator | 2025-06-22 20:17:49 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:17:49.300315 | orchestrator | 2025-06-22 20:17:49 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:17:52.347043 | orchestrator | 2025-06-22 20:17:52 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:17:52.347155 | orchestrator | 2025-06-22 20:17:52 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:17:55.392270 | orchestrator | 2025-06-22 20:17:55 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:17:55.392392 | orchestrator | 2025-06-22 20:17:55 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:17:58.437266 | orchestrator | 2025-06-22 20:17:58 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:17:58.437380 | orchestrator | 2025-06-22 20:17:58 | INFO  | Wait 1 second(s) until the next check 2025-06-22 
20:18:01.487595 | orchestrator | 2025-06-22 20:18:01 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:18:01.487704 | orchestrator | 2025-06-22 20:18:01 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:18:04.534733 | orchestrator | 2025-06-22 20:18:04 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:18:04.534845 | orchestrator | 2025-06-22 20:18:04 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:18:07.575703 | orchestrator | 2025-06-22 20:18:07 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:18:07.575809 | orchestrator | 2025-06-22 20:18:07 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:18:10.617507 | orchestrator | 2025-06-22 20:18:10 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:18:10.617665 | orchestrator | 2025-06-22 20:18:10 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:18:13.665687 | orchestrator | 2025-06-22 20:18:13 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:18:13.665792 | orchestrator | 2025-06-22 20:18:13 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:18:16.712658 | orchestrator | 2025-06-22 20:18:16 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:18:16.712772 | orchestrator | 2025-06-22 20:18:16 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:18:19.747229 | orchestrator | 2025-06-22 20:18:19 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:18:19.747316 | orchestrator | 2025-06-22 20:18:19 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:18:22.782676 | orchestrator | 2025-06-22 20:18:22 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:18:22.782783 | orchestrator | 2025-06-22 20:18:22 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:18:25.815038 | orchestrator | 2025-06-22 20:18:25 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:18:25.815141 | orchestrator | 2025-06-22 20:18:25 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:18:28.847935 | orchestrator | 2025-06-22 20:18:28 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:18:28.848019 | orchestrator | 2025-06-22 20:18:28 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:18:31.899787 | orchestrator | 2025-06-22 20:18:31 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:18:31.899885 | orchestrator | 2025-06-22 20:18:31 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:18:34.941751 | orchestrator | 2025-06-22 20:18:34 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:18:34.941849 | orchestrator | 2025-06-22 20:18:34 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:18:37.989600 | orchestrator | 2025-06-22 20:18:37 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:18:37.989864 | orchestrator | 2025-06-22 20:18:37 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:18:41.036567 | orchestrator | 2025-06-22 20:18:41 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:18:41.036662 | orchestrator | 2025-06-22 20:18:41 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:18:44.079803 | orchestrator | 2025-06-22 20:18:44 | INFO  | Task 
c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:18:44.079882 | orchestrator | 2025-06-22 20:18:44 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:18:47.115360 | orchestrator | 2025-06-22 20:18:47 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:18:47.115626 | orchestrator | 2025-06-22 20:18:47 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:18:50.160767 | orchestrator | 2025-06-22 20:18:50 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:18:50.160862 | orchestrator | 2025-06-22 20:18:50 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:18:53.203547 | orchestrator | 2025-06-22 20:18:53 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:18:53.203663 | orchestrator | 2025-06-22 20:18:53 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:18:56.244480 | orchestrator | 2025-06-22 20:18:56 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:18:56.244567 | orchestrator | 2025-06-22 20:18:56 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:18:59.288575 | orchestrator | 2025-06-22 20:18:59 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state STARTED 2025-06-22 20:18:59.288666 | orchestrator | 2025-06-22 20:18:59 | INFO  | Wait 1 second(s) until the next check 2025-06-22 20:19:02.329332 | orchestrator | 2025-06-22 20:19:02 | INFO  | Task c5b54cd0-90eb-4700-984a-e1490d173ce1 is in state SUCCESS 2025-06-22 20:19:02.331651 | orchestrator | 2025-06-22 20:19:02.331777 | orchestrator | None 2025-06-22 20:19:02.331884 | orchestrator | 2025-06-22 20:19:02.331912 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-22 20:19:02.332001 | orchestrator | 2025-06-22 20:19:02.332023 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2025-06-22 20:19:02.332116 | orchestrator | Sunday 22 June 2025 20:10:24 +0000 (0:00:00.694) 0:00:00.694 *********** 2025-06-22 20:19:02.332131 | orchestrator | changed: [testbed-manager] 2025-06-22 20:19:02.332144 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:19:02.332155 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:19:02.332166 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:19:02.332177 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:19:02.332188 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:19:02.332199 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:19:02.332211 | orchestrator | 2025-06-22 20:19:02.332222 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-22 20:19:02.332233 | orchestrator | Sunday 22 June 2025 20:10:26 +0000 (0:00:01.331) 0:00:02.026 *********** 2025-06-22 20:19:02.332244 | orchestrator | changed: [testbed-manager] 2025-06-22 20:19:02.332255 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:19:02.332266 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:19:02.332277 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:19:02.332291 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:19:02.332311 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:19:02.332329 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:19:02.332347 | orchestrator | 2025-06-22 20:19:02.332366 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-22 
20:19:02.332437 | orchestrator | Sunday 22 June 2025 20:10:27 +0000 (0:00:01.253) 0:00:03.279 *********** 2025-06-22 20:19:02.332481 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2025-06-22 20:19:02.332501 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2025-06-22 20:19:02.332589 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2025-06-22 20:19:02.332604 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2025-06-22 20:19:02.332615 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2025-06-22 20:19:02.332626 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2025-06-22 20:19:02.332638 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2025-06-22 20:19:02.332677 | orchestrator | 2025-06-22 20:19:02.332689 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2025-06-22 20:19:02.332700 | orchestrator | 2025-06-22 20:19:02.332711 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-06-22 20:19:02.332723 | orchestrator | Sunday 22 June 2025 20:10:29 +0000 (0:00:01.942) 0:00:05.221 *********** 2025-06-22 20:19:02.332774 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:19:02.332787 | orchestrator | 2025-06-22 20:19:02.332798 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2025-06-22 20:19:02.332810 | orchestrator | Sunday 22 June 2025 20:10:32 +0000 (0:00:02.689) 0:00:07.910 *********** 2025-06-22 20:19:02.332821 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2025-06-22 20:19:02.332832 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2025-06-22 20:19:02.332843 | orchestrator | 2025-06-22 20:19:02.332854 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2025-06-22 20:19:02.332866 | orchestrator | Sunday 22 June 2025 20:10:36 +0000 (0:00:04.636) 0:00:12.547 *********** 2025-06-22 20:19:02.332877 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-22 20:19:02.332887 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-06-22 20:19:02.332898 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:19:02.332909 | orchestrator | 2025-06-22 20:19:02.332920 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-06-22 20:19:02.332931 | orchestrator | Sunday 22 June 2025 20:10:41 +0000 (0:00:04.497) 0:00:17.045 *********** 2025-06-22 20:19:02.332949 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:19:02.332966 | orchestrator | 2025-06-22 20:19:02.332985 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2025-06-22 20:19:02.333003 | orchestrator | Sunday 22 June 2025 20:10:42 +0000 (0:00:00.817) 0:00:17.862 *********** 2025-06-22 20:19:02.333022 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:19:02.333041 | orchestrator | 2025-06-22 20:19:02.333060 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2025-06-22 20:19:02.333078 | orchestrator | Sunday 22 June 2025 20:10:43 +0000 (0:00:01.508) 0:00:19.371 *********** 2025-06-22 20:19:02.333091 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:19:02.333102 | orchestrator | 2025-06-22 20:19:02.333113 | orchestrator | TASK [nova : include_tasks] 
****************************************************
2025-06-22 20:19:02.333124 | orchestrator | Sunday 22 June 2025 20:10:47 +0000 (0:00:03.804) 0:00:23.176 ***********
2025-06-22 20:19:02.333135 | orchestrator | skipping: [testbed-node-0]
2025-06-22 20:19:02.333145 | orchestrator | skipping: [testbed-node-1]
2025-06-22 20:19:02.333156 | orchestrator | skipping: [testbed-node-2]
2025-06-22 20:19:02.333167 | orchestrator |
2025-06-22 20:19:02.333177 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2025-06-22 20:19:02.333188 | orchestrator | Sunday 22 June 2025 20:10:48 +0000 (0:00:00.735) 0:00:23.911 ***********
2025-06-22 20:19:02.333199 | orchestrator | ok: [testbed-node-0]
2025-06-22 20:19:02.333210 | orchestrator |
2025-06-22 20:19:02.333221 | orchestrator | TASK [nova : Create cell0 mappings] ********************************************
2025-06-22 20:19:02.333232 | orchestrator | Sunday 22 June 2025 20:11:19 +0000 (0:00:31.714) 0:00:55.626 ***********
2025-06-22 20:19:02.333242 | orchestrator | changed: [testbed-node-0]
2025-06-22 20:19:02.333253 | orchestrator |
2025-06-22 20:19:02.333264 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2025-06-22 20:19:02.333276 | orchestrator | Sunday 22 June 2025 20:11:34 +0000 (0:00:11.328) 0:01:10.504 ***********
2025-06-22 20:19:02.333287 | orchestrator | ok: [testbed-node-0]
2025-06-22 20:19:02.333297 | orchestrator |
2025-06-22 20:19:02.333308 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2025-06-22 20:19:02.333320 | orchestrator | Sunday 22 June 2025 20:11:45 +0000 (0:00:00.831) 0:01:21.832 ***********
2025-06-22 20:19:02.333355 | orchestrator | ok: [testbed-node-0]
2025-06-22 20:19:02.333367 | orchestrator |
2025-06-22 20:19:02.333388 | orchestrator | TASK [nova : Update cell0 mappings] ********************************************
2025-06-22 20:19:02.333400 | orchestrator | Sunday 22 June 2025 20:11:46 +0000 (0:00:00.458) 0:01:22.664 ***********
2025-06-22 20:19:02.333410 | orchestrator | skipping: [testbed-node-0]
2025-06-22 20:19:02.333421 | orchestrator |
2025-06-22 20:19:02.333432 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-06-22 20:19:02.333443 | orchestrator | Sunday 22 June 2025 20:11:47 +0000 (0:00:00.482) 0:01:23.122 ***********
2025-06-22 20:19:02.333479 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-22 20:19:02.333490 | orchestrator |
2025-06-22 20:19:02.333501 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2025-06-22 20:19:02.333513 | orchestrator | Sunday 22 June 2025 20:11:47 +0000 (0:00:00.482) 0:01:23.605 ***********
2025-06-22 20:19:02.333523 | orchestrator | ok: [testbed-node-0]
2025-06-22 20:19:02.333534 | orchestrator |
2025-06-22 20:19:02.333545 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2025-06-22 20:19:02.333556 | orchestrator | Sunday 22 June 2025 20:12:05 +0000 (0:00:17.850) 0:01:41.456 ***********
2025-06-22 20:19:02.333567 | orchestrator | skipping: [testbed-node-0]
2025-06-22 20:19:02.333578 | orchestrator | skipping: [testbed-node-1]
2025-06-22 20:19:02.333588 | orchestrator | skipping: [testbed-node-2]
2025-06-22 20:19:02.333599 | orchestrator |
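The bootstrap play above maps cell0 and lists the existing cells by running one-shot Nova API bootstrap containers; the play that follows creates the cell database and the main cell itself. A hedged sketch of inspecting that cell layout by hand from a controller node is shown below; the container name nova_api comes from the log, but the precise nova-manage invocations performed inside the bootstrap containers may differ.

import subprocess

def nova_manage(*args: str) -> str:
    """Run nova-manage inside the nova_api container and return its output (sketch)."""
    cmd = ["docker", "exec", "nova_api", "nova-manage", *args]
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

# cell0 stores instances that were never scheduled; it is mapped once per deployment.
print(nova_manage("cell_v2", "map_cell0"))

# Rough equivalent of the "Get a list of existing cells" task: shows cell0 and the
# default cell together with their database and transport URLs.
print(nova_manage("cell_v2", "list_cells", "--verbose"))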
2025-06-22 20:19:02.333613 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2025-06-22 20:19:02.333634 | orchestrator |
2025-06-22 20:19:02.333655 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2025-06-22 20:19:02.333718 | orchestrator | Sunday 22 June 2025 20:12:06 +0000 (0:00:00.420) 0:01:41.877 ***********
2025-06-22 20:19:02.333733 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2025-06-22 20:19:02.333744 | orchestrator |
2025-06-22 20:19:02.333755 | orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
2025-06-22 20:19:02.333766 | orchestrator | Sunday 22 June 2025 20:12:06 +0000 (0:00:00.842) 0:01:42.719 ***********
2025-06-22 20:19:02.333777 | orchestrator | skipping: [testbed-node-1]
2025-06-22 20:19:02.333787 | orchestrator | skipping: [testbed-node-2]
2025-06-22 20:19:02.333798 | orchestrator | changed: [testbed-node-0]
2025-06-22 20:19:02.333809 | orchestrator |
2025-06-22 20:19:02.333820 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2025-06-22 20:19:02.333831 | orchestrator | Sunday 22 June 2025 20:12:08 +0000 (0:00:02.054) 0:01:44.774 ***********
2025-06-22 20:19:02.333842 | orchestrator | skipping: [testbed-node-1]
2025-06-22 20:19:02.333853 | orchestrator | skipping: [testbed-node-2]
2025-06-22 20:19:02.333864 | orchestrator | changed: [testbed-node-0]
2025-06-22 20:19:02.333875 | orchestrator |
2025-06-22 20:19:02.333886 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2025-06-22 20:19:02.333897 | orchestrator | Sunday 22 June 2025 20:12:11 +0000 (0:00:02.314) 0:01:47.088 ***********
2025-06-22 20:19:02.333908 | orchestrator | skipping: [testbed-node-0]
2025-06-22 20:19:02.333918 | orchestrator | skipping: [testbed-node-1]
2025-06-22 20:19:02.333929 | orchestrator | skipping: [testbed-node-2]
2025-06-22 20:19:02.333940 | orchestrator |
2025-06-22 20:19:02.333951 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2025-06-22 20:19:02.333962 | orchestrator | Sunday 22 June 2025 20:12:11 +0000 (0:00:00.308) 0:01:47.397 ***********
2025-06-22 20:19:02.333973 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-06-22 20:19:02.333984 | orchestrator | skipping: [testbed-node-1]
2025-06-22 20:19:02.333995 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-06-22 20:19:02.334006 | orchestrator | skipping: [testbed-node-2]
2025-06-22 20:19:02.334210 | orchestrator | ok: [testbed-node-0] => (item=None)
2025-06-22 20:19:02.334229 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2025-06-22 20:19:02.334240 | orchestrator |
2025-06-22 20:19:02.334251 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2025-06-22 20:19:02.334273 | orchestrator | Sunday 22 June 2025 20:12:21 +0000 (0:00:09.811) 0:01:57.209 ***********
2025-06-22 20:19:02.334284 | orchestrator | skipping: [testbed-node-0]
2025-06-22 20:19:02.334295 | orchestrator | skipping: [testbed-node-1]
2025-06-22 20:19:02.334305 | orchestrator | skipping: [testbed-node-2]
2025-06-22 20:19:02.334316 | orchestrator |
2025-06-22 20:19:02.334327 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2025-06-22 20:19:02.334338 | orchestrator | Sunday 22 June 2025 20:12:21 +0000 (0:00:00.330) 0:01:57.539 ***********
2025-06-22 20:19:02.334349 |
orchestrator | skipping: [testbed-node-0] => (item=None)  2025-06-22 20:19:02.334360 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:19:02.334371 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-06-22 20:19:02.334382 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:02.334392 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-06-22 20:19:02.334403 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:19:02.334414 | orchestrator | 2025-06-22 20:19:02.334425 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-06-22 20:19:02.334436 | orchestrator | Sunday 22 June 2025 20:12:22 +0000 (0:00:00.691) 0:01:58.230 *********** 2025-06-22 20:19:02.334446 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:19:02.334513 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:02.334525 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:19:02.334536 | orchestrator | 2025-06-22 20:19:02.334547 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2025-06-22 20:19:02.334559 | orchestrator | Sunday 22 June 2025 20:12:22 +0000 (0:00:00.561) 0:01:58.792 *********** 2025-06-22 20:19:02.334570 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:02.334581 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:19:02.334592 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:19:02.334603 | orchestrator | 2025-06-22 20:19:02.334614 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 2025-06-22 20:19:02.334626 | orchestrator | Sunday 22 June 2025 20:12:23 +0000 (0:00:00.889) 0:01:59.681 *********** 2025-06-22 20:19:02.334636 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:02.334666 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:19:02.334678 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:19:02.334689 | orchestrator | 2025-06-22 20:19:02.334700 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2025-06-22 20:19:02.334711 | orchestrator | Sunday 22 June 2025 20:12:25 +0000 (0:00:01.989) 0:02:01.671 *********** 2025-06-22 20:19:02.334722 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:02.334733 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:19:02.334744 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:19:02.334755 | orchestrator | 2025-06-22 20:19:02.334766 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-06-22 20:19:02.334777 | orchestrator | Sunday 22 June 2025 20:12:46 +0000 (0:00:20.739) 0:02:22.411 *********** 2025-06-22 20:19:02.334788 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:02.334799 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:19:02.334809 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:19:02.334820 | orchestrator | 2025-06-22 20:19:02.334831 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-06-22 20:19:02.334842 | orchestrator | Sunday 22 June 2025 20:12:58 +0000 (0:00:12.055) 0:02:34.466 *********** 2025-06-22 20:19:02.334853 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:19:02.334864 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:02.334875 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:19:02.334886 | orchestrator | 2025-06-22 20:19:02.334897 | orchestrator | TASK [nova-cell : Create cell] 
************************************************* 2025-06-22 20:19:02.334908 | orchestrator | Sunday 22 June 2025 20:12:59 +0000 (0:00:00.955) 0:02:35.422 *********** 2025-06-22 20:19:02.334919 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:02.334930 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:19:02.334948 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:19:02.334959 | orchestrator | 2025-06-22 20:19:02.334970 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2025-06-22 20:19:02.334982 | orchestrator | Sunday 22 June 2025 20:13:09 +0000 (0:00:10.081) 0:02:45.503 *********** 2025-06-22 20:19:02.334993 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:19:02.335003 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:02.335014 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:19:02.335025 | orchestrator | 2025-06-22 20:19:02.335035 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-06-22 20:19:02.335044 | orchestrator | Sunday 22 June 2025 20:13:11 +0000 (0:00:01.486) 0:02:46.989 *********** 2025-06-22 20:19:02.335054 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:19:02.335064 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:02.335073 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:19:02.335083 | orchestrator | 2025-06-22 20:19:02.335093 | orchestrator | PLAY [Apply role nova] ********************************************************* 2025-06-22 20:19:02.335103 | orchestrator | 2025-06-22 20:19:02.335113 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-06-22 20:19:02.335123 | orchestrator | Sunday 22 June 2025 20:13:11 +0000 (0:00:00.353) 0:02:47.342 *********** 2025-06-22 20:19:02.335132 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:19:02.335143 | orchestrator | 2025-06-22 20:19:02.335153 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2025-06-22 20:19:02.335163 | orchestrator | Sunday 22 June 2025 20:13:12 +0000 (0:00:00.573) 0:02:47.916 *********** 2025-06-22 20:19:02.335173 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2025-06-22 20:19:02.335183 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2025-06-22 20:19:02.335192 | orchestrator | 2025-06-22 20:19:02.335202 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2025-06-22 20:19:02.335212 | orchestrator | Sunday 22 June 2025 20:13:15 +0000 (0:00:03.194) 0:02:51.111 *********** 2025-06-22 20:19:02.335222 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2025-06-22 20:19:02.335234 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2025-06-22 20:19:02.335243 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2025-06-22 20:19:02.335253 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2025-06-22 20:19:02.335263 | orchestrator | 2025-06-22 20:19:02.335273 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2025-06-22 20:19:02.335283 
| orchestrator | Sunday 22 June 2025 20:13:22 +0000 (0:00:07.163) 0:02:58.274 *********** 2025-06-22 20:19:02.335292 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-06-22 20:19:02.335302 | orchestrator | 2025-06-22 20:19:02.335312 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2025-06-22 20:19:02.335322 | orchestrator | Sunday 22 June 2025 20:13:25 +0000 (0:00:03.038) 0:03:01.312 *********** 2025-06-22 20:19:02.335331 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-06-22 20:19:02.335341 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2025-06-22 20:19:02.335350 | orchestrator | 2025-06-22 20:19:02.335360 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2025-06-22 20:19:02.335370 | orchestrator | Sunday 22 June 2025 20:13:29 +0000 (0:00:03.667) 0:03:04.980 *********** 2025-06-22 20:19:02.335379 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-06-22 20:19:02.335389 | orchestrator | 2025-06-22 20:19:02.335399 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2025-06-22 20:19:02.335409 | orchestrator | Sunday 22 June 2025 20:13:32 +0000 (0:00:03.370) 0:03:08.351 *********** 2025-06-22 20:19:02.335424 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2025-06-22 20:19:02.335434 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2025-06-22 20:19:02.335444 | orchestrator | 2025-06-22 20:19:02.335470 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-06-22 20:19:02.335488 | orchestrator | Sunday 22 June 2025 20:13:40 +0000 (0:00:07.545) 0:03:15.897 *********** 2025-06-22 20:19:02.335505 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-22 20:19:02.335521 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-22 20:19:02.335534 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-22 20:19:02.335561 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 20:19:02.335574 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 20:19:02.335584 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 
'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 20:19:02.335595 | orchestrator | 2025-06-22 20:19:02.335605 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2025-06-22 20:19:02.335615 | orchestrator | Sunday 22 June 2025 20:13:41 +0000 (0:00:01.312) 0:03:17.209 *********** 2025-06-22 20:19:02.335625 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:19:02.335635 | orchestrator | 2025-06-22 20:19:02.335645 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2025-06-22 20:19:02.335655 | orchestrator | Sunday 22 June 2025 20:13:41 +0000 (0:00:00.121) 0:03:17.330 *********** 2025-06-22 20:19:02.335664 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:19:02.335674 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:02.335684 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:19:02.335694 | orchestrator | 2025-06-22 20:19:02.335704 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2025-06-22 20:19:02.335714 | orchestrator | Sunday 22 June 2025 20:13:41 +0000 (0:00:00.505) 0:03:17.836 *********** 2025-06-22 20:19:02.335724 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-06-22 20:19:02.335733 | orchestrator | 2025-06-22 20:19:02.335743 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2025-06-22 20:19:02.335753 | orchestrator | Sunday 22 June 2025 20:13:42 +0000 (0:00:00.689) 0:03:18.526 *********** 2025-06-22 20:19:02.335762 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:19:02.335772 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:02.335782 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:19:02.335792 | orchestrator | 2025-06-22 20:19:02.335802 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-06-22 20:19:02.335811 | orchestrator | Sunday 22 June 2025 20:13:42 +0000 (0:00:00.316) 0:03:18.842 *********** 2025-06-22 20:19:02.335821 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:19:02.335838 | orchestrator | 2025-06-22 20:19:02.335848 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-06-22 20:19:02.335858 | orchestrator | Sunday 22 June 2025 20:13:43 +0000 (0:00:00.687) 0:03:19.530 *********** 2025-06-22 20:19:02.335874 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': 
'30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-22 20:19:02.335887 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-22 20:19:02.335899 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-22 20:19:02.335910 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 20:19:02.335927 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 20:19:02.335946 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 20:19:02.335957 | orchestrator | 2025-06-22 20:19:02.335967 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-06-22 20:19:02.335977 | orchestrator | Sunday 22 June 2025 20:13:46 +0000 (0:00:02.405) 0:03:21.935 *********** 2025-06-22 20:19:02.335987 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-22 20:19:02.335999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 20:19:02.336009 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:19:02.336020 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-22 20:19:02.336042 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 20:19:02.336053 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:02.336064 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  
2025-06-22 20:19:02.336075 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 20:19:02.336085 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:19:02.336095 | orchestrator | 2025-06-22 20:19:02.336105 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-06-22 20:19:02.336115 | orchestrator | Sunday 22 June 2025 20:13:46 +0000 (0:00:00.557) 0:03:22.493 *********** 2025-06-22 20:19:02.336131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-22 20:19:02.336142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 20:19:02.336153 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:19:02.336170 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-22 20:19:02.336182 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 20:19:02.336192 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:02.336216 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-22 20:19:02.336234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 20:19:02.336244 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:19:02.336254 | orchestrator | 2025-06-22 20:19:02.336264 | orchestrator | TASK [nova : Copying over config.json files for services] 
********************** 2025-06-22 20:19:02.336274 | orchestrator | Sunday 22 June 2025 20:13:47 +0000 (0:00:00.963) 0:03:23.457 *********** 2025-06-22 20:19:02.336291 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-22 20:19:02.336308 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-22 20:19:02.336325 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-22 20:19:02.336342 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 20:19:02.336353 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 20:19:02.336364 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 20:19:02.336374 | orchestrator | 2025-06-22 20:19:02.336384 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2025-06-22 20:19:02.336394 | orchestrator | Sunday 22 June 2025 20:13:50 +0000 (0:00:02.444) 0:03:25.901 *********** 2025-06-22 20:19:02.336414 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 
'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-22 20:19:02.336432 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-22 20:19:02.336450 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-22 20:19:02.336509 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 
20:19:02.336526 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 20:19:02.336537 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 20:19:02.336548 | orchestrator | 2025-06-22 20:19:02.336558 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2025-06-22 20:19:02.336568 | orchestrator | Sunday 22 June 2025 20:13:55 +0000 (0:00:05.569) 0:03:31.471 *********** 2025-06-22 20:19:02.336585 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-22 20:19:02.336633 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 20:19:02.336644 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:19:02.336659 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-22 20:19:02.336678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 20:19:02.336688 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:02.336699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-06-22 20:19:02.336717 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-06-22 20:19:02.336728 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:19:02.336738 | orchestrator | 2025-06-22 20:19:02.336747 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2025-06-22 20:19:02.336758 | orchestrator | Sunday 22 June 2025 20:13:56 +0000 (0:00:00.609) 0:03:32.081 *********** 2025-06-22 20:19:02.336767 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:19:02.336777 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:19:02.336787 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:19:02.336796 | orchestrator | 2025-06-22 20:19:02.336806 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2025-06-22 20:19:02.336823 | orchestrator | Sunday 22 June 2025 20:13:58 +0000 (0:00:02.066) 0:03:34.147 *********** 2025-06-22 20:19:02.336832 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:19:02.336842 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:02.336852 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:19:02.336861 | orchestrator | 2025-06-22 20:19:02.336871 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2025-06-22 20:19:02.336881 | orchestrator | Sunday 22 June 2025 20:13:58 +0000 (0:00:00.344) 0:03:34.491 *********** 2025-06-22 20:19:02.336895 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-22 20:19:02.336907 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-22 20:19:02.336928 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250530', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-06-22 20:19:02.336945 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 20:19:02.336960 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 20:19:02.336971 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-06-22 20:19:02.336981 | orchestrator | 2025-06-22 20:19:02.336991 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-06-22 20:19:02.337001 | orchestrator | Sunday 22 June 2025 20:14:00 +0000 (0:00:01.768) 0:03:36.260 *********** 2025-06-22 20:19:02.337011 | orchestrator | 2025-06-22 20:19:02.337018 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-06-22 20:19:02.337027 | orchestrator | Sunday 22 June 2025 20:14:00 +0000 (0:00:00.133) 0:03:36.394 *********** 2025-06-22 20:19:02.337034 | orchestrator | 2025-06-22 20:19:02.337042 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-06-22 20:19:02.337051 | orchestrator | Sunday 22 June 2025 20:14:00 +0000 (0:00:00.128) 0:03:36.522 *********** 2025-06-22 20:19:02.337059 | orchestrator | 2025-06-22 20:19:02.337066 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2025-06-22 20:19:02.337074 | orchestrator | Sunday 22 June 2025 20:14:00 +0000 (0:00:00.274) 0:03:36.796 *********** 2025-06-22 20:19:02.337082 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:19:02.337090 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:19:02.337098 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:19:02.337106 | orchestrator | 2025-06-22 20:19:02.337114 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2025-06-22 20:19:02.337122 | orchestrator | Sunday 22 June 2025 20:14:26 +0000 (0:00:25.087) 0:04:01.884 *********** 2025-06-22 20:19:02.337130 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:19:02.337138 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:19:02.337146 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:19:02.337154 | orchestrator | 2025-06-22 20:19:02.337162 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2025-06-22 20:19:02.337170 | orchestrator | 2025-06-22 20:19:02.337177 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-06-22 20:19:02.337191 | orchestrator | Sunday 22 June 2025 20:14:35 +0000 (0:00:09.853) 0:04:11.737 *********** 2025-06-22 20:19:02.337199 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:19:02.337209 | orchestrator | 2025-06-22 20:19:02.337221 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-06-22 20:19:02.337229 | orchestrator | Sunday 22 June 2025 20:14:37 +0000 (0:00:01.147) 0:04:12.885 *********** 2025-06-22 20:19:02.337237 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:19:02.337245 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:19:02.337253 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:19:02.337261 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:19:02.337269 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:02.337276 | orchestrator | skipping: 
[testbed-node-2] 2025-06-22 20:19:02.337284 | orchestrator | 2025-06-22 20:19:02.337292 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2025-06-22 20:19:02.337300 | orchestrator | Sunday 22 June 2025 20:14:37 +0000 (0:00:00.741) 0:04:13.626 *********** 2025-06-22 20:19:02.337308 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:19:02.337316 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:02.337324 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:19:02.337332 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 20:19:02.337340 | orchestrator | 2025-06-22 20:19:02.337347 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-06-22 20:19:02.337355 | orchestrator | Sunday 22 June 2025 20:14:38 +0000 (0:00:01.132) 0:04:14.758 *********** 2025-06-22 20:19:02.337363 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2025-06-22 20:19:02.337371 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2025-06-22 20:19:02.337379 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2025-06-22 20:19:02.337387 | orchestrator | 2025-06-22 20:19:02.337395 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-06-22 20:19:02.337403 | orchestrator | Sunday 22 June 2025 20:14:39 +0000 (0:00:00.855) 0:04:15.613 *********** 2025-06-22 20:19:02.337411 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2025-06-22 20:19:02.337419 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2025-06-22 20:19:02.337427 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2025-06-22 20:19:02.337435 | orchestrator | 2025-06-22 20:19:02.337443 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-06-22 20:19:02.337466 | orchestrator | Sunday 22 June 2025 20:14:41 +0000 (0:00:01.308) 0:04:16.922 *********** 2025-06-22 20:19:02.337474 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2025-06-22 20:19:02.337482 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:19:02.337490 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2025-06-22 20:19:02.337498 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:19:02.337506 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2025-06-22 20:19:02.337514 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:19:02.337522 | orchestrator | 2025-06-22 20:19:02.337530 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2025-06-22 20:19:02.337543 | orchestrator | Sunday 22 June 2025 20:14:41 +0000 (0:00:00.692) 0:04:17.614 *********** 2025-06-22 20:19:02.337552 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-22 20:19:02.337559 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-22 20:19:02.337567 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:19:02.337575 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-06-22 20:19:02.337583 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-22 20:19:02.337600 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:02.337608 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  
2025-06-22 20:19:02.337616 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-06-22 20:19:02.337624 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:19:02.337632 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2025-06-22 20:19:02.337640 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2025-06-22 20:19:02.337648 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2025-06-22 20:19:02.337656 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-06-22 20:19:02.337664 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-06-22 20:19:02.337672 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-06-22 20:19:02.337679 | orchestrator | 2025-06-22 20:19:02.337687 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2025-06-22 20:19:02.337695 | orchestrator | Sunday 22 June 2025 20:14:43 +0000 (0:00:01.252) 0:04:18.867 *********** 2025-06-22 20:19:02.337703 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:19:02.337711 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:02.337719 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:19:02.337727 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:19:02.337735 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:19:02.337743 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:19:02.337751 | orchestrator | 2025-06-22 20:19:02.337759 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2025-06-22 20:19:02.337767 | orchestrator | Sunday 22 June 2025 20:14:44 +0000 (0:00:01.219) 0:04:20.087 *********** 2025-06-22 20:19:02.337775 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:19:02.337782 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:02.337790 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:19:02.337798 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:19:02.337806 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:19:02.337814 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:19:02.337822 | orchestrator | 2025-06-22 20:19:02.337830 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-06-22 20:19:02.337838 | orchestrator | Sunday 22 June 2025 20:14:45 +0000 (0:00:01.681) 0:04:21.768 *********** 2025-06-22 20:19:02.337852 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-22 20:19:02.337862 | orchestrator | 
changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-22 20:19:02.337880 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-22 20:19:02.337890 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-22 20:19:02.337900 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-22 20:19:02.337915 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-22 20:19:02.337924 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-22 20:19:02.337932 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-22 20:19:02.337953 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-22 20:19:02.337963 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:19:02.337972 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-22 20:19:02.338163 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-22 20:19:02.338181 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:19:02.338189 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-22 20:19:02.338210 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:19:02.338219 | orchestrator | 2025-06-22 20:19:02.338227 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-06-22 20:19:02.338236 | orchestrator | Sunday 22 June 2025 20:14:48 +0000 (0:00:02.645) 0:04:24.414 *********** 2025-06-22 20:19:02.338244 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:19:02.338253 | orchestrator | 2025-06-22 20:19:02.338261 | orchestrator | TASK 
[service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-06-22 20:19:02.338269 | orchestrator | Sunday 22 June 2025 20:14:49 +0000 (0:00:01.155) 0:04:25.569 *********** 2025-06-22 20:19:02.338277 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-22 20:19:02.338294 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-22 20:19:02.338303 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-22 20:19:02.338320 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-22 20:19:02.338329 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-22 20:19:02.338337 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-22 20:19:02.338346 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-22 20:19:02.338359 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-22 20:19:02.338367 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-22 20:19:02.338380 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:19:02.338392 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:19:02.338401 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:19:02.338409 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-22 20:19:02.338422 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-22 20:19:02.338431 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': 
{'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-22 20:19:02.338444 | orchestrator | 2025-06-22 20:19:02.338468 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-06-22 20:19:02.338477 | orchestrator | Sunday 22 June 2025 20:14:53 +0000 (0:00:03.560) 0:04:29.130 *********** 2025-06-22 20:19:02.338489 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-22 20:19:02.338499 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-22 20:19:02.338507 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-22 20:19:02.338515 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:19:02.338530 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-22 20:19:02.338544 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-22 20:19:02.338552 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-22 20:19:02.338560 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:19:02.338572 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-22 20:19:02.338581 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-22 20:19:02.338589 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-22 20:19:02.338598 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:19:02.338662 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-22 20:19:02.338671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-22 20:19:02.338680 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:19:02.338688 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-22 20:19:02.338700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-22 20:19:02.338708 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:02.338716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-22 20:19:02.338725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-22 20:19:02.338733 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:19:02.338741 | orchestrator | 2025-06-22 20:19:02.338749 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-06-22 20:19:02.338763 | orchestrator | Sunday 22 June 2025 20:14:55 +0000 (0:00:01.781) 0:04:30.911 *********** 2025-06-22 20:19:02.338778 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-22 20:19:02.338789 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-22 
20:19:02.338805 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-22 20:19:02.338843 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:19:02.338853 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-22 20:19:02.338862 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-22 20:19:02.338885 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-22 20:19:02.338895 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:19:02.338905 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-22 20:19:02.338915 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-22 20:19:02.338928 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-22 20:19:02.338937 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:19:02.338947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-22 20:19:02.338962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 
5672'], 'timeout': '30'}}})  2025-06-22 20:19:02.338971 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:02.338987 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-22 20:19:02.338996 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-22 20:19:02.339006 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:19:02.339016 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-22 20:19:02.339028 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-22 20:19:02.339038 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:19:02.339047 | orchestrator | 2025-06-22 20:19:02.339056 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-06-22 20:19:02.339065 | orchestrator | Sunday 22 June 2025 20:14:56 +0000 (0:00:01.912) 0:04:32.823 *********** 2025-06-22 20:19:02.339074 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:19:02.339084 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:02.339093 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:19:02.339102 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-06-22 20:19:02.339111 | orchestrator | 2025-06-22 20:19:02.339120 | orchestrator | TASK 
[nova-cell : Check nova keyring file] ************************************* 2025-06-22 20:19:02.339133 | orchestrator | Sunday 22 June 2025 20:14:57 +0000 (0:00:00.825) 0:04:33.648 *********** 2025-06-22 20:19:02.339141 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-06-22 20:19:02.339149 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-06-22 20:19:02.339157 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-06-22 20:19:02.339165 | orchestrator | 2025-06-22 20:19:02.339173 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2025-06-22 20:19:02.339181 | orchestrator | Sunday 22 June 2025 20:14:58 +0000 (0:00:01.063) 0:04:34.712 *********** 2025-06-22 20:19:02.339189 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-06-22 20:19:02.339197 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-06-22 20:19:02.339205 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-06-22 20:19:02.339213 | orchestrator | 2025-06-22 20:19:02.339220 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2025-06-22 20:19:02.339229 | orchestrator | Sunday 22 June 2025 20:14:59 +0000 (0:00:00.913) 0:04:35.625 *********** 2025-06-22 20:19:02.339237 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:19:02.339245 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:19:02.339253 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:19:02.339260 | orchestrator | 2025-06-22 20:19:02.339269 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2025-06-22 20:19:02.339277 | orchestrator | Sunday 22 June 2025 20:15:00 +0000 (0:00:00.494) 0:04:36.119 *********** 2025-06-22 20:19:02.339285 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:19:02.339293 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:19:02.339301 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:19:02.339309 | orchestrator | 2025-06-22 20:19:02.339317 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2025-06-22 20:19:02.339325 | orchestrator | Sunday 22 June 2025 20:15:00 +0000 (0:00:00.487) 0:04:36.607 *********** 2025-06-22 20:19:02.339333 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-06-22 20:19:02.339345 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-06-22 20:19:02.339353 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-06-22 20:19:02.339361 | orchestrator | 2025-06-22 20:19:02.339369 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2025-06-22 20:19:02.339377 | orchestrator | Sunday 22 June 2025 20:15:02 +0000 (0:00:01.420) 0:04:38.028 *********** 2025-06-22 20:19:02.339386 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-06-22 20:19:02.339393 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-06-22 20:19:02.339401 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-06-22 20:19:02.339409 | orchestrator | 2025-06-22 20:19:02.339417 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2025-06-22 20:19:02.339425 | orchestrator | Sunday 22 June 2025 20:15:03 +0000 (0:00:01.340) 0:04:39.369 *********** 2025-06-22 20:19:02.339433 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-06-22 20:19:02.339441 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-06-22 20:19:02.339449 | orchestrator | 
changed: [testbed-node-5] => (item=nova-compute) 2025-06-22 20:19:02.339505 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2025-06-22 20:19:02.339514 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2025-06-22 20:19:02.339522 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2025-06-22 20:19:02.339530 | orchestrator | 2025-06-22 20:19:02.339538 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2025-06-22 20:19:02.339546 | orchestrator | Sunday 22 June 2025 20:15:07 +0000 (0:00:03.909) 0:04:43.279 *********** 2025-06-22 20:19:02.339554 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:19:02.339562 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:19:02.339570 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:19:02.339578 | orchestrator | 2025-06-22 20:19:02.339586 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2025-06-22 20:19:02.339600 | orchestrator | Sunday 22 June 2025 20:15:07 +0000 (0:00:00.277) 0:04:43.556 *********** 2025-06-22 20:19:02.339608 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:19:02.339616 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:19:02.339624 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:19:02.339652 | orchestrator | 2025-06-22 20:19:02.339660 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2025-06-22 20:19:02.339668 | orchestrator | Sunday 22 June 2025 20:15:08 +0000 (0:00:00.545) 0:04:44.101 *********** 2025-06-22 20:19:02.339676 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:19:02.339684 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:19:02.339692 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:19:02.339700 | orchestrator | 2025-06-22 20:19:02.339708 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2025-06-22 20:19:02.339720 | orchestrator | Sunday 22 June 2025 20:15:09 +0000 (0:00:01.178) 0:04:45.280 *********** 2025-06-22 20:19:02.339730 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-06-22 20:19:02.339738 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-06-22 20:19:02.339746 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-06-22 20:19:02.339755 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-06-22 20:19:02.339763 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-06-22 20:19:02.339771 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-06-22 20:19:02.339779 | orchestrator | 2025-06-22 20:19:02.339787 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2025-06-22 20:19:02.339795 | orchestrator | Sunday 22 June 2025 20:15:12 +0000 (0:00:02.949) 0:04:48.230 *********** 2025-06-22 20:19:02.339803 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-06-22 
20:19:02.339811 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-06-22 20:19:02.339819 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-06-22 20:19:02.339827 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-06-22 20:19:02.339834 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:19:02.339841 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-06-22 20:19:02.339848 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:19:02.339855 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-06-22 20:19:02.339861 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:19:02.339868 | orchestrator | 2025-06-22 20:19:02.339875 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2025-06-22 20:19:02.339882 | orchestrator | Sunday 22 June 2025 20:15:15 +0000 (0:00:03.052) 0:04:51.282 *********** 2025-06-22 20:19:02.339888 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:19:02.339895 | orchestrator | 2025-06-22 20:19:02.339902 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2025-06-22 20:19:02.339909 | orchestrator | Sunday 22 June 2025 20:15:15 +0000 (0:00:00.172) 0:04:51.455 *********** 2025-06-22 20:19:02.339916 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:19:02.339922 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:19:02.339929 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:19:02.339936 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:19:02.339943 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:02.339949 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:19:02.339956 | orchestrator | 2025-06-22 20:19:02.339963 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2025-06-22 20:19:02.339979 | orchestrator | Sunday 22 June 2025 20:15:16 +0000 (0:00:00.782) 0:04:52.238 *********** 2025-06-22 20:19:02.339986 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-06-22 20:19:02.339993 | orchestrator | 2025-06-22 20:19:02.340000 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2025-06-22 20:19:02.340007 | orchestrator | Sunday 22 June 2025 20:15:17 +0000 (0:00:00.746) 0:04:52.984 *********** 2025-06-22 20:19:02.340013 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:19:02.340020 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:19:02.340027 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:19:02.340033 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:19:02.340040 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:02.340047 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:19:02.340053 | orchestrator | 2025-06-22 20:19:02.340060 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2025-06-22 20:19:02.340066 | orchestrator | Sunday 22 June 2025 20:15:17 +0000 (0:00:00.557) 0:04:53.542 *********** 2025-06-22 20:19:02.340074 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-22 20:19:02.340084 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-22 20:19:02.340092 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-22 20:19:02.340099 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-22 20:19:02.340116 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-22 20:19:02.340123 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-22 20:19:02.340130 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-22 20:19:02.340141 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-22 20:19:02.340148 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-22 20:19:02.340155 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:19:02.340167 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:19:02.340178 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:19:02.340185 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-22 20:19:02.340196 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-22 20:19:02.340203 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-22 20:19:02.340210 | orchestrator | 2025-06-22 20:19:02.340217 | orchestrator | TASK 
[nova-cell : Copying over nova.conf] ************************************** 2025-06-22 20:19:02.340224 | orchestrator | Sunday 22 June 2025 20:15:21 +0000 (0:00:03.943) 0:04:57.486 *********** 2025-06-22 20:19:02.340235 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-22 20:19:02.340246 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-22 20:19:02.340253 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-22 20:19:02.340266 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-22 20:19:02.340273 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 
'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-22 20:19:02.340280 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-22 20:19:02.340296 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-22 20:19:02.340303 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-22 20:19:02.340310 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 
'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-22 20:19:02.340320 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-22 20:19:02.340328 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-22 20:19:02.340340 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-22 20:19:02.340351 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:19:02.340359 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:19:02.340366 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:19:02.340373 | orchestrator | 2025-06-22 20:19:02.340380 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2025-06-22 20:19:02.340387 | orchestrator | Sunday 22 June 2025 20:15:27 +0000 (0:00:06.086) 0:05:03.572 *********** 2025-06-22 20:19:02.340393 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:19:02.340400 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:19:02.340407 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:19:02.340414 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:19:02.340420 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:02.340427 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:19:02.340434 | orchestrator | 2025-06-22 20:19:02.340440 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2025-06-22 20:19:02.340450 | orchestrator | Sunday 22 June 2025 20:15:29 +0000 (0:00:01.523) 0:05:05.096 *********** 2025-06-22 20:19:02.340470 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-06-22 20:19:02.340477 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-06-22 20:19:02.340484 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-06-22 20:19:02.340491 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-06-22 20:19:02.340503 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-06-22 20:19:02.340509 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-06-22 20:19:02.340516 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:19:02.340523 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-06-22 20:19:02.340530 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-06-22 20:19:02.340537 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:19:02.340543 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-06-22 20:19:02.340550 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:02.340557 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-06-22 20:19:02.340563 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-06-22 20:19:02.340570 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-06-22 20:19:02.340577 | orchestrator | 2025-06-22 20:19:02.340583 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2025-06-22 20:19:02.340590 | 
orchestrator | Sunday 22 June 2025 20:15:33 +0000 (0:00:03.793) 0:05:08.889 *********** 2025-06-22 20:19:02.340597 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:19:02.340603 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:19:02.340610 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:19:02.340617 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:19:02.340623 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:02.340630 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:19:02.340637 | orchestrator | 2025-06-22 20:19:02.340643 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2025-06-22 20:19:02.340650 | orchestrator | Sunday 22 June 2025 20:15:33 +0000 (0:00:00.681) 0:05:09.570 *********** 2025-06-22 20:19:02.340657 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-06-22 20:19:02.340664 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-06-22 20:19:02.340717 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-06-22 20:19:02.340726 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-06-22 20:19:02.340732 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-06-22 20:19:02.340739 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-06-22 20:19:02.340746 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-06-22 20:19:02.340752 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-06-22 20:19:02.340759 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-06-22 20:19:02.340766 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-06-22 20:19:02.340772 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:19:02.340779 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-06-22 20:19:02.340786 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:02.340792 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-06-22 20:19:02.340805 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:19:02.340811 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-06-22 20:19:02.340818 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-06-22 20:19:02.340825 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-06-22 20:19:02.340832 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-06-22 20:19:02.340838 | orchestrator | changed: [testbed-node-4] => (item={'src': 
'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-06-22 20:19:02.340849 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-06-22 20:19:02.340855 | orchestrator | 2025-06-22 20:19:02.340862 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2025-06-22 20:19:02.340869 | orchestrator | Sunday 22 June 2025 20:15:38 +0000 (0:00:04.680) 0:05:14.250 *********** 2025-06-22 20:19:02.340876 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-06-22 20:19:02.340883 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-06-22 20:19:02.340889 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-06-22 20:19:02.340896 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-06-22 20:19:02.340903 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-06-22 20:19:02.340910 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-06-22 20:19:02.340916 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-06-22 20:19:02.340923 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-06-22 20:19:02.340929 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-06-22 20:19:02.340936 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-06-22 20:19:02.340943 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-06-22 20:19:02.340949 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-06-22 20:19:02.340956 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-06-22 20:19:02.340963 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:19:02.340970 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-06-22 20:19:02.340976 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:02.340983 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-06-22 20:19:02.340989 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-06-22 20:19:02.340996 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:19:02.341003 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-06-22 20:19:02.341009 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-06-22 20:19:02.341016 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-06-22 20:19:02.341023 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-06-22 20:19:02.341033 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-06-22 20:19:02.341040 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-06-22 20:19:02.341051 | orchestrator | changed: [testbed-node-3] => (item={'src': 
'ssh_config.j2', 'dest': 'ssh_config'}) 2025-06-22 20:19:02.341057 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-06-22 20:19:02.341064 | orchestrator | 2025-06-22 20:19:02.341071 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2025-06-22 20:19:02.341078 | orchestrator | Sunday 22 June 2025 20:15:45 +0000 (0:00:07.054) 0:05:21.305 *********** 2025-06-22 20:19:02.341084 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:19:02.341091 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:19:02.341098 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:19:02.341104 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:19:02.341111 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:02.341117 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:19:02.341124 | orchestrator | 2025-06-22 20:19:02.341131 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2025-06-22 20:19:02.341138 | orchestrator | Sunday 22 June 2025 20:15:46 +0000 (0:00:00.566) 0:05:21.872 *********** 2025-06-22 20:19:02.341144 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:19:02.341151 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:19:02.341158 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:19:02.341164 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:19:02.341171 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:02.341178 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:19:02.341184 | orchestrator | 2025-06-22 20:19:02.341191 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2025-06-22 20:19:02.341198 | orchestrator | Sunday 22 June 2025 20:15:46 +0000 (0:00:00.777) 0:05:22.650 *********** 2025-06-22 20:19:02.341204 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:19:02.341211 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:02.341218 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:19:02.341224 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:19:02.341231 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:19:02.341237 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:19:02.341244 | orchestrator | 2025-06-22 20:19:02.341251 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2025-06-22 20:19:02.341257 | orchestrator | Sunday 22 June 2025 20:15:48 +0000 (0:00:02.004) 0:05:24.654 *********** 2025-06-22 20:19:02.341268 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-22 20:19:02.341275 | orchestrator | 
skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-22 20:19:02.341283 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-22 20:19:02.341295 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:19:02.341306 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-22 20:19:02.341314 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-06-22 20:19:02.341324 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-22 20:19:02.341331 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-06-22 20:19:02.341339 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-22 20:19:02.341355 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-06-22 20:19:02.341363 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:19:02.341369 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:19:02.341377 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 
'timeout': '30'}}})  2025-06-22 20:19:02.341384 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-22 20:19:02.341391 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:19:02.341401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-22 20:19:02.341409 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-06-22 20:19:02.341420 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-22 20:19:02.341430 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-06-22 20:19:02.341437 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:19:02.341444 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:02.341466 | orchestrator | 2025-06-22 20:19:02.341473 | orchestrator | TASK [nova-cell : Copying over 
vendordata file to containers] ****************** 2025-06-22 20:19:02.341480 | orchestrator | Sunday 22 June 2025 20:15:50 +0000 (0:00:01.571) 0:05:26.226 *********** 2025-06-22 20:19:02.341486 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-06-22 20:19:02.341493 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-06-22 20:19:02.341500 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:19:02.341506 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-06-22 20:19:02.341513 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-06-22 20:19:02.341520 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:19:02.341526 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-06-22 20:19:02.341533 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-06-22 20:19:02.341540 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:19:02.341546 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-06-22 20:19:02.341553 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-06-22 20:19:02.341560 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:19:02.341567 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-06-22 20:19:02.341573 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-06-22 20:19:02.341580 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:02.341587 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-06-22 20:19:02.341593 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-06-22 20:19:02.341600 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:19:02.341607 | orchestrator | 2025-06-22 20:19:02.341613 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2025-06-22 20:19:02.341620 | orchestrator | Sunday 22 June 2025 20:15:50 +0000 (0:00:00.614) 0:05:26.840 *********** 2025-06-22 20:19:02.341633 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-22 20:19:02.341646 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 
'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-22 20:19:02.341657 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-06-22 20:19:02.341665 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-22 20:19:02.341672 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-22 20:19:02.341679 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-22 20:19:02.341693 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-06-22 20:19:02.341701 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-22 20:19:02.341708 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-06-22 20:19:02.341718 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:19:02.341726 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:19:02.341733 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-22 20:19:02.341743 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-22 20:19:02.341755 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-06-22 20:19:02.341762 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-06-22 20:19:02.341769 | orchestrator | 2025-06-22 20:19:02.341776 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-06-22 20:19:02.341783 | orchestrator | Sunday 22 June 2025 20:15:53 +0000 (0:00:02.846) 0:05:29.687 *********** 2025-06-22 20:19:02.341790 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:19:02.341797 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:19:02.341803 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:19:02.341813 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:19:02.341821 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:02.341827 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:19:02.341834 | orchestrator | 2025-06-22 20:19:02.341841 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-06-22 20:19:02.341848 | orchestrator | Sunday 22 June 2025 20:15:54 +0000 (0:00:00.576) 0:05:30.263 *********** 2025-06-22 20:19:02.341854 | orchestrator | 2025-06-22 20:19:02.341861 | orchestrator | TASK [nova-cell : Flush handlers] 
********************************************** 2025-06-22 20:19:02.341868 | orchestrator | Sunday 22 June 2025 20:15:54 +0000 (0:00:00.304) 0:05:30.568 *********** 2025-06-22 20:19:02.341875 | orchestrator | 2025-06-22 20:19:02.341881 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-06-22 20:19:02.341888 | orchestrator | Sunday 22 June 2025 20:15:54 +0000 (0:00:00.128) 0:05:30.696 *********** 2025-06-22 20:19:02.341895 | orchestrator | 2025-06-22 20:19:02.341902 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-06-22 20:19:02.341908 | orchestrator | Sunday 22 June 2025 20:15:54 +0000 (0:00:00.131) 0:05:30.828 *********** 2025-06-22 20:19:02.341915 | orchestrator | 2025-06-22 20:19:02.341922 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-06-22 20:19:02.341929 | orchestrator | Sunday 22 June 2025 20:15:55 +0000 (0:00:00.129) 0:05:30.957 *********** 2025-06-22 20:19:02.341941 | orchestrator | 2025-06-22 20:19:02.341947 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-06-22 20:19:02.341954 | orchestrator | Sunday 22 June 2025 20:15:55 +0000 (0:00:00.132) 0:05:31.090 *********** 2025-06-22 20:19:02.341961 | orchestrator | 2025-06-22 20:19:02.341968 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2025-06-22 20:19:02.341974 | orchestrator | Sunday 22 June 2025 20:15:55 +0000 (0:00:00.141) 0:05:31.232 *********** 2025-06-22 20:19:02.341981 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:19:02.341988 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:19:02.341995 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:19:02.342001 | orchestrator | 2025-06-22 20:19:02.342008 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2025-06-22 20:19:02.342037 | orchestrator | Sunday 22 June 2025 20:16:07 +0000 (0:00:11.861) 0:05:43.094 *********** 2025-06-22 20:19:02.342046 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:19:02.342053 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:19:02.342059 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:19:02.342066 | orchestrator | 2025-06-22 20:19:02.342073 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2025-06-22 20:19:02.342080 | orchestrator | Sunday 22 June 2025 20:16:18 +0000 (0:00:11.227) 0:05:54.321 *********** 2025-06-22 20:19:02.342087 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:19:02.342093 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:19:02.342100 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:19:02.342107 | orchestrator | 2025-06-22 20:19:02.342117 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2025-06-22 20:19:02.342124 | orchestrator | Sunday 22 June 2025 20:16:43 +0000 (0:00:25.256) 0:06:19.578 *********** 2025-06-22 20:19:02.342131 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:19:02.342138 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:19:02.342144 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:19:02.342151 | orchestrator | 2025-06-22 20:19:02.342158 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2025-06-22 20:19:02.342165 | orchestrator | Sunday 22 June 2025 20:17:25 +0000 (0:00:41.517) 
0:07:01.095 *********** 2025-06-22 20:19:02.342171 | orchestrator | FAILED - RETRYING: [testbed-node-3]: Checking libvirt container is ready (10 retries left). 2025-06-22 20:19:02.342178 | orchestrator | FAILED - RETRYING: [testbed-node-4]: Checking libvirt container is ready (10 retries left). 2025-06-22 20:19:02.342185 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:19:02.342192 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:19:02.342199 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:19:02.342205 | orchestrator | 2025-06-22 20:19:02.342212 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2025-06-22 20:19:02.342219 | orchestrator | Sunday 22 June 2025 20:17:31 +0000 (0:00:06.131) 0:07:07.227 *********** 2025-06-22 20:19:02.342226 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:19:02.342232 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:19:02.342239 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:19:02.342246 | orchestrator | 2025-06-22 20:19:02.342253 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2025-06-22 20:19:02.342259 | orchestrator | Sunday 22 June 2025 20:17:32 +0000 (0:00:00.750) 0:07:07.978 *********** 2025-06-22 20:19:02.342266 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:19:02.342273 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:19:02.342279 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:19:02.342286 | orchestrator | 2025-06-22 20:19:02.342293 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2025-06-22 20:19:02.342300 | orchestrator | Sunday 22 June 2025 20:17:55 +0000 (0:00:22.952) 0:07:30.930 *********** 2025-06-22 20:19:02.342306 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:19:02.342313 | orchestrator | 2025-06-22 20:19:02.342320 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2025-06-22 20:19:02.342331 | orchestrator | Sunday 22 June 2025 20:17:55 +0000 (0:00:00.141) 0:07:31.072 *********** 2025-06-22 20:19:02.342338 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:02.342345 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:19:02.342352 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:19:02.342358 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:19:02.342365 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:19:02.342372 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 
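For reference, the registration wait and the cell discovery recorded in the task output below reduce to a couple of nova-manage cell_v2 calls run against the first cell controller. The sketch assumes the container name nova_conductor shown in this log and a plain "docker exec" wrapper (kolla-ansible drives this through its own Ansible modules rather than these exact shell commands); the nova-manage subcommands themselves are upstream Nova CLI.

# List the existing cells with their transport and database URLs
# (cf. TASK [nova-cell : Get a list of existing cells] below):
docker exec nova_conductor nova-manage cell_v2 list_cells --verbose

# Map the freshly registered nova-compute services into the cell
# (cf. TASK [nova-cell : Discover nova hosts] below):
docker exec nova_conductor nova-manage cell_v2 discover_hosts --by-service

Once discover_hosts has mapped the compute hosts, the "Refresh cell cache in nova scheduler" play further down only has to make the schedulers re-read the cell mapping; no additional database changes are involved.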
2025-06-22 20:19:02.342379 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-06-22 20:19:02.342386 | orchestrator | 2025-06-22 20:19:02.342392 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2025-06-22 20:19:02.342403 | orchestrator | Sunday 22 June 2025 20:18:16 +0000 (0:00:21.589) 0:07:52.662 *********** 2025-06-22 20:19:02.342410 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:19:02.342417 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:19:02.342423 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:19:02.342430 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:19:02.342437 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:19:02.342444 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:02.342450 | orchestrator | 2025-06-22 20:19:02.342493 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2025-06-22 20:19:02.342500 | orchestrator | Sunday 22 June 2025 20:18:25 +0000 (0:00:08.688) 0:08:01.350 *********** 2025-06-22 20:19:02.342507 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:19:02.342514 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:19:02.342521 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:02.342527 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:19:02.342534 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:19:02.342541 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-3 2025-06-22 20:19:02.342548 | orchestrator | 2025-06-22 20:19:02.342554 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-06-22 20:19:02.342561 | orchestrator | Sunday 22 June 2025 20:18:29 +0000 (0:00:04.217) 0:08:05.568 *********** 2025-06-22 20:19:02.342568 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-06-22 20:19:02.342575 | orchestrator | 2025-06-22 20:19:02.342581 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-06-22 20:19:02.342588 | orchestrator | Sunday 22 June 2025 20:18:41 +0000 (0:00:11.902) 0:08:17.471 *********** 2025-06-22 20:19:02.342595 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-06-22 20:19:02.342602 | orchestrator | 2025-06-22 20:19:02.342608 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2025-06-22 20:19:02.342615 | orchestrator | Sunday 22 June 2025 20:18:42 +0000 (0:00:01.174) 0:08:18.646 *********** 2025-06-22 20:19:02.342622 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:19:02.342629 | orchestrator | 2025-06-22 20:19:02.342635 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2025-06-22 20:19:02.342642 | orchestrator | Sunday 22 June 2025 20:18:43 +0000 (0:00:01.206) 0:08:19.853 *********** 2025-06-22 20:19:02.342649 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-06-22 20:19:02.342656 | orchestrator | 2025-06-22 20:19:02.342662 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2025-06-22 20:19:02.342669 | orchestrator | Sunday 22 June 2025 20:18:54 +0000 (0:00:10.283) 0:08:30.136 *********** 2025-06-22 20:19:02.342676 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:19:02.342683 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:19:02.342690 | orchestrator | ok: 
[testbed-node-5] 2025-06-22 20:19:02.342696 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:19:02.342707 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:19:02.342714 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:19:02.342721 | orchestrator | 2025-06-22 20:19:02.342733 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2025-06-22 20:19:02.342739 | orchestrator | 2025-06-22 20:19:02.342746 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2025-06-22 20:19:02.342753 | orchestrator | Sunday 22 June 2025 20:18:55 +0000 (0:00:01.491) 0:08:31.627 *********** 2025-06-22 20:19:02.342760 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:19:02.342767 | orchestrator | changed: [testbed-node-1] 2025-06-22 20:19:02.342773 | orchestrator | changed: [testbed-node-2] 2025-06-22 20:19:02.342780 | orchestrator | 2025-06-22 20:19:02.342787 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2025-06-22 20:19:02.342794 | orchestrator | 2025-06-22 20:19:02.342801 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2025-06-22 20:19:02.342807 | orchestrator | Sunday 22 June 2025 20:18:56 +0000 (0:00:00.946) 0:08:32.574 *********** 2025-06-22 20:19:02.342814 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:19:02.342821 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:02.342827 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:19:02.342834 | orchestrator | 2025-06-22 20:19:02.342841 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2025-06-22 20:19:02.342848 | orchestrator | 2025-06-22 20:19:02.342855 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2025-06-22 20:19:02.342861 | orchestrator | Sunday 22 June 2025 20:18:57 +0000 (0:00:00.456) 0:08:33.030 *********** 2025-06-22 20:19:02.342868 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2025-06-22 20:19:02.342875 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-06-22 20:19:02.342882 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-06-22 20:19:02.342889 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2025-06-22 20:19:02.342895 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2025-06-22 20:19:02.342902 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2025-06-22 20:19:02.342909 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:19:02.342916 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2025-06-22 20:19:02.342922 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-06-22 20:19:02.342929 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-06-22 20:19:02.342936 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2025-06-22 20:19:02.342943 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2025-06-22 20:19:02.342950 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2025-06-22 20:19:02.342956 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:19:02.342963 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2025-06-22 20:19:02.342970 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-06-22 20:19:02.342977 | 
orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-06-22 20:19:02.342983 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2025-06-22 20:19:02.342992 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2025-06-22 20:19:02.342999 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2025-06-22 20:19:02.343005 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:19:02.343011 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2025-06-22 20:19:02.343018 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-06-22 20:19:02.343024 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-06-22 20:19:02.343030 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2025-06-22 20:19:02.343037 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2025-06-22 20:19:02.343043 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2025-06-22 20:19:02.343049 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:19:02.343060 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2025-06-22 20:19:02.343067 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-06-22 20:19:02.343073 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-06-22 20:19:02.343079 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2025-06-22 20:19:02.343086 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2025-06-22 20:19:02.343092 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2025-06-22 20:19:02.343098 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:02.343105 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2025-06-22 20:19:02.343111 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-06-22 20:19:02.343117 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-06-22 20:19:02.343123 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2025-06-22 20:19:02.343130 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2025-06-22 20:19:02.343136 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2025-06-22 20:19:02.343142 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:19:02.343149 | orchestrator | 2025-06-22 20:19:02.343155 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2025-06-22 20:19:02.343161 | orchestrator | 2025-06-22 20:19:02.343167 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2025-06-22 20:19:02.343174 | orchestrator | Sunday 22 June 2025 20:18:58 +0000 (0:00:01.093) 0:08:34.124 *********** 2025-06-22 20:19:02.343180 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2025-06-22 20:19:02.343186 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2025-06-22 20:19:02.343192 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:19:02.343199 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2025-06-22 20:19:02.343208 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2025-06-22 20:19:02.343214 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:02.343220 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2025-06-22 20:19:02.343227 | orchestrator | 
skipping: [testbed-node-2] => (item=nova-api)  2025-06-22 20:19:02.343233 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:19:02.343239 | orchestrator | 2025-06-22 20:19:02.343245 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2025-06-22 20:19:02.343252 | orchestrator | 2025-06-22 20:19:02.343258 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2025-06-22 20:19:02.343264 | orchestrator | Sunday 22 June 2025 20:18:58 +0000 (0:00:00.582) 0:08:34.706 *********** 2025-06-22 20:19:02.343271 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:19:02.343277 | orchestrator | 2025-06-22 20:19:02.343283 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2025-06-22 20:19:02.343290 | orchestrator | 2025-06-22 20:19:02.343296 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2025-06-22 20:19:02.343302 | orchestrator | Sunday 22 June 2025 20:18:59 +0000 (0:00:00.597) 0:08:35.304 *********** 2025-06-22 20:19:02.343309 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:19:02.343315 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:19:02.343321 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:19:02.343327 | orchestrator | 2025-06-22 20:19:02.343334 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 20:19:02.343340 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 20:19:02.343346 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0 2025-06-22 20:19:02.343353 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-06-22 20:19:02.343364 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-06-22 20:19:02.343371 | orchestrator | testbed-node-3 : ok=43  changed=27  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-06-22 20:19:02.343377 | orchestrator | testbed-node-4 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2025-06-22 20:19:02.343383 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2025-06-22 20:19:02.343389 | orchestrator | 2025-06-22 20:19:02.343396 | orchestrator | 2025-06-22 20:19:02.343402 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 20:19:02.343412 | orchestrator | Sunday 22 June 2025 20:18:59 +0000 (0:00:00.387) 0:08:35.691 *********** 2025-06-22 20:19:02.343418 | orchestrator | =============================================================================== 2025-06-22 20:19:02.343424 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 41.52s 2025-06-22 20:19:02.343431 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 31.71s 2025-06-22 20:19:02.343437 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 25.26s 2025-06-22 20:19:02.343443 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 25.09s 2025-06-22 20:19:02.343450 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 22.95s 2025-06-22 20:19:02.343471 | orchestrator | nova-cell : 
Waiting for nova-compute services to register themselves --- 21.59s 2025-06-22 20:19:02.343477 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 20.74s 2025-06-22 20:19:02.343483 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 17.85s 2025-06-22 20:19:02.343489 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 14.88s 2025-06-22 20:19:02.343496 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.06s 2025-06-22 20:19:02.343502 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.90s 2025-06-22 20:19:02.343508 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 11.86s 2025-06-22 20:19:02.343514 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.33s 2025-06-22 20:19:02.343520 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 11.23s 2025-06-22 20:19:02.343527 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 10.28s 2025-06-22 20:19:02.343533 | orchestrator | nova-cell : Create cell ------------------------------------------------ 10.08s 2025-06-22 20:19:02.343539 | orchestrator | nova : Restart nova-api container --------------------------------------- 9.85s 2025-06-22 20:19:02.343545 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 9.81s 2025-06-22 20:19:02.343552 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 8.69s 2025-06-22 20:19:02.343558 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 7.55s 2025-06-22 20:19:02.343564 | orchestrator | 2025-06-22 20:19:02 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-22 20:19:05.377769 | orchestrator | 2025-06-22 20:19:05 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-22 20:19:08.414588 | orchestrator | 2025-06-22 20:19:08 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-22 20:19:11.456934 | orchestrator | 2025-06-22 20:19:11 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-22 20:19:14.503051 | orchestrator | 2025-06-22 20:19:14 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-22 20:19:17.543691 | orchestrator | 2025-06-22 20:19:17 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-22 20:19:20.585597 | orchestrator | 2025-06-22 20:19:20 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-22 20:19:23.627908 | orchestrator | 2025-06-22 20:19:23 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-22 20:19:26.672289 | orchestrator | 2025-06-22 20:19:26 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-22 20:19:29.715091 | orchestrator | 2025-06-22 20:19:29 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-22 20:19:32.756010 | orchestrator | 2025-06-22 20:19:32 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-22 20:19:35.807826 | orchestrator | 2025-06-22 20:19:35 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-22 20:19:38.848349 | orchestrator | 2025-06-22 20:19:38 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-22 20:19:41.888494 | orchestrator | 2025-06-22 20:19:41 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-22 20:19:44.929868 | orchestrator 
| 2025-06-22 20:19:44 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-22 20:19:47.976863 | orchestrator | 2025-06-22 20:19:47 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-22 20:19:51.022786 | orchestrator | 2025-06-22 20:19:51 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-22 20:19:54.059653 | orchestrator | 2025-06-22 20:19:54 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-22 20:19:57.097829 | orchestrator | 2025-06-22 20:19:57 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-22 20:20:00.141504 | orchestrator | 2025-06-22 20:20:00 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-06-22 20:20:03.187189 | orchestrator | 2025-06-22 20:20:03.438396 | orchestrator | 2025-06-22 20:20:03.442081 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Sun Jun 22 20:20:03 UTC 2025 2025-06-22 20:20:03.442115 | orchestrator | 2025-06-22 20:20:03.762367 | orchestrator | ok: Runtime: 0:35:48.664218 2025-06-22 20:20:03.976109 | 2025-06-22 20:20:03.976215 | TASK [Bootstrap services] 2025-06-22 20:20:04.733218 | orchestrator | 2025-06-22 20:20:04.733509 | orchestrator | # BOOTSTRAP 2025-06-22 20:20:04.733557 | orchestrator | 2025-06-22 20:20:04.733582 | orchestrator | + set -e 2025-06-22 20:20:04.733605 | orchestrator | + echo 2025-06-22 20:20:04.733628 | orchestrator | + echo '# BOOTSTRAP' 2025-06-22 20:20:04.733657 | orchestrator | + echo 2025-06-22 20:20:04.733722 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2025-06-22 20:20:04.741979 | orchestrator | + set -e 2025-06-22 20:20:04.742052 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2025-06-22 20:20:08.705565 | orchestrator | 2025-06-22 20:20:08 | INFO  | It takes a moment until task cc51a51a-ee55-4c5c-85ca-bcfe4060e37a (flavor-manager) has been started and output is visible here. 
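Note: the # BOOTSTRAP step above hands control to /opt/configuration/scripts/bootstrap-services.sh, which in turn invokes the numbered bootstrap scripts (300-openstack.sh here, and 301-openstack-octavia-amhpora-image.sh further below). The actual contents of the wrapper are not part of this log; the following is only a minimal sketch, assuming it simply runs the numbered scripts in lexical order and aborts on the first failure (matching the 'set -e' seen in the trace):

    #!/usr/bin/env bash
    # Illustrative sketch only - not the real bootstrap-services.sh.
    # Assumption: every numbered bootstrap script is executed in order.
    set -e
    for script in /opt/configuration/scripts/bootstrap/[0-9]*.sh; do
        sh -c "$script"
    done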
2025-06-22 20:20:12.471131 | orchestrator | 2025-06-22 20:20:12 | INFO  | Flavor SCS-1V-4 created 2025-06-22 20:20:12.808396 | orchestrator | 2025-06-22 20:20:12 | INFO  | Flavor SCS-2V-8 created 2025-06-22 20:20:13.217860 | orchestrator | 2025-06-22 20:20:13 | INFO  | Flavor SCS-4V-16 created 2025-06-22 20:20:13.368214 | orchestrator | 2025-06-22 20:20:13 | INFO  | Flavor SCS-8V-32 created 2025-06-22 20:20:13.493274 | orchestrator | 2025-06-22 20:20:13 | INFO  | Flavor SCS-1V-2 created 2025-06-22 20:20:13.635631 | orchestrator | 2025-06-22 20:20:13 | INFO  | Flavor SCS-2V-4 created 2025-06-22 20:20:13.766975 | orchestrator | 2025-06-22 20:20:13 | INFO  | Flavor SCS-4V-8 created 2025-06-22 20:20:13.892689 | orchestrator | 2025-06-22 20:20:13 | INFO  | Flavor SCS-8V-16 created 2025-06-22 20:20:14.056144 | orchestrator | 2025-06-22 20:20:14 | INFO  | Flavor SCS-16V-32 created 2025-06-22 20:20:14.166130 | orchestrator | 2025-06-22 20:20:14 | INFO  | Flavor SCS-1V-8 created 2025-06-22 20:20:14.295731 | orchestrator | 2025-06-22 20:20:14 | INFO  | Flavor SCS-2V-16 created 2025-06-22 20:20:14.422125 | orchestrator | 2025-06-22 20:20:14 | INFO  | Flavor SCS-4V-32 created 2025-06-22 20:20:14.560592 | orchestrator | 2025-06-22 20:20:14 | INFO  | Flavor SCS-1L-1 created 2025-06-22 20:20:14.671851 | orchestrator | 2025-06-22 20:20:14 | INFO  | Flavor SCS-2V-4-20s created 2025-06-22 20:20:14.805208 | orchestrator | 2025-06-22 20:20:14 | INFO  | Flavor SCS-4V-16-100s created 2025-06-22 20:20:14.957858 | orchestrator | 2025-06-22 20:20:14 | INFO  | Flavor SCS-1V-4-10 created 2025-06-22 20:20:15.076208 | orchestrator | 2025-06-22 20:20:15 | INFO  | Flavor SCS-2V-8-20 created 2025-06-22 20:20:15.225721 | orchestrator | 2025-06-22 20:20:15 | INFO  | Flavor SCS-4V-16-50 created 2025-06-22 20:20:15.361724 | orchestrator | 2025-06-22 20:20:15 | INFO  | Flavor SCS-8V-32-100 created 2025-06-22 20:20:15.472491 | orchestrator | 2025-06-22 20:20:15 | INFO  | Flavor SCS-1V-2-5 created 2025-06-22 20:20:15.583460 | orchestrator | 2025-06-22 20:20:15 | INFO  | Flavor SCS-2V-4-10 created 2025-06-22 20:20:15.708600 | orchestrator | 2025-06-22 20:20:15 | INFO  | Flavor SCS-4V-8-20 created 2025-06-22 20:20:15.826293 | orchestrator | 2025-06-22 20:20:15 | INFO  | Flavor SCS-8V-16-50 created 2025-06-22 20:20:15.960213 | orchestrator | 2025-06-22 20:20:15 | INFO  | Flavor SCS-16V-32-100 created 2025-06-22 20:20:16.105196 | orchestrator | 2025-06-22 20:20:16 | INFO  | Flavor SCS-1V-8-20 created 2025-06-22 20:20:16.258078 | orchestrator | 2025-06-22 20:20:16 | INFO  | Flavor SCS-2V-16-50 created 2025-06-22 20:20:16.401008 | orchestrator | 2025-06-22 20:20:16 | INFO  | Flavor SCS-4V-32-100 created 2025-06-22 20:20:16.542318 | orchestrator | 2025-06-22 20:20:16 | INFO  | Flavor SCS-1L-1-5 created 2025-06-22 20:20:18.737141 | orchestrator | 2025-06-22 20:20:18 | INFO  | Trying to run play bootstrap-basic in environment openstack 2025-06-22 20:20:18.742222 | orchestrator | Registering Redlock._acquired_script 2025-06-22 20:20:18.742288 | orchestrator | Registering Redlock._extend_script 2025-06-22 20:20:18.742334 | orchestrator | Registering Redlock._release_script 2025-06-22 20:20:18.802633 | orchestrator | 2025-06-22 20:20:18 | INFO  | Task dd07298a-7e15-4303-b677-a375835e7f39 (bootstrap-basic) was prepared for execution. 2025-06-22 20:20:18.802722 | orchestrator | 2025-06-22 20:20:18 | INFO  | It takes a moment until task dd07298a-7e15-4303-b677-a375835e7f39 (bootstrap-basic) has been started and output is visible here. 
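Note: the flavors created above follow the SCS naming scheme: SCS-<n>V-<m> means n vCPUs and m GiB of RAM, an optional third field gives the root disk size in GB (a trailing 's' marks an SSD-backed disk, as in SCS-2V-4-20s), and 'L' in place of 'V' denotes a low-performance vCPU, as in SCS-1L-1. The flavor-manager task drives this from its own flavor definitions; purely as an illustration, a flavor of this shape could also be created by hand with the OpenStack CLI (values chosen to match the name, not taken from this log):

    # Hypothetical manual equivalent for one flavor; RAM is given in MiB.
    openstack flavor create \
        --vcpus 2 \
        --ram 4096 \
        --disk 20 \
        SCS-2V-4-20s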
2025-06-22 20:20:22.930717 | orchestrator | 2025-06-22 20:20:22.934315 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2025-06-22 20:20:22.935873 | orchestrator | 2025-06-22 20:20:22.938752 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-06-22 20:20:22.939376 | orchestrator | Sunday 22 June 2025 20:20:22 +0000 (0:00:00.080) 0:00:00.080 *********** 2025-06-22 20:20:24.760916 | orchestrator | ok: [localhost] 2025-06-22 20:20:24.761053 | orchestrator | 2025-06-22 20:20:24.761081 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2025-06-22 20:20:24.761103 | orchestrator | Sunday 22 June 2025 20:20:24 +0000 (0:00:01.832) 0:00:01.912 *********** 2025-06-22 20:20:32.641922 | orchestrator | ok: [localhost] 2025-06-22 20:20:32.642694 | orchestrator | 2025-06-22 20:20:32.643345 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2025-06-22 20:20:32.644617 | orchestrator | Sunday 22 June 2025 20:20:32 +0000 (0:00:07.883) 0:00:09.796 *********** 2025-06-22 20:20:39.901534 | orchestrator | changed: [localhost] 2025-06-22 20:20:39.902481 | orchestrator | 2025-06-22 20:20:39.902997 | orchestrator | TASK [Get volume type local] *************************************************** 2025-06-22 20:20:39.903900 | orchestrator | Sunday 22 June 2025 20:20:39 +0000 (0:00:07.258) 0:00:17.054 *********** 2025-06-22 20:20:46.443243 | orchestrator | ok: [localhost] 2025-06-22 20:20:46.443485 | orchestrator | 2025-06-22 20:20:46.443634 | orchestrator | TASK [Create volume type local] ************************************************ 2025-06-22 20:20:46.444481 | orchestrator | Sunday 22 June 2025 20:20:46 +0000 (0:00:06.542) 0:00:23.597 *********** 2025-06-22 20:20:53.427344 | orchestrator | changed: [localhost] 2025-06-22 20:20:53.429929 | orchestrator | 2025-06-22 20:20:53.431070 | orchestrator | TASK [Create public network] *************************************************** 2025-06-22 20:20:53.432918 | orchestrator | Sunday 22 June 2025 20:20:53 +0000 (0:00:06.982) 0:00:30.580 *********** 2025-06-22 20:21:00.509703 | orchestrator | changed: [localhost] 2025-06-22 20:21:00.509943 | orchestrator | 2025-06-22 20:21:00.510761 | orchestrator | TASK [Set public network to default] ******************************************* 2025-06-22 20:21:00.511425 | orchestrator | Sunday 22 June 2025 20:21:00 +0000 (0:00:07.084) 0:00:37.665 *********** 2025-06-22 20:21:06.610006 | orchestrator | changed: [localhost] 2025-06-22 20:21:06.611536 | orchestrator | 2025-06-22 20:21:06.612797 | orchestrator | TASK [Create public subnet] **************************************************** 2025-06-22 20:21:06.613444 | orchestrator | Sunday 22 June 2025 20:21:06 +0000 (0:00:06.099) 0:00:43.764 *********** 2025-06-22 20:21:10.975453 | orchestrator | changed: [localhost] 2025-06-22 20:21:10.978203 | orchestrator | 2025-06-22 20:21:10.978280 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2025-06-22 20:21:10.978924 | orchestrator | Sunday 22 June 2025 20:21:10 +0000 (0:00:04.366) 0:00:48.130 *********** 2025-06-22 20:21:15.431512 | orchestrator | changed: [localhost] 2025-06-22 20:21:15.431878 | orchestrator | 2025-06-22 20:21:15.432905 | orchestrator | TASK [Create manager role] ***************************************************** 2025-06-22 20:21:15.433247 | orchestrator | Sunday 22 June 2025 20:21:15 
+0000 (0:00:04.456) 0:00:52.586 *********** 2025-06-22 20:21:18.911556 | orchestrator | ok: [localhost] 2025-06-22 20:21:18.912244 | orchestrator | 2025-06-22 20:21:18.913424 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 20:21:18.913652 | orchestrator | 2025-06-22 20:21:18 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-22 20:21:18.913896 | orchestrator | 2025-06-22 20:21:18 | INFO  | Please wait and do not abort execution. 2025-06-22 20:21:18.915735 | orchestrator | localhost : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 20:21:18.916471 | orchestrator | 2025-06-22 20:21:18.917173 | orchestrator | 2025-06-22 20:21:18.918159 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 20:21:18.918588 | orchestrator | Sunday 22 June 2025 20:21:18 +0000 (0:00:03.478) 0:00:56.065 *********** 2025-06-22 20:21:18.919308 | orchestrator | =============================================================================== 2025-06-22 20:21:18.919754 | orchestrator | Get volume type LUKS ---------------------------------------------------- 7.88s 2025-06-22 20:21:18.920523 | orchestrator | Create volume type LUKS ------------------------------------------------- 7.26s 2025-06-22 20:21:18.921213 | orchestrator | Create public network --------------------------------------------------- 7.08s 2025-06-22 20:21:18.921699 | orchestrator | Create volume type local ------------------------------------------------ 6.98s 2025-06-22 20:21:18.922200 | orchestrator | Get volume type local --------------------------------------------------- 6.54s 2025-06-22 20:21:18.923610 | orchestrator | Set public network to default ------------------------------------------- 6.10s 2025-06-22 20:21:18.924138 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 4.46s 2025-06-22 20:21:18.924592 | orchestrator | Create public subnet ---------------------------------------------------- 4.37s 2025-06-22 20:21:18.925235 | orchestrator | Create manager role ----------------------------------------------------- 3.48s 2025-06-22 20:21:18.925810 | orchestrator | Gathering Facts --------------------------------------------------------- 1.83s 2025-06-22 20:21:21.210825 | orchestrator | 2025-06-22 20:21:21 | INFO  | It takes a moment until task 9cd7f4a0-8444-4e29-9334-ff81de10361f (image-manager) has been started and output is visible here. 2025-06-22 20:21:24.599550 | orchestrator | 2025-06-22 20:21:24 | INFO  | Processing image 'Cirros 0.6.2' 2025-06-22 20:21:24.825628 | orchestrator | 2025-06-22 20:21:24 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2025-06-22 20:21:24.826432 | orchestrator | 2025-06-22 20:21:24 | INFO  | Importing image Cirros 0.6.2 2025-06-22 20:21:24.826985 | orchestrator | 2025-06-22 20:21:24 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2025-06-22 20:21:26.529840 | orchestrator | 2025-06-22 20:21:26 | INFO  | Waiting for image to leave queued state... 2025-06-22 20:21:28.587547 | orchestrator | 2025-06-22 20:21:28 | INFO  | Waiting for import to complete... 
2025-06-22 20:21:38.713559 | orchestrator | 2025-06-22 20:21:38 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images 2025-06-22 20:21:38.927745 | orchestrator | 2025-06-22 20:21:38 | INFO  | Checking parameters of 'Cirros 0.6.2' 2025-06-22 20:21:38.928692 | orchestrator | 2025-06-22 20:21:38 | INFO  | Setting internal_version = 0.6.2 2025-06-22 20:21:38.929531 | orchestrator | 2025-06-22 20:21:38 | INFO  | Setting image_original_user = cirros 2025-06-22 20:21:38.930794 | orchestrator | 2025-06-22 20:21:38 | INFO  | Adding tag os:cirros 2025-06-22 20:21:39.204592 | orchestrator | 2025-06-22 20:21:39 | INFO  | Setting property architecture: x86_64 2025-06-22 20:21:39.420096 | orchestrator | 2025-06-22 20:21:39 | INFO  | Setting property hw_disk_bus: scsi 2025-06-22 20:21:39.636948 | orchestrator | 2025-06-22 20:21:39 | INFO  | Setting property hw_rng_model: virtio 2025-06-22 20:21:39.883239 | orchestrator | 2025-06-22 20:21:39 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-06-22 20:21:40.111742 | orchestrator | 2025-06-22 20:21:40 | INFO  | Setting property hw_watchdog_action: reset 2025-06-22 20:21:40.309159 | orchestrator | 2025-06-22 20:21:40 | INFO  | Setting property hypervisor_type: qemu 2025-06-22 20:21:40.493703 | orchestrator | 2025-06-22 20:21:40 | INFO  | Setting property os_distro: cirros 2025-06-22 20:21:40.702723 | orchestrator | 2025-06-22 20:21:40 | INFO  | Setting property replace_frequency: never 2025-06-22 20:21:40.915717 | orchestrator | 2025-06-22 20:21:40 | INFO  | Setting property uuid_validity: none 2025-06-22 20:21:41.151497 | orchestrator | 2025-06-22 20:21:41 | INFO  | Setting property provided_until: none 2025-06-22 20:21:41.386551 | orchestrator | 2025-06-22 20:21:41 | INFO  | Setting property image_description: Cirros 2025-06-22 20:21:41.601573 | orchestrator | 2025-06-22 20:21:41 | INFO  | Setting property image_name: Cirros 2025-06-22 20:21:41.811823 | orchestrator | 2025-06-22 20:21:41 | INFO  | Setting property internal_version: 0.6.2 2025-06-22 20:21:42.018632 | orchestrator | 2025-06-22 20:21:42 | INFO  | Setting property image_original_user: cirros 2025-06-22 20:21:42.209111 | orchestrator | 2025-06-22 20:21:42 | INFO  | Setting property os_version: 0.6.2 2025-06-22 20:21:42.395917 | orchestrator | 2025-06-22 20:21:42 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2025-06-22 20:21:42.607828 | orchestrator | 2025-06-22 20:21:42 | INFO  | Setting property image_build_date: 2023-05-30 2025-06-22 20:21:42.817285 | orchestrator | 2025-06-22 20:21:42 | INFO  | Checking status of 'Cirros 0.6.2' 2025-06-22 20:21:42.818122 | orchestrator | 2025-06-22 20:21:42 | INFO  | Checking visibility of 'Cirros 0.6.2' 2025-06-22 20:21:42.819262 | orchestrator | 2025-06-22 20:21:42 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public' 2025-06-22 20:21:43.054834 | orchestrator | 2025-06-22 20:21:43 | INFO  | Processing image 'Cirros 0.6.3' 2025-06-22 20:21:43.256764 | orchestrator | 2025-06-22 20:21:43 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302 2025-06-22 20:21:43.257455 | orchestrator | 2025-06-22 20:21:43 | INFO  | Importing image Cirros 0.6.3 2025-06-22 20:21:43.258869 | orchestrator | 2025-06-22 20:21:43 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2025-06-22 20:21:43.587695 | orchestrator | 2025-06-22 
20:21:43 | INFO  | Waiting for image to leave queued state... 2025-06-22 20:21:45.825475 | orchestrator | 2025-06-22 20:21:45 | INFO  | Waiting for import to complete... 2025-06-22 20:21:56.145693 | orchestrator | 2025-06-22 20:21:56 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images 2025-06-22 20:21:56.417967 | orchestrator | 2025-06-22 20:21:56 | INFO  | Checking parameters of 'Cirros 0.6.3' 2025-06-22 20:21:56.418505 | orchestrator | 2025-06-22 20:21:56 | INFO  | Setting internal_version = 0.6.3 2025-06-22 20:21:56.419992 | orchestrator | 2025-06-22 20:21:56 | INFO  | Setting image_original_user = cirros 2025-06-22 20:21:56.420762 | orchestrator | 2025-06-22 20:21:56 | INFO  | Adding tag os:cirros 2025-06-22 20:21:56.701096 | orchestrator | 2025-06-22 20:21:56 | INFO  | Setting property architecture: x86_64 2025-06-22 20:21:56.898647 | orchestrator | 2025-06-22 20:21:56 | INFO  | Setting property hw_disk_bus: scsi 2025-06-22 20:21:57.197666 | orchestrator | 2025-06-22 20:21:57 | INFO  | Setting property hw_rng_model: virtio 2025-06-22 20:21:57.430864 | orchestrator | 2025-06-22 20:21:57 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-06-22 20:21:57.629489 | orchestrator | 2025-06-22 20:21:57 | INFO  | Setting property hw_watchdog_action: reset 2025-06-22 20:21:57.852059 | orchestrator | 2025-06-22 20:21:57 | INFO  | Setting property hypervisor_type: qemu 2025-06-22 20:21:58.027537 | orchestrator | 2025-06-22 20:21:58 | INFO  | Setting property os_distro: cirros 2025-06-22 20:21:58.236699 | orchestrator | 2025-06-22 20:21:58 | INFO  | Setting property replace_frequency: never 2025-06-22 20:21:58.429620 | orchestrator | 2025-06-22 20:21:58 | INFO  | Setting property uuid_validity: none 2025-06-22 20:21:58.605609 | orchestrator | 2025-06-22 20:21:58 | INFO  | Setting property provided_until: none 2025-06-22 20:21:58.792542 | orchestrator | 2025-06-22 20:21:58 | INFO  | Setting property image_description: Cirros 2025-06-22 20:21:59.028493 | orchestrator | 2025-06-22 20:21:59 | INFO  | Setting property image_name: Cirros 2025-06-22 20:21:59.233568 | orchestrator | 2025-06-22 20:21:59 | INFO  | Setting property internal_version: 0.6.3 2025-06-22 20:21:59.404945 | orchestrator | 2025-06-22 20:21:59 | INFO  | Setting property image_original_user: cirros 2025-06-22 20:21:59.598450 | orchestrator | 2025-06-22 20:21:59 | INFO  | Setting property os_version: 0.6.3 2025-06-22 20:21:59.795095 | orchestrator | 2025-06-22 20:21:59 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2025-06-22 20:22:00.000356 | orchestrator | 2025-06-22 20:21:59 | INFO  | Setting property image_build_date: 2024-09-26 2025-06-22 20:22:00.241454 | orchestrator | 2025-06-22 20:22:00 | INFO  | Checking status of 'Cirros 0.6.3' 2025-06-22 20:22:00.242592 | orchestrator | 2025-06-22 20:22:00 | INFO  | Checking visibility of 'Cirros 0.6.3' 2025-06-22 20:22:00.242637 | orchestrator | 2025-06-22 20:22:00 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public' 2025-06-22 20:22:01.201519 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh 2025-06-22 20:22:03.069861 | orchestrator | 2025-06-22 20:22:03 | INFO  | date: 2025-06-22 2025-06-22 20:22:03.069973 | orchestrator | 2025-06-22 20:22:03 | INFO  | image: octavia-amphora-haproxy-2024.2.20250622.qcow2 2025-06-22 20:22:03.069994 | orchestrator | 2025-06-22 20:22:03 | INFO  | url: 
https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250622.qcow2 2025-06-22 20:22:03.070087 | orchestrator | 2025-06-22 20:22:03 | INFO  | checksum_url: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250622.qcow2.CHECKSUM 2025-06-22 20:22:03.087935 | orchestrator | 2025-06-22 20:22:03 | INFO  | checksum: 77df9fefb5aab55dc760a767e58162a9735f5740229c1da42280293548a761a7 2025-06-22 20:22:03.161303 | orchestrator | 2025-06-22 20:22:03 | INFO  | It takes a moment until task 9eefe3ab-4c78-48ba-a15c-eeaa4ca2db82 (image-manager) has been started and output is visible here. 2025-06-22 20:22:03.382337 | orchestrator | /usr/local/lib/python3.13/site-packages/openstack_image_manager/__init__.py:5: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81. 2025-06-22 20:22:03.383518 | orchestrator | from pkg_resources import get_distribution, DistributionNotFound 2025-06-22 20:22:05.772839 | orchestrator | 2025-06-22 20:22:05 | INFO  | Processing image 'OpenStack Octavia Amphora 2025-06-22' 2025-06-22 20:22:05.788708 | orchestrator | 2025-06-22 20:22:05 | INFO  | Tested URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250622.qcow2: 200 2025-06-22 20:22:05.790138 | orchestrator | 2025-06-22 20:22:05 | INFO  | Importing image OpenStack Octavia Amphora 2025-06-22 2025-06-22 20:22:05.790526 | orchestrator | 2025-06-22 20:22:05 | INFO  | Importing from URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250622.qcow2 2025-06-22 20:22:06.988155 | orchestrator | 2025-06-22 20:22:06 | INFO  | Waiting for image to leave queued state... 2025-06-22 20:22:09.054328 | orchestrator | 2025-06-22 20:22:09 | INFO  | Waiting for import to complete... 2025-06-22 20:22:19.144708 | orchestrator | 2025-06-22 20:22:19 | INFO  | Waiting for import to complete... 2025-06-22 20:22:29.232949 | orchestrator | 2025-06-22 20:22:29 | INFO  | Waiting for import to complete... 2025-06-22 20:22:39.322571 | orchestrator | 2025-06-22 20:22:39 | INFO  | Waiting for import to complete... 2025-06-22 20:22:49.417876 | orchestrator | 2025-06-22 20:22:49 | INFO  | Waiting for import to complete... 
2025-06-22 20:22:59.544722 | orchestrator | 2025-06-22 20:22:59 | INFO  | Import of 'OpenStack Octavia Amphora 2025-06-22' successfully completed, reloading images 2025-06-22 20:22:59.864210 | orchestrator | 2025-06-22 20:22:59 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2025-06-22' 2025-06-22 20:22:59.865054 | orchestrator | 2025-06-22 20:22:59 | INFO  | Setting internal_version = 2025-06-22 2025-06-22 20:22:59.865996 | orchestrator | 2025-06-22 20:22:59 | INFO  | Setting image_original_user = ubuntu 2025-06-22 20:22:59.867760 | orchestrator | 2025-06-22 20:22:59 | INFO  | Adding tag amphora 2025-06-22 20:23:00.077122 | orchestrator | 2025-06-22 20:23:00 | INFO  | Adding tag os:ubuntu 2025-06-22 20:23:00.310806 | orchestrator | 2025-06-22 20:23:00 | INFO  | Setting property architecture: x86_64 2025-06-22 20:23:00.589063 | orchestrator | 2025-06-22 20:23:00 | INFO  | Setting property hw_disk_bus: scsi 2025-06-22 20:23:00.775953 | orchestrator | 2025-06-22 20:23:00 | INFO  | Setting property hw_rng_model: virtio 2025-06-22 20:23:00.982776 | orchestrator | 2025-06-22 20:23:00 | INFO  | Setting property hw_scsi_model: virtio-scsi 2025-06-22 20:23:01.220560 | orchestrator | 2025-06-22 20:23:01 | INFO  | Setting property hw_watchdog_action: reset 2025-06-22 20:23:01.426260 | orchestrator | 2025-06-22 20:23:01 | INFO  | Setting property hypervisor_type: qemu 2025-06-22 20:23:01.631581 | orchestrator | 2025-06-22 20:23:01 | INFO  | Setting property os_distro: ubuntu 2025-06-22 20:23:01.830600 | orchestrator | 2025-06-22 20:23:01 | INFO  | Setting property replace_frequency: quarterly 2025-06-22 20:23:02.060616 | orchestrator | 2025-06-22 20:23:02 | INFO  | Setting property uuid_validity: last-1 2025-06-22 20:23:02.488263 | orchestrator | 2025-06-22 20:23:02 | INFO  | Setting property provided_until: none 2025-06-22 20:23:02.693868 | orchestrator | 2025-06-22 20:23:02 | INFO  | Setting property image_description: OpenStack Octavia Amphora 2025-06-22 20:23:02.903993 | orchestrator | 2025-06-22 20:23:02 | INFO  | Setting property image_name: OpenStack Octavia Amphora 2025-06-22 20:23:03.084736 | orchestrator | 2025-06-22 20:23:03 | INFO  | Setting property internal_version: 2025-06-22 2025-06-22 20:23:03.324300 | orchestrator | 2025-06-22 20:23:03 | INFO  | Setting property image_original_user: ubuntu 2025-06-22 20:23:03.550227 | orchestrator | 2025-06-22 20:23:03 | INFO  | Setting property os_version: 2025-06-22 2025-06-22 20:23:03.760607 | orchestrator | 2025-06-22 20:23:03 | INFO  | Setting property image_source: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250622.qcow2 2025-06-22 20:23:03.971561 | orchestrator | 2025-06-22 20:23:03 | INFO  | Setting property image_build_date: 2025-06-22 2025-06-22 20:23:04.179744 | orchestrator | 2025-06-22 20:23:04 | INFO  | Checking status of 'OpenStack Octavia Amphora 2025-06-22' 2025-06-22 20:23:04.179957 | orchestrator | 2025-06-22 20:23:04 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2025-06-22' 2025-06-22 20:23:04.327059 | orchestrator | 2025-06-22 20:23:04 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate) 2025-06-22 20:23:04.327524 | orchestrator | 2025-06-22 20:23:04 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored 2025-06-22 20:23:04.328259 | orchestrator | 2025-06-22 20:23:04 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate) 2025-06-22 20:23:04.329220 | 
orchestrator | 2025-06-22 20:23:04 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored 2025-06-22 20:23:05.141685 | orchestrator | ok: Runtime: 0:03:00.400192 2025-06-22 20:23:05.191452 | 2025-06-22 20:23:05.191556 | TASK [Run checks] 2025-06-22 20:23:05.858834 | orchestrator | + set -e 2025-06-22 20:23:05.859052 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-06-22 20:23:05.859089 | orchestrator | ++ export INTERACTIVE=false 2025-06-22 20:23:05.859125 | orchestrator | ++ INTERACTIVE=false 2025-06-22 20:23:05.859148 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-06-22 20:23:05.859170 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-06-22 20:23:05.859210 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-06-22 20:23:05.860588 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-06-22 20:23:05.866858 | orchestrator | 2025-06-22 20:23:05.866960 | orchestrator | # CHECK 2025-06-22 20:23:05.866975 | orchestrator | 2025-06-22 20:23:05.866988 | orchestrator | ++ export MANAGER_VERSION=9.1.0 2025-06-22 20:23:05.867004 | orchestrator | ++ MANAGER_VERSION=9.1.0 2025-06-22 20:23:05.867016 | orchestrator | + echo 2025-06-22 20:23:05.867027 | orchestrator | + echo '# CHECK' 2025-06-22 20:23:05.867038 | orchestrator | + echo 2025-06-22 20:23:05.867328 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-06-22 20:23:05.868826 | orchestrator | ++ semver 9.1.0 5.0.0 2025-06-22 20:23:05.932594 | orchestrator | 2025-06-22 20:23:05.932697 | orchestrator | ## Containers @ testbed-manager 2025-06-22 20:23:05.932712 | orchestrator | 2025-06-22 20:23:05.932725 | orchestrator | + [[ 1 -eq -1 ]] 2025-06-22 20:23:05.932737 | orchestrator | + echo 2025-06-22 20:23:05.932747 | orchestrator | + echo '## Containers @ testbed-manager' 2025-06-22 20:23:05.932759 | orchestrator | + echo 2025-06-22 20:23:05.932770 | orchestrator | + osism container testbed-manager ps 2025-06-22 20:23:08.022303 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-06-22 20:23:08.022498 | orchestrator | 209c6223725c registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes prometheus_blackbox_exporter 2025-06-22 20:23:08.022543 | orchestrator | fd4a4830174f registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes prometheus_alertmanager 2025-06-22 20:23:08.022574 | orchestrator | f7c1116222a7 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes prometheus_cadvisor 2025-06-22 20:23:08.022594 | orchestrator | 1878233b6686 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_node_exporter 2025-06-22 20:23:08.022613 | orchestrator | 21cb18764768 registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250530 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_server 2025-06-22 20:23:08.022633 | orchestrator | 255f4bbfb1ce registry.osism.tech/osism/cephclient:18.2.7 "/usr/bin/dumb-init …" 18 minutes ago Up 18 minutes cephclient 2025-06-22 20:23:08.022659 | orchestrator | 1e1447cd5495 registry.osism.tech/kolla/release/cron:3.0.20250530 "dumb-init --single-…" 30 minutes ago Up 30 minutes cron 2025-06-22 20:23:08.022679 | 
orchestrator | dce53b22c725 registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530 "dumb-init --single-…" 30 minutes ago Up 30 minutes kolla_toolbox 2025-06-22 20:23:08.022698 | orchestrator | 4dcd931cda06 registry.osism.tech/kolla/release/fluentd:5.0.7.20250530 "dumb-init --single-…" 31 minutes ago Up 31 minutes fluentd 2025-06-22 20:23:08.022756 | orchestrator | 1f091ef34d59 phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 31 minutes ago Up 31 minutes (healthy) 80/tcp phpmyadmin 2025-06-22 20:23:08.022778 | orchestrator | 3e685d8dff97 registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 32 minutes ago Up 32 minutes openstackclient 2025-06-22 20:23:08.022798 | orchestrator | 2efec805cd70 registry.osism.tech/osism/homer:v25.05.2 "/bin/sh /entrypoint…" 32 minutes ago Up 32 minutes (healthy) 8080/tcp homer 2025-06-22 20:23:08.022818 | orchestrator | 973dbf1b52d4 registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 53 minutes ago Up 53 minutes (healthy) 192.168.16.5:3128->3128/tcp squid 2025-06-22 20:23:08.022847 | orchestrator | ffb87bc62409 registry.osism.tech/osism/inventory-reconciler:0.20250530.0 "/sbin/tini -- /entr…" 57 minutes ago Up 39 minutes (healthy) manager-inventory_reconciler-1 2025-06-22 20:23:08.022897 | orchestrator | 217a43ee4211 registry.osism.tech/osism/ceph-ansible:0.20250530.0 "/entrypoint.sh osis…" 57 minutes ago Up 39 minutes (healthy) ceph-ansible 2025-06-22 20:23:08.022918 | orchestrator | d56a176b671b registry.osism.tech/osism/osism-kubernetes:0.20250530.0 "/entrypoint.sh osis…" 57 minutes ago Up 39 minutes (healthy) osism-kubernetes 2025-06-22 20:23:08.022938 | orchestrator | cc681cf4c7e9 registry.osism.tech/osism/osism-ansible:0.20250531.0 "/entrypoint.sh osis…" 57 minutes ago Up 39 minutes (healthy) osism-ansible 2025-06-22 20:23:08.022959 | orchestrator | 1b0d2809dda3 registry.osism.tech/osism/kolla-ansible:0.20250530.0 "/entrypoint.sh osis…" 57 minutes ago Up 39 minutes (healthy) kolla-ansible 2025-06-22 20:23:08.022980 | orchestrator | 17905b0175c6 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" 57 minutes ago Up 39 minutes (healthy) 8000/tcp manager-ara-server-1 2025-06-22 20:23:08.023002 | orchestrator | 1d97278d36db registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- sleep…" 57 minutes ago Up 40 minutes (healthy) osismclient 2025-06-22 20:23:08.023021 | orchestrator | 98bbb34cd42f registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" 57 minutes ago Up 40 minutes (healthy) manager-beat-1 2025-06-22 20:23:08.023037 | orchestrator | eb9d1e085d44 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" 57 minutes ago Up 40 minutes (healthy) manager-flower-1 2025-06-22 20:23:08.023054 | orchestrator | d2563a693899 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" 57 minutes ago Up 40 minutes (healthy) 192.168.16.5:8000->8000/tcp manager-api-1 2025-06-22 20:23:08.023084 | orchestrator | 7b2b5e85e6f1 registry.osism.tech/dockerhub/library/mariadb:11.7.2 "docker-entrypoint.s…" 57 minutes ago Up 40 minutes (healthy) 3306/tcp manager-mariadb-1 2025-06-22 20:23:08.023102 | orchestrator | f94a50ccc1c9 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" 57 minutes ago Up 40 minutes (healthy) manager-listener-1 2025-06-22 20:23:08.023119 | orchestrator | 8b2ab0ad44d8 registry.osism.tech/dockerhub/library/redis:7.4.4-alpine "docker-entrypoint.s…" 57 minutes ago Up 40 minutes (healthy) 6379/tcp manager-redis-1 2025-06-22 
20:23:08.023136 | orchestrator | 286e07595da8 registry.osism.tech/osism/osism:0.20250530.0 "/sbin/tini -- osism…" 57 minutes ago Up 40 minutes (healthy) manager-openstack-1 2025-06-22 20:23:08.023155 | orchestrator | c0c5d1d0f32b registry.osism.tech/dockerhub/library/traefik:v3.4.1 "/entrypoint.sh trae…" 58 minutes ago Up 58 minutes (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik 2025-06-22 20:23:08.249597 | orchestrator | 2025-06-22 20:23:08.249701 | orchestrator | ## Images @ testbed-manager 2025-06-22 20:23:08.249716 | orchestrator | 2025-06-22 20:23:08.249728 | orchestrator | + echo 2025-06-22 20:23:08.249740 | orchestrator | + echo '## Images @ testbed-manager' 2025-06-22 20:23:08.249752 | orchestrator | + echo 2025-06-22 20:23:08.249763 | orchestrator | + osism container testbed-manager images 2025-06-22 20:23:10.283585 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-06-22 20:23:10.283694 | orchestrator | registry.osism.tech/osism/homer v25.05.2 e2c78a28297e 17 hours ago 11.5MB 2025-06-22 20:23:10.283714 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 31eca7c9891c 17 hours ago 226MB 2025-06-22 20:23:10.283726 | orchestrator | registry.osism.tech/osism/kolla-ansible 0.20250530.0 f5f0b51afbcc 2 weeks ago 574MB 2025-06-22 20:23:10.283737 | orchestrator | registry.osism.tech/osism/osism-ansible 0.20250531.0 eb6fb0ff8e52 3 weeks ago 578MB 2025-06-22 20:23:10.283773 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250530 fc4477504c4f 3 weeks ago 319MB 2025-06-22 20:23:10.283785 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.4.1.20250530 33529d2e8ea7 3 weeks ago 747MB 2025-06-22 20:23:10.283795 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250530 a0c9ae28d2e7 3 weeks ago 629MB 2025-06-22 20:23:10.283806 | orchestrator | registry.osism.tech/kolla/release/prometheus-v2-server 2.55.1.20250530 48bb7d2c6b08 3 weeks ago 892MB 2025-06-22 20:23:10.283817 | orchestrator | registry.osism.tech/kolla/release/prometheus-blackbox-exporter 0.25.0.20250530 3d4c4d6fe7fa 3 weeks ago 361MB 2025-06-22 20:23:10.283827 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250530 b51a156bac81 3 weeks ago 411MB 2025-06-22 20:23:10.283838 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250530 a076e6a80bbc 3 weeks ago 359MB 2025-06-22 20:23:10.283848 | orchestrator | registry.osism.tech/kolla/release/prometheus-alertmanager 0.28.0.20250530 0e447338580d 3 weeks ago 457MB 2025-06-22 20:23:10.283859 | orchestrator | registry.osism.tech/osism/ceph-ansible 0.20250530.0 bce894afc91f 3 weeks ago 538MB 2025-06-22 20:23:10.283893 | orchestrator | registry.osism.tech/osism/osism-kubernetes 0.20250530.0 467731c31786 3 weeks ago 1.21GB 2025-06-22 20:23:10.283905 | orchestrator | registry.osism.tech/osism/inventory-reconciler 0.20250530.0 1b4e0cdc5cdd 3 weeks ago 308MB 2025-06-22 20:23:10.283916 | orchestrator | registry.osism.tech/osism/osism 0.20250530.0 bce098659f68 3 weeks ago 297MB 2025-06-22 20:23:10.283926 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.4-alpine 7ff232a1fe04 3 weeks ago 41.4MB 2025-06-22 20:23:10.283937 | orchestrator | registry.osism.tech/dockerhub/library/traefik v3.4.1 ff0a241c8a0a 3 weeks ago 224MB 2025-06-22 20:23:10.283947 | orchestrator | registry.osism.tech/osism/cephclient 18.2.7 ae977aa79826 6 weeks ago 453MB 2025-06-22 20:23:10.283958 | orchestrator | 
registry.osism.tech/dockerhub/library/mariadb 11.7.2 6b3ebe9793bb 4 months ago 328MB 2025-06-22 20:23:10.283968 | orchestrator | phpmyadmin/phpmyadmin 5.2 0276a66ce322 4 months ago 571MB 2025-06-22 20:23:10.283979 | orchestrator | registry.osism.tech/osism/ara-server 1.7.2 bb44122eb176 9 months ago 300MB 2025-06-22 20:23:10.283989 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 12 months ago 146MB 2025-06-22 20:23:10.521417 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-06-22 20:23:10.522190 | orchestrator | ++ semver 9.1.0 5.0.0 2025-06-22 20:23:10.576144 | orchestrator | 2025-06-22 20:23:10.576248 | orchestrator | ## Containers @ testbed-node-0 2025-06-22 20:23:10.576263 | orchestrator | 2025-06-22 20:23:10.576276 | orchestrator | + [[ 1 -eq -1 ]] 2025-06-22 20:23:10.576287 | orchestrator | + echo 2025-06-22 20:23:10.576299 | orchestrator | + echo '## Containers @ testbed-node-0' 2025-06-22 20:23:10.576310 | orchestrator | + echo 2025-06-22 20:23:10.576321 | orchestrator | + osism container testbed-node-0 ps 2025-06-22 20:23:12.639169 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-06-22 20:23:12.639292 | orchestrator | 1e55b848a554 registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_novncproxy 2025-06-22 20:23:12.639309 | orchestrator | 5e19353dbae5 registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_conductor 2025-06-22 20:23:12.639322 | orchestrator | d0fe762d107e registry.osism.tech/kolla/release/nova-api:30.0.1.20250530 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_api 2025-06-22 20:23:12.639334 | orchestrator | b6accf7bca48 registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_scheduler 2025-06-22 20:23:12.639345 | orchestrator | a6727941c488 registry.osism.tech/kolla/release/grafana:12.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes grafana 2025-06-22 20:23:12.639357 | orchestrator | ba09799f6ed4 registry.osism.tech/kolla/release/glance-api:29.0.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) glance_api 2025-06-22 20:23:12.639368 | orchestrator | 2b266082b325 registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_scheduler 2025-06-22 20:23:12.639396 | orchestrator | 80aecd42c754 registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_api 2025-06-22 20:23:12.639408 | orchestrator | d766e0b8bbbe registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes prometheus_elasticsearch_exporter 2025-06-22 20:23:12.639477 | orchestrator | 3055481abaec registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_cadvisor 2025-06-22 20:23:12.639491 | orchestrator | 6a760c901093 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_memcached_exporter 2025-06-22 20:23:12.639502 | orchestrator | fef151c14e35 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530 "dumb-init --single-…" 12 
minutes ago Up 12 minutes prometheus_mysqld_exporter 2025-06-22 20:23:12.639513 | orchestrator | 8bb870e90c58 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_node_exporter 2025-06-22 20:23:12.639524 | orchestrator | 9f6b7b816ac7 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) magnum_conductor 2025-06-22 20:23:12.639535 | orchestrator | 66d6bfc3cdca registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) magnum_api 2025-06-22 20:23:12.639546 | orchestrator | 5e67e975b8ee registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) neutron_server 2025-06-22 20:23:12.639557 | orchestrator | f625f6b5d9e9 registry.osism.tech/kolla/release/placement-api:12.0.1.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) placement_api 2025-06-22 20:23:12.639568 | orchestrator | 9247d7f32b1e registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_worker 2025-06-22 20:23:12.639579 | orchestrator | bc3c886ca907 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_mdns 2025-06-22 20:23:12.639611 | orchestrator | b27865b145f6 registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_producer 2025-06-22 20:23:12.639623 | orchestrator | 5be617a3bb28 registry.osism.tech/kolla/release/designate-central:19.0.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_central 2025-06-22 20:23:12.639634 | orchestrator | c1b8eeb81ca4 registry.osism.tech/kolla/release/designate-api:19.0.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_api 2025-06-22 20:23:12.639644 | orchestrator | e125b205174b registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_backend_bind9 2025-06-22 20:23:12.639655 | orchestrator | 5d1d6873cae0 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_worker 2025-06-22 20:23:12.639666 | orchestrator | 96ffc68a50d1 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_keystone_listener 2025-06-22 20:23:12.639677 | orchestrator | 10c09e5d2e4e registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_api 2025-06-22 20:23:12.639694 | orchestrator | 29499dd1035c registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 16 minutes ago Up 16 minutes ceph-mgr-testbed-node-0 2025-06-22 20:23:12.639713 | orchestrator | fe9d1f08e50b registry.osism.tech/kolla/release/keystone:26.0.1.20250530 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone 2025-06-22 20:23:12.639724 | orchestrator | 3ed30027d0f6 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_fernet 2025-06-22 20:23:12.639735 | orchestrator | 4f5e8577442f 
registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_ssh 2025-06-22 20:23:12.639751 | orchestrator | ee15e16bd3ec registry.osism.tech/kolla/release/horizon:25.1.1.20250530 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) horizon 2025-06-22 20:23:12.639762 | orchestrator | 42a2a97de3f0 registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530 "dumb-init -- kolla_…" 21 minutes ago Up 21 minutes (healthy) mariadb 2025-06-22 20:23:12.639773 | orchestrator | 49ad24094ed0 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch_dashboards 2025-06-22 20:23:12.639783 | orchestrator | 70b45d9a705a registry.osism.tech/kolla/release/opensearch:2.19.2.20250530 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) opensearch 2025-06-22 20:23:12.639794 | orchestrator | d902cc0106fa registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" 23 minutes ago Up 23 minutes ceph-crash-testbed-node-0 2025-06-22 20:23:12.639805 | orchestrator | e97940da981f registry.osism.tech/kolla/release/keepalived:2.2.7.20250530 "dumb-init --single-…" 24 minutes ago Up 24 minutes keepalived 2025-06-22 20:23:12.639816 | orchestrator | 899d9757f266 registry.osism.tech/kolla/release/proxysql:2.7.3.20250530 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) proxysql 2025-06-22 20:23:12.639827 | orchestrator | 806e55b89c29 registry.osism.tech/kolla/release/haproxy:2.6.12.20250530 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) haproxy 2025-06-22 20:23:12.639843 | orchestrator | f201263fe873 registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_northd 2025-06-22 20:23:12.639853 | orchestrator | ef5e6ccdb416 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_sb_db 2025-06-22 20:23:12.639870 | orchestrator | e2fe919dfe5f registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_nb_db 2025-06-22 20:23:12.639882 | orchestrator | 8a30adac52b6 registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_controller 2025-06-22 20:23:12.639893 | orchestrator | b29e59de2599 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 28 minutes ago Up 28 minutes ceph-mon-testbed-node-0 2025-06-22 20:23:12.639903 | orchestrator | 3c5b77bf1284 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) rabbitmq 2025-06-22 20:23:12.639914 | orchestrator | d7e1f5cb0c71 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) openvswitch_vswitchd 2025-06-22 20:23:12.639931 | orchestrator | 2bda9b8db5b6 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) openvswitch_db 2025-06-22 20:23:12.639943 | orchestrator | e98a68d52db6 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis_sentinel 2025-06-22 20:23:12.639954 | orchestrator | 8a77fb3e7544 registry.osism.tech/kolla/release/redis:7.0.15.20250530 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis 
2025-06-22 20:23:12.639964 | orchestrator | f6ecfa183063 registry.osism.tech/kolla/release/memcached:1.6.18.20250530 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) memcached 2025-06-22 20:23:12.639975 | orchestrator | 37848fc89789 registry.osism.tech/kolla/release/cron:3.0.20250530 "dumb-init --single-…" 30 minutes ago Up 30 minutes cron 2025-06-22 20:23:12.639986 | orchestrator | fd04537010f3 registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530 "dumb-init --single-…" 31 minutes ago Up 31 minutes kolla_toolbox 2025-06-22 20:23:12.639997 | orchestrator | a6d2902624e9 registry.osism.tech/kolla/release/fluentd:5.0.7.20250530 "dumb-init --single-…" 32 minutes ago Up 32 minutes fluentd 2025-06-22 20:23:12.881202 | orchestrator | 2025-06-22 20:23:12.881306 | orchestrator | ## Images @ testbed-node-0 2025-06-22 20:23:12.881322 | orchestrator | 2025-06-22 20:23:12.881333 | orchestrator | + echo 2025-06-22 20:23:12.881345 | orchestrator | + echo '## Images @ testbed-node-0' 2025-06-22 20:23:12.881357 | orchestrator | + echo 2025-06-22 20:23:12.881368 | orchestrator | + osism container testbed-node-0 images 2025-06-22 20:23:15.024710 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-06-22 20:23:15.024825 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.18.20250530 174e220ad7bd 3 weeks ago 319MB 2025-06-22 20:23:15.024840 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250530 fc4477504c4f 3 weeks ago 319MB 2025-06-22 20:23:15.024853 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.7.20250530 e984e28a57b0 3 weeks ago 330MB 2025-06-22 20:23:15.024865 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.2.20250530 4cfdb500286b 3 weeks ago 1.59GB 2025-06-22 20:23:15.024885 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.2.20250530 6fcb2e3a907b 3 weeks ago 1.55GB 2025-06-22 20:23:15.024903 | orchestrator | registry.osism.tech/kolla/release/proxysql 2.7.3.20250530 a15c96a3369b 3 weeks ago 419MB 2025-06-22 20:23:15.024922 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.4.1.20250530 33529d2e8ea7 3 weeks ago 747MB 2025-06-22 20:23:15.024941 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.6.12.20250530 e5b003449f46 3 weeks ago 327MB 2025-06-22 20:23:15.024953 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20250530 6b32f249a415 3 weeks ago 376MB 2025-06-22 20:23:15.024964 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250530 a0c9ae28d2e7 3 weeks ago 629MB 2025-06-22 20:23:15.024975 | orchestrator | registry.osism.tech/kolla/release/grafana 12.0.1.20250530 a3fa8a6a4c8c 3 weeks ago 1.01GB 2025-06-22 20:23:15.024985 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.13.20250530 5a4e6980c376 3 weeks ago 591MB 2025-06-22 20:23:15.024996 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20250530 acd5d7cf8545 3 weeks ago 354MB 2025-06-22 20:23:15.025031 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250530 b51a156bac81 3 weeks ago 411MB 2025-06-22 20:23:15.025043 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20250530 528199032acc 3 weeks ago 352MB 2025-06-22 20:23:15.025054 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20250530 1ba9b68ab0fa 3 weeks ago 345MB 2025-06-22 20:23:15.025064 | orchestrator | 
registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250530 a076e6a80bbc 3 weeks ago 359MB 2025-06-22 20:23:15.025075 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20250530 854fb3fbb8d1 3 weeks ago 326MB 2025-06-22 20:23:15.025086 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20250530 4439f43e0847 3 weeks ago 325MB 2025-06-22 20:23:15.025115 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.1.20250530 81218760d1ef 3 weeks ago 1.21GB 2025-06-22 20:23:15.025127 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.2.20250530 8775c34ea5d6 3 weeks ago 362MB 2025-06-22 20:23:15.025137 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.2.20250530 ebe56e768165 3 weeks ago 362MB 2025-06-22 20:23:15.025148 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20250530 9ac54d9b8655 3 weeks ago 1.15GB 2025-06-22 20:23:15.025159 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20250530 95e52651071a 3 weeks ago 1.04GB 2025-06-22 20:23:15.025169 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.1.1.20250530 47338d40fcbf 3 weeks ago 1.25GB 2025-06-22 20:23:15.025180 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20250530 ec3349a6437e 3 weeks ago 1.04GB 2025-06-22 20:23:15.025191 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20250530 726d5cfde6f9 3 weeks ago 1.04GB 2025-06-22 20:23:15.025201 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20250530 c2f966fc60ed 3 weeks ago 1.04GB 2025-06-22 20:23:15.025212 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20250530 7c85bdb64788 3 weeks ago 1.04GB 2025-06-22 20:23:15.025223 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20250530 ecd3067dd808 3 weeks ago 1.2GB 2025-06-22 20:23:15.025233 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20250530 95661613cfe8 3 weeks ago 1.31GB 2025-06-22 20:23:15.025268 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.1.20250530 41afac8ed4ba 3 weeks ago 1.12GB 2025-06-22 20:23:15.025280 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.1.20250530 816eaef08c5c 3 weeks ago 1.12GB 2025-06-22 20:23:15.025290 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.1.20250530 81c4f823534a 3 weeks ago 1.1GB 2025-06-22 20:23:15.025301 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.1.20250530 437ecd9dcceb 3 weeks ago 1.1GB 2025-06-22 20:23:15.025312 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.1.20250530 fd10912df5f8 3 weeks ago 1.1GB 2025-06-22 20:23:15.025322 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.1.1.20250530 8e97f769e43d 3 weeks ago 1.41GB 2025-06-22 20:23:15.025333 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.1.1.20250530 1a292444fc87 3 weeks ago 1.41GB 2025-06-22 20:23:15.025343 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20250530 9186d487d48c 3 weeks ago 1.06GB 2025-06-22 20:23:15.025354 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20250530 14234b919f18 3 weeks ago 1.06GB 2025-06-22 20:23:15.025372 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20250530 57148ade6082 3 weeks ago 1.05GB 2025-06-22 20:23:15.025383 | orchestrator | 
registry.osism.tech/kolla/release/designate-mdns 19.0.1.20250530 6d21806eb92e 3 weeks ago 1.05GB 2025-06-22 20:23:15.025394 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20250530 d5f39127ee53 3 weeks ago 1.05GB 2025-06-22 20:23:15.025405 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20250530 68be509d15c9 3 weeks ago 1.05GB 2025-06-22 20:23:15.025421 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.0.20250530 aa9066568160 3 weeks ago 1.04GB 2025-06-22 20:23:15.025432 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.0.20250530 546dea2f2472 3 weeks ago 1.04GB 2025-06-22 20:23:15.025520 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.0.1.20250530 47425e7b5ce1 3 weeks ago 1.3GB 2025-06-22 20:23:15.025532 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.0.1.20250530 9fd4859cd2ca 3 weeks ago 1.29GB 2025-06-22 20:23:15.025543 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.0.1.20250530 65e1e2f12329 3 weeks ago 1.42GB 2025-06-22 20:23:15.025554 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.0.1.20250530 ded754c3e240 3 weeks ago 1.29GB 2025-06-22 20:23:15.025564 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20250530 dc06d9c53ec5 3 weeks ago 1.06GB 2025-06-22 20:23:15.025575 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20250530 450ccd1a2872 3 weeks ago 1.06GB 2025-06-22 20:23:15.025586 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20250530 2f34913753bd 3 weeks ago 1.06GB 2025-06-22 20:23:15.025596 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20250530 fe53c77abc4a 3 weeks ago 1.11GB 2025-06-22 20:23:15.025607 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20250530 0419c85d82ab 3 weeks ago 1.13GB 2025-06-22 20:23:15.025617 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20250530 7eb5295204d1 3 weeks ago 1.11GB 2025-06-22 20:23:15.025628 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20250530 df0a04869ff0 3 weeks ago 1.11GB 2025-06-22 20:23:15.025638 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20250530 e1b2b0cc8e5c 3 weeks ago 1.12GB 2025-06-22 20:23:15.025649 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.2.20250530 6a22761bd4f3 3 weeks ago 947MB 2025-06-22 20:23:15.025659 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.2.20250530 694606382374 3 weeks ago 948MB 2025-06-22 20:23:15.025670 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.2.20250530 63ebc77afae1 3 weeks ago 947MB 2025-06-22 20:23:15.025680 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.2.20250530 5b8b94e53819 3 weeks ago 948MB 2025-06-22 20:23:15.025691 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 6 weeks ago 1.27GB 2025-06-22 20:23:15.276188 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-06-22 20:23:15.276986 | orchestrator | ++ semver 9.1.0 5.0.0 2025-06-22 20:23:15.337389 | orchestrator | 2025-06-22 20:23:15.337545 | orchestrator | ## Containers @ testbed-node-1 2025-06-22 20:23:15.337561 | orchestrator | 2025-06-22 20:23:15.337573 | orchestrator | + [[ 1 -eq -1 ]] 2025-06-22 20:23:15.337584 | orchestrator | + echo 2025-06-22 20:23:15.337596 | 
orchestrator | + echo '## Containers @ testbed-node-1' 2025-06-22 20:23:15.337609 | orchestrator | + echo 2025-06-22 20:23:15.337645 | orchestrator | + osism container testbed-node-1 ps 2025-06-22 20:23:17.525588 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-06-22 20:23:17.525688 | orchestrator | df107518f15b registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530 "dumb-init --single-…" 7 minutes ago Up 6 minutes (healthy) nova_novncproxy 2025-06-22 20:23:17.525703 | orchestrator | 208bc06c470c registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_conductor 2025-06-22 20:23:17.525715 | orchestrator | e55150b9f648 registry.osism.tech/kolla/release/grafana:12.0.1.20250530 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana 2025-06-22 20:23:17.525726 | orchestrator | 58c2a427eafb registry.osism.tech/kolla/release/nova-api:30.0.1.20250530 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_api 2025-06-22 20:23:17.525755 | orchestrator | 2603660bfcef registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_scheduler 2025-06-22 20:23:17.525767 | orchestrator | 6c8c2173f137 registry.osism.tech/kolla/release/glance-api:29.0.1.20250530 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) glance_api 2025-06-22 20:23:17.525778 | orchestrator | d4c7e0c62443 registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_scheduler 2025-06-22 20:23:17.525788 | orchestrator | f258e3322821 registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_api 2025-06-22 20:23:17.525799 | orchestrator | 075f6f039cc2 registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes prometheus_elasticsearch_exporter 2025-06-22 20:23:17.525813 | orchestrator | 613453be6657 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_cadvisor 2025-06-22 20:23:17.525824 | orchestrator | 403ed0bc1f72 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_memcached_exporter 2025-06-22 20:23:17.525834 | orchestrator | 0ef21e6cf93d registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_mysqld_exporter 2025-06-22 20:23:17.525845 | orchestrator | 2242b3eaddd8 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_node_exporter 2025-06-22 20:23:17.525856 | orchestrator | 718ff1cc2f7e registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) magnum_conductor 2025-06-22 20:23:17.525867 | orchestrator | 174706259ac9 registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) magnum_api 2025-06-22 20:23:17.525877 | orchestrator | 0a3fd0da78bc registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) neutron_server 2025-06-22 20:23:17.525888 | orchestrator | 5a6fc0f72ab9 
registry.osism.tech/kolla/release/placement-api:12.0.1.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) placement_api 2025-06-22 20:23:17.525924 | orchestrator | 267d9c1bee5b registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_worker 2025-06-22 20:23:17.525936 | orchestrator | 82225a26d3d7 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_mdns 2025-06-22 20:23:17.525964 | orchestrator | f5bb06cbe64f registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_producer 2025-06-22 20:23:17.525976 | orchestrator | b60be44225cc registry.osism.tech/kolla/release/designate-central:19.0.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_central 2025-06-22 20:23:17.525986 | orchestrator | c9994d5b00f8 registry.osism.tech/kolla/release/designate-api:19.0.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_api 2025-06-22 20:23:17.525997 | orchestrator | e24081e2b5dd registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_backend_bind9 2025-06-22 20:23:17.526013 | orchestrator | 16f3024eb87d registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_worker 2025-06-22 20:23:17.526091 | orchestrator | c43a4448b7b8 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_keystone_listener 2025-06-22 20:23:17.526105 | orchestrator | 3ff18bf92544 registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_api 2025-06-22 20:23:17.526118 | orchestrator | 788d05a72382 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 16 minutes ago Up 16 minutes ceph-mgr-testbed-node-1 2025-06-22 20:23:17.526130 | orchestrator | 00042c3fe689 registry.osism.tech/kolla/release/keystone:26.0.1.20250530 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone 2025-06-22 20:23:17.526143 | orchestrator | a55b39d0d05f registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_fernet 2025-06-22 20:23:17.526156 | orchestrator | f73b5d4480ff registry.osism.tech/kolla/release/horizon:25.1.1.20250530 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) horizon 2025-06-22 20:23:17.526168 | orchestrator | b12497f1abdd registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_ssh 2025-06-22 20:23:17.526180 | orchestrator | d4b8a0a33bb7 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch_dashboards 2025-06-22 20:23:17.526192 | orchestrator | ceb7b42c5846 registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530 "dumb-init -- kolla_…" 22 minutes ago Up 22 minutes (healthy) mariadb 2025-06-22 20:23:17.526204 | orchestrator | 0d3e2a89e3a5 registry.osism.tech/kolla/release/opensearch:2.19.2.20250530 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch 2025-06-22 
20:23:17.526217 | orchestrator | bfc82ce1a331 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" 23 minutes ago Up 23 minutes ceph-crash-testbed-node-1 2025-06-22 20:23:17.526237 | orchestrator | 106eca295873 registry.osism.tech/kolla/release/keepalived:2.2.7.20250530 "dumb-init --single-…" 24 minutes ago Up 24 minutes keepalived 2025-06-22 20:23:17.526250 | orchestrator | 35126b813914 registry.osism.tech/kolla/release/proxysql:2.7.3.20250530 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) proxysql 2025-06-22 20:23:17.526262 | orchestrator | c1d9e172246b registry.osism.tech/kolla/release/haproxy:2.6.12.20250530 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) haproxy 2025-06-22 20:23:17.526274 | orchestrator | effc4a3ca7bd registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530 "dumb-init --single-…" 27 minutes ago Up 26 minutes ovn_northd 2025-06-22 20:23:17.526286 | orchestrator | 8754afa9a7a7 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_sb_db 2025-06-22 20:23:17.526306 | orchestrator | b156cc72cdd6 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_nb_db 2025-06-22 20:23:17.526319 | orchestrator | 7d759e7ca5c9 registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_controller 2025-06-22 20:23:17.526331 | orchestrator | f0dc930ffa2a registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) rabbitmq 2025-06-22 20:23:17.526344 | orchestrator | 44161ea800c5 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 28 minutes ago Up 28 minutes ceph-mon-testbed-node-1 2025-06-22 20:23:17.526355 | orchestrator | eb575b7d6ed3 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) openvswitch_vswitchd 2025-06-22 20:23:17.526368 | orchestrator | ba6c53603503 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) openvswitch_db 2025-06-22 20:23:17.526380 | orchestrator | ad9c7a67fc56 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis_sentinel 2025-06-22 20:23:17.526397 | orchestrator | 6e596ade391e registry.osism.tech/kolla/release/redis:7.0.15.20250530 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis 2025-06-22 20:23:17.526410 | orchestrator | f53c3db59ee4 registry.osism.tech/kolla/release/memcached:1.6.18.20250530 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) memcached 2025-06-22 20:23:17.526422 | orchestrator | 11bc25fb11cc registry.osism.tech/kolla/release/cron:3.0.20250530 "dumb-init --single-…" 30 minutes ago Up 30 minutes cron 2025-06-22 20:23:17.526434 | orchestrator | 3ec53572617a registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530 "dumb-init --single-…" 31 minutes ago Up 31 minutes kolla_toolbox 2025-06-22 20:23:17.526465 | orchestrator | 7a9c047928ba registry.osism.tech/kolla/release/fluentd:5.0.7.20250530 "dumb-init --single-…" 31 minutes ago Up 31 minutes fluentd 2025-06-22 20:23:17.748057 | orchestrator | 2025-06-22 20:23:17.748154 | orchestrator | ## Images @ testbed-node-1 2025-06-22 20:23:17.748169 | orchestrator | 2025-06-22 20:23:17.748181 | orchestrator | + echo 
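The `+` trace lines around each "## Containers @ ..." / "## Images @ ..." heading show how these per-node listings are produced: a loop over the nodes runs `osism container <node> ps` and `osism container <node> images`, gated by comparing the manager version (9.1.0) against 5.0.0 with a `semver` helper. A minimal sketch of that loop, assuming `semver A B` prints -1/0/1 like the traced helper; what the real script does when the comparison yields -1 is not visible in this log:

    #!/usr/bin/env bash
    # Sketch of the per-node inspection loop seen in the trace above.
    set -e

    MANAGER_VERSION=9.1.0   # matches the traced "semver 9.1.0 5.0.0" call

    for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2; do
        # Assumed behaviour: skip the listing on managers older than 5.0.0.
        if [[ $(semver "$MANAGER_VERSION" 5.0.0) -eq -1 ]]; then
            continue
        fi

        echo; echo "## Containers @ ${node}"; echo
        osism container "$node" ps

        echo; echo "## Images @ ${node}"; echo
        osism container "$node" images
    done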
2025-06-22 20:23:17.748192 | orchestrator | + echo '## Images @ testbed-node-1' 2025-06-22 20:23:17.748205 | orchestrator | + echo 2025-06-22 20:23:17.748216 | orchestrator | + osism container testbed-node-1 images 2025-06-22 20:23:19.844867 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-06-22 20:23:19.845013 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.18.20250530 174e220ad7bd 3 weeks ago 319MB 2025-06-22 20:23:19.845029 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250530 fc4477504c4f 3 weeks ago 319MB 2025-06-22 20:23:19.845040 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.7.20250530 e984e28a57b0 3 weeks ago 330MB 2025-06-22 20:23:19.845051 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.2.20250530 4cfdb500286b 3 weeks ago 1.59GB 2025-06-22 20:23:19.845062 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.2.20250530 6fcb2e3a907b 3 weeks ago 1.55GB 2025-06-22 20:23:19.845072 | orchestrator | registry.osism.tech/kolla/release/proxysql 2.7.3.20250530 a15c96a3369b 3 weeks ago 419MB 2025-06-22 20:23:19.845083 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.4.1.20250530 33529d2e8ea7 3 weeks ago 747MB 2025-06-22 20:23:19.845094 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.6.12.20250530 e5b003449f46 3 weeks ago 327MB 2025-06-22 20:23:19.845104 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20250530 6b32f249a415 3 weeks ago 376MB 2025-06-22 20:23:19.845115 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250530 a0c9ae28d2e7 3 weeks ago 629MB 2025-06-22 20:23:19.845125 | orchestrator | registry.osism.tech/kolla/release/grafana 12.0.1.20250530 a3fa8a6a4c8c 3 weeks ago 1.01GB 2025-06-22 20:23:19.845136 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.13.20250530 5a4e6980c376 3 weeks ago 591MB 2025-06-22 20:23:19.845146 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20250530 acd5d7cf8545 3 weeks ago 354MB 2025-06-22 20:23:19.845157 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20250530 528199032acc 3 weeks ago 352MB 2025-06-22 20:23:19.845167 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250530 b51a156bac81 3 weeks ago 411MB 2025-06-22 20:23:19.845178 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20250530 1ba9b68ab0fa 3 weeks ago 345MB 2025-06-22 20:23:19.845189 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250530 a076e6a80bbc 3 weeks ago 359MB 2025-06-22 20:23:19.845199 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20250530 4439f43e0847 3 weeks ago 325MB 2025-06-22 20:23:19.845209 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20250530 854fb3fbb8d1 3 weeks ago 326MB 2025-06-22 20:23:19.845220 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.1.20250530 81218760d1ef 3 weeks ago 1.21GB 2025-06-22 20:23:19.845231 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.2.20250530 8775c34ea5d6 3 weeks ago 362MB 2025-06-22 20:23:19.845242 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.2.20250530 ebe56e768165 3 weeks ago 362MB 2025-06-22 20:23:19.845253 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20250530 9ac54d9b8655 3 weeks ago 1.15GB 2025-06-22 20:23:19.845264 | 
orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20250530 95e52651071a 3 weeks ago 1.04GB 2025-06-22 20:23:19.845274 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.1.1.20250530 47338d40fcbf 3 weeks ago 1.25GB 2025-06-22 20:23:19.845285 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20250530 ecd3067dd808 3 weeks ago 1.2GB 2025-06-22 20:23:19.845303 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20250530 95661613cfe8 3 weeks ago 1.31GB 2025-06-22 20:23:19.845314 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.1.1.20250530 8e97f769e43d 3 weeks ago 1.41GB 2025-06-22 20:23:19.845324 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.1.1.20250530 1a292444fc87 3 weeks ago 1.41GB 2025-06-22 20:23:19.845335 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20250530 9186d487d48c 3 weeks ago 1.06GB 2025-06-22 20:23:19.845346 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20250530 14234b919f18 3 weeks ago 1.06GB 2025-06-22 20:23:19.845393 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20250530 57148ade6082 3 weeks ago 1.05GB 2025-06-22 20:23:19.845406 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20250530 6d21806eb92e 3 weeks ago 1.05GB 2025-06-22 20:23:19.845416 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20250530 d5f39127ee53 3 weeks ago 1.05GB 2025-06-22 20:23:19.845427 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20250530 68be509d15c9 3 weeks ago 1.05GB 2025-06-22 20:23:19.845437 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.0.1.20250530 47425e7b5ce1 3 weeks ago 1.3GB 2025-06-22 20:23:19.845489 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.0.1.20250530 9fd4859cd2ca 3 weeks ago 1.29GB 2025-06-22 20:23:19.845500 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.0.1.20250530 65e1e2f12329 3 weeks ago 1.42GB 2025-06-22 20:23:19.845511 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.0.1.20250530 ded754c3e240 3 weeks ago 1.29GB 2025-06-22 20:23:19.845521 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20250530 dc06d9c53ec5 3 weeks ago 1.06GB 2025-06-22 20:23:19.845531 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20250530 450ccd1a2872 3 weeks ago 1.06GB 2025-06-22 20:23:19.845542 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20250530 2f34913753bd 3 weeks ago 1.06GB 2025-06-22 20:23:19.845557 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20250530 fe53c77abc4a 3 weeks ago 1.11GB 2025-06-22 20:23:19.845568 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20250530 0419c85d82ab 3 weeks ago 1.13GB 2025-06-22 20:23:19.845602 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20250530 7eb5295204d1 3 weeks ago 1.11GB 2025-06-22 20:23:19.845614 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.2.20250530 6a22761bd4f3 3 weeks ago 947MB 2025-06-22 20:23:19.845624 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.2.20250530 63ebc77afae1 3 weeks ago 947MB 2025-06-22 20:23:19.845635 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.2.20250530 694606382374 3 weeks ago 948MB 2025-06-22 20:23:19.845645 | orchestrator | 
registry.osism.tech/kolla/release/ovn-northd 24.9.2.20250530 5b8b94e53819 3 weeks ago 948MB 2025-06-22 20:23:19.845656 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 6 weeks ago 1.27GB 2025-06-22 20:23:20.097942 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-06-22 20:23:20.098370 | orchestrator | ++ semver 9.1.0 5.0.0 2025-06-22 20:23:20.153761 | orchestrator | 2025-06-22 20:23:20.153850 | orchestrator | ## Containers @ testbed-node-2 2025-06-22 20:23:20.153864 | orchestrator | 2025-06-22 20:23:20.153874 | orchestrator | + [[ 1 -eq -1 ]] 2025-06-22 20:23:20.153884 | orchestrator | + echo 2025-06-22 20:23:20.153919 | orchestrator | + echo '## Containers @ testbed-node-2' 2025-06-22 20:23:20.153930 | orchestrator | + echo 2025-06-22 20:23:20.153940 | orchestrator | + osism container testbed-node-2 ps 2025-06-22 20:23:22.285862 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-06-22 20:23:22.285970 | orchestrator | a3f549fc3407 registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250530 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_novncproxy 2025-06-22 20:23:22.285986 | orchestrator | 9c2a4ba1f0b3 registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250530 "dumb-init --single-…" 7 minutes ago Up 7 minutes (healthy) nova_conductor 2025-06-22 20:23:22.285998 | orchestrator | 3483b637da31 registry.osism.tech/kolla/release/grafana:12.0.1.20250530 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana 2025-06-22 20:23:22.286010 | orchestrator | f50b4775c14c registry.osism.tech/kolla/release/nova-api:30.0.1.20250530 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) nova_api 2025-06-22 20:23:22.286110 | orchestrator | ae23c4a17065 registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250530 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) nova_scheduler 2025-06-22 20:23:22.286122 | orchestrator | da330c595507 registry.osism.tech/kolla/release/glance-api:29.0.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) glance_api 2025-06-22 20:23:22.286133 | orchestrator | f5a7cafe6bec registry.osism.tech/kolla/release/cinder-scheduler:25.1.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_scheduler 2025-06-22 20:23:22.286144 | orchestrator | d7a8be944728 registry.osism.tech/kolla/release/cinder-api:25.1.1.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) cinder_api 2025-06-22 20:23:22.286155 | orchestrator | 23fb3f698d4e registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250530 "dumb-init --single-…" 11 minutes ago Up 11 minutes prometheus_elasticsearch_exporter 2025-06-22 20:23:22.286168 | orchestrator | 0ecba70d303a registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_cadvisor 2025-06-22 20:23:22.286179 | orchestrator | d768f800d6d4 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250530 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_memcached_exporter 2025-06-22 20:23:22.286189 | orchestrator | fcbf4df2d514 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250530 "dumb-init --single-…" 12 minutes ago Up 12 minutes prometheus_mysqld_exporter 2025-06-22 20:23:22.286200 | orchestrator | 0efc41203a85 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530 "dumb-init --single-…" 12 minutes ago Up 12 
minutes prometheus_node_exporter 2025-06-22 20:23:22.286211 | orchestrator | 100df19c51ce registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) magnum_conductor 2025-06-22 20:23:22.286231 | orchestrator | 4b93b7486132 registry.osism.tech/kolla/release/magnum-api:19.0.1.20250530 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) magnum_api 2025-06-22 20:23:22.286243 | orchestrator | 2e8a25cd796c registry.osism.tech/kolla/release/neutron-server:25.1.1.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) neutron_server 2025-06-22 20:23:22.286254 | orchestrator | 100ea9459ae6 registry.osism.tech/kolla/release/placement-api:12.0.1.20250530 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) placement_api 2025-06-22 20:23:22.286283 | orchestrator | 24d9a0b49dac registry.osism.tech/kolla/release/designate-worker:19.0.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_worker 2025-06-22 20:23:22.286295 | orchestrator | 60972eed61d0 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_mdns 2025-06-22 20:23:22.286305 | orchestrator | 512343e83bdb registry.osism.tech/kolla/release/designate-producer:19.0.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_producer 2025-06-22 20:23:22.286333 | orchestrator | e7ebc30c9194 registry.osism.tech/kolla/release/designate-central:19.0.1.20250530 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) designate_central 2025-06-22 20:23:22.286344 | orchestrator | 10759f5c1ac1 registry.osism.tech/kolla/release/designate-api:19.0.1.20250530 "dumb-init --single-…" 16 minutes ago Up 15 minutes (healthy) designate_api 2025-06-22 20:23:22.286355 | orchestrator | fb32461a6d0d registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) designate_backend_bind9 2025-06-22 20:23:22.286365 | orchestrator | 35e5233c9b13 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_worker 2025-06-22 20:23:22.286376 | orchestrator | 437a5acfc97e registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_keystone_listener 2025-06-22 20:23:22.286387 | orchestrator | 5850f9a0a940 registry.osism.tech/kolla/release/barbican-api:19.0.1.20250530 "dumb-init --single-…" 16 minutes ago Up 16 minutes (healthy) barbican_api 2025-06-22 20:23:22.286397 | orchestrator | ced89008249c registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 16 minutes ago Up 16 minutes ceph-mgr-testbed-node-2 2025-06-22 20:23:22.286408 | orchestrator | e2960c6a2187 registry.osism.tech/kolla/release/keystone:26.0.1.20250530 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone 2025-06-22 20:23:22.286419 | orchestrator | f347b750c977 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250530 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_fernet 2025-06-22 20:23:22.286431 | orchestrator | 2e632430c032 registry.osism.tech/kolla/release/horizon:25.1.1.20250530 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) horizon 2025-06-22 20:23:22.286467 | orchestrator | 71ad0e1672c2 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250530 "dumb-init 
--single-…" 19 minutes ago Up 19 minutes (healthy) keystone_ssh 2025-06-22 20:23:22.286480 | orchestrator | 5e3a1edcd7b8 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250530 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch_dashboards 2025-06-22 20:23:22.286492 | orchestrator | f59b19f7fb84 registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250530 "dumb-init -- kolla_…" 21 minutes ago Up 21 minutes (healthy) mariadb 2025-06-22 20:23:22.286504 | orchestrator | 8a14ac9dc756 registry.osism.tech/kolla/release/opensearch:2.19.2.20250530 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch 2025-06-22 20:23:22.286516 | orchestrator | 1d36f07e9a53 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" 23 minutes ago Up 23 minutes ceph-crash-testbed-node-2 2025-06-22 20:23:22.286534 | orchestrator | 028aa8e9aaf3 registry.osism.tech/kolla/release/keepalived:2.2.7.20250530 "dumb-init --single-…" 24 minutes ago Up 24 minutes keepalived 2025-06-22 20:23:22.286546 | orchestrator | 54bcd6ab414f registry.osism.tech/kolla/release/proxysql:2.7.3.20250530 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) proxysql 2025-06-22 20:23:22.286559 | orchestrator | f832b5786a3d registry.osism.tech/kolla/release/haproxy:2.6.12.20250530 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) haproxy 2025-06-22 20:23:22.286571 | orchestrator | bc89b0bbc330 registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250530 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_northd 2025-06-22 20:23:22.286583 | orchestrator | c7c3e66d8890 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250530 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_sb_db 2025-06-22 20:23:22.286595 | orchestrator | b9bf37764241 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250530 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_nb_db 2025-06-22 20:23:22.286613 | orchestrator | 473ee04d4e9b registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250530 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) rabbitmq 2025-06-22 20:23:22.286625 | orchestrator | 06f4a90617dd registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_controller 2025-06-22 20:23:22.286638 | orchestrator | 36d6c939f330 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 28 minutes ago Up 28 minutes ceph-mon-testbed-node-2 2025-06-22 20:23:22.286651 | orchestrator | e9fd234448f8 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) openvswitch_vswitchd 2025-06-22 20:23:22.286663 | orchestrator | 2f75ae0a1a92 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) openvswitch_db 2025-06-22 20:23:22.286675 | orchestrator | 0c58f23b2ce3 registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250530 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis_sentinel 2025-06-22 20:23:22.286692 | orchestrator | a6777a404845 registry.osism.tech/kolla/release/redis:7.0.15.20250530 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis 2025-06-22 20:23:22.286705 | orchestrator | eedbbcfe00ff registry.osism.tech/kolla/release/memcached:1.6.18.20250530 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) memcached 2025-06-22 20:23:22.286718 | orchestrator | 30f8b60137cc 
registry.osism.tech/kolla/release/cron:3.0.20250530 "dumb-init --single-…" 30 minutes ago Up 30 minutes cron 2025-06-22 20:23:22.286729 | orchestrator | 90b90cf3ce5b registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530 "dumb-init --single-…" 31 minutes ago Up 31 minutes kolla_toolbox 2025-06-22 20:23:22.286742 | orchestrator | 4127fb8b332a registry.osism.tech/kolla/release/fluentd:5.0.7.20250530 "dumb-init --single-…" 31 minutes ago Up 31 minutes fluentd 2025-06-22 20:23:22.514497 | orchestrator | 2025-06-22 20:23:22.514607 | orchestrator | ## Images @ testbed-node-2 2025-06-22 20:23:22.514622 | orchestrator | 2025-06-22 20:23:22.514634 | orchestrator | + echo 2025-06-22 20:23:22.514646 | orchestrator | + echo '## Images @ testbed-node-2' 2025-06-22 20:23:22.514657 | orchestrator | + echo 2025-06-22 20:23:22.514690 | orchestrator | + osism container testbed-node-2 images 2025-06-22 20:23:24.662005 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-06-22 20:23:24.662185 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.18.20250530 174e220ad7bd 3 weeks ago 319MB 2025-06-22 20:23:24.662201 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250530 fc4477504c4f 3 weeks ago 319MB 2025-06-22 20:23:24.662213 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.7.20250530 e984e28a57b0 3 weeks ago 330MB 2025-06-22 20:23:24.662239 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.2.20250530 4cfdb500286b 3 weeks ago 1.59GB 2025-06-22 20:23:24.662251 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.2.20250530 6fcb2e3a907b 3 weeks ago 1.55GB 2025-06-22 20:23:24.662262 | orchestrator | registry.osism.tech/kolla/release/proxysql 2.7.3.20250530 a15c96a3369b 3 weeks ago 419MB 2025-06-22 20:23:24.662272 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.4.1.20250530 33529d2e8ea7 3 weeks ago 747MB 2025-06-22 20:23:24.662283 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.6.12.20250530 e5b003449f46 3 weeks ago 327MB 2025-06-22 20:23:24.662294 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20250530 6b32f249a415 3 weeks ago 376MB 2025-06-22 20:23:24.662305 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250530 a0c9ae28d2e7 3 weeks ago 629MB 2025-06-22 20:23:24.662316 | orchestrator | registry.osism.tech/kolla/release/grafana 12.0.1.20250530 a3fa8a6a4c8c 3 weeks ago 1.01GB 2025-06-22 20:23:24.662326 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.13.20250530 5a4e6980c376 3 weeks ago 591MB 2025-06-22 20:23:24.662337 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20250530 acd5d7cf8545 3 weeks ago 354MB 2025-06-22 20:23:24.662348 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250530 b51a156bac81 3 weeks ago 411MB 2025-06-22 20:23:24.662362 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20250530 528199032acc 3 weeks ago 352MB 2025-06-22 20:23:24.662382 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20250530 1ba9b68ab0fa 3 weeks ago 345MB 2025-06-22 20:23:24.662396 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250530 a076e6a80bbc 3 weeks ago 359MB 2025-06-22 20:23:24.662407 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20250530 4439f43e0847 3 weeks ago 325MB 2025-06-22 20:23:24.662417 | orchestrator | 
registry.osism.tech/kolla/release/redis 7.0.15.20250530 854fb3fbb8d1 3 weeks ago 326MB 2025-06-22 20:23:24.662428 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.1.20250530 81218760d1ef 3 weeks ago 1.21GB 2025-06-22 20:23:24.662439 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.2.20250530 8775c34ea5d6 3 weeks ago 362MB 2025-06-22 20:23:24.662492 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.2.20250530 ebe56e768165 3 weeks ago 362MB 2025-06-22 20:23:24.662504 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20250530 9ac54d9b8655 3 weeks ago 1.15GB 2025-06-22 20:23:24.662514 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20250530 95e52651071a 3 weeks ago 1.04GB 2025-06-22 20:23:24.662525 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.1.1.20250530 47338d40fcbf 3 weeks ago 1.25GB 2025-06-22 20:23:24.662559 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20250530 ecd3067dd808 3 weeks ago 1.2GB 2025-06-22 20:23:24.662571 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20250530 95661613cfe8 3 weeks ago 1.31GB 2025-06-22 20:23:24.662581 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.1.1.20250530 8e97f769e43d 3 weeks ago 1.41GB 2025-06-22 20:23:24.662592 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.1.1.20250530 1a292444fc87 3 weeks ago 1.41GB 2025-06-22 20:23:24.662602 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20250530 9186d487d48c 3 weeks ago 1.06GB 2025-06-22 20:23:24.662613 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20250530 14234b919f18 3 weeks ago 1.06GB 2025-06-22 20:23:24.662643 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20250530 57148ade6082 3 weeks ago 1.05GB 2025-06-22 20:23:24.662654 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20250530 6d21806eb92e 3 weeks ago 1.05GB 2025-06-22 20:23:24.662664 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20250530 d5f39127ee53 3 weeks ago 1.05GB 2025-06-22 20:23:24.662675 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20250530 68be509d15c9 3 weeks ago 1.05GB 2025-06-22 20:23:24.662685 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.0.1.20250530 47425e7b5ce1 3 weeks ago 1.3GB 2025-06-22 20:23:24.662696 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.0.1.20250530 9fd4859cd2ca 3 weeks ago 1.29GB 2025-06-22 20:23:24.662706 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.0.1.20250530 65e1e2f12329 3 weeks ago 1.42GB 2025-06-22 20:23:24.662717 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.0.1.20250530 ded754c3e240 3 weeks ago 1.29GB 2025-06-22 20:23:24.662727 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20250530 dc06d9c53ec5 3 weeks ago 1.06GB 2025-06-22 20:23:24.662738 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20250530 450ccd1a2872 3 weeks ago 1.06GB 2025-06-22 20:23:24.662748 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20250530 2f34913753bd 3 weeks ago 1.06GB 2025-06-22 20:23:24.662759 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20250530 fe53c77abc4a 3 weeks ago 1.11GB 2025-06-22 20:23:24.662769 | orchestrator | 
registry.osism.tech/kolla/release/keystone 26.0.1.20250530 0419c85d82ab 3 weeks ago 1.13GB 2025-06-22 20:23:24.662780 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20250530 7eb5295204d1 3 weeks ago 1.11GB 2025-06-22 20:23:24.662790 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.2.20250530 6a22761bd4f3 3 weeks ago 947MB 2025-06-22 20:23:24.662800 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.2.20250530 63ebc77afae1 3 weeks ago 947MB 2025-06-22 20:23:24.662811 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.2.20250530 694606382374 3 weeks ago 948MB 2025-06-22 20:23:24.662830 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.2.20250530 5b8b94e53819 3 weeks ago 948MB 2025-06-22 20:23:24.662841 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 6 weeks ago 1.27GB 2025-06-22 20:23:24.895200 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh 2025-06-22 20:23:24.903947 | orchestrator | + set -e 2025-06-22 20:23:24.904037 | orchestrator | + source /opt/manager-vars.sh 2025-06-22 20:23:24.905370 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-06-22 20:23:24.905437 | orchestrator | ++ NUMBER_OF_NODES=6 2025-06-22 20:23:24.905476 | orchestrator | ++ export CEPH_VERSION=reef 2025-06-22 20:23:24.905487 | orchestrator | ++ CEPH_VERSION=reef 2025-06-22 20:23:24.905497 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-06-22 20:23:24.905508 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-06-22 20:23:24.905517 | orchestrator | ++ export MANAGER_VERSION=9.1.0 2025-06-22 20:23:24.905527 | orchestrator | ++ MANAGER_VERSION=9.1.0 2025-06-22 20:23:24.905537 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-06-22 20:23:24.905546 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-06-22 20:23:24.905556 | orchestrator | ++ export ARA=false 2025-06-22 20:23:24.905565 | orchestrator | ++ ARA=false 2025-06-22 20:23:24.905575 | orchestrator | ++ export DEPLOY_MODE=manager 2025-06-22 20:23:24.905584 | orchestrator | ++ DEPLOY_MODE=manager 2025-06-22 20:23:24.905595 | orchestrator | ++ export TEMPEST=false 2025-06-22 20:23:24.905604 | orchestrator | ++ TEMPEST=false 2025-06-22 20:23:24.905613 | orchestrator | ++ export IS_ZUUL=true 2025-06-22 20:23:24.905623 | orchestrator | ++ IS_ZUUL=true 2025-06-22 20:23:24.905632 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.14 2025-06-22 20:23:24.905642 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.14 2025-06-22 20:23:24.905651 | orchestrator | ++ export EXTERNAL_API=false 2025-06-22 20:23:24.905661 | orchestrator | ++ EXTERNAL_API=false 2025-06-22 20:23:24.905670 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-06-22 20:23:24.905679 | orchestrator | ++ IMAGE_USER=ubuntu 2025-06-22 20:23:24.905689 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-06-22 20:23:24.905698 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-06-22 20:23:24.905708 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-06-22 20:23:24.905717 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-06-22 20:23:24.905726 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-06-22 20:23:24.905736 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh 2025-06-22 20:23:24.915922 | orchestrator | + set -e 2025-06-22 20:23:24.916026 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-06-22 20:23:24.916047 | orchestrator | ++ export INTERACTIVE=false 
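The trace above shows check-services.sh sourcing /opt/manager-vars.sh and, because CEPH_STACK=ceph-ansible, handing off to 100-ceph-with-ansible.sh, which prints the Ceph status, versions, OSD tree, monitor and quorum information that follows. A minimal sketch of that dispatch plus an illustrative health gate on the `ceph -s` output; the gate is an assumption and is not part of the traced script, which only prints the status:

    #!/usr/bin/env bash
    # Sketch of the dispatch traced above, with an assumed HEALTH_OK gate added.
    set -e

    source /opt/manager-vars.sh   # exports CEPH_STACK, MANAGER_VERSION, ...

    if [[ "$CEPH_STACK" == "ceph-ansible" ]]; then
        sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh
    fi

    # Illustrative gate (assumed): fail unless the cluster reports HEALTH_OK.
    if [[ "$(ceph -s --format json | jq -r .health.status)" != "HEALTH_OK" ]]; then
        echo "Ceph cluster is not healthy" >&2
        exit 1
    fi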
2025-06-22 20:23:24.916065 | orchestrator | ++ INTERACTIVE=false 2025-06-22 20:23:24.916081 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-06-22 20:23:24.916098 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-06-22 20:23:24.916114 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-06-22 20:23:24.916713 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-06-22 20:23:24.920068 | orchestrator | 2025-06-22 20:23:24.920150 | orchestrator | # Ceph status 2025-06-22 20:23:24.920174 | orchestrator | 2025-06-22 20:23:24.920194 | orchestrator | ++ export MANAGER_VERSION=9.1.0 2025-06-22 20:23:24.920206 | orchestrator | ++ MANAGER_VERSION=9.1.0 2025-06-22 20:23:24.920217 | orchestrator | + echo 2025-06-22 20:23:24.920233 | orchestrator | + echo '# Ceph status' 2025-06-22 20:23:24.920244 | orchestrator | + echo 2025-06-22 20:23:24.920256 | orchestrator | + ceph -s 2025-06-22 20:23:25.487012 | orchestrator | cluster: 2025-06-22 20:23:25.487142 | orchestrator | id: 11111111-1111-1111-1111-111111111111 2025-06-22 20:23:25.487163 | orchestrator | health: HEALTH_OK 2025-06-22 20:23:25.487176 | orchestrator | 2025-06-22 20:23:25.487194 | orchestrator | services: 2025-06-22 20:23:25.487213 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 28m) 2025-06-22 20:23:25.487233 | orchestrator | mgr: testbed-node-2(active, since 16m), standbys: testbed-node-1, testbed-node-0 2025-06-22 20:23:25.487252 | orchestrator | mds: 1/1 daemons up, 2 standby 2025-06-22 20:23:25.487271 | orchestrator | osd: 6 osds: 6 up (since 24m), 6 in (since 25m) 2025-06-22 20:23:25.487289 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones) 2025-06-22 20:23:25.487308 | orchestrator | 2025-06-22 20:23:25.487327 | orchestrator | data: 2025-06-22 20:23:25.487345 | orchestrator | volumes: 1/1 healthy 2025-06-22 20:23:25.487364 | orchestrator | pools: 14 pools, 401 pgs 2025-06-22 20:23:25.487382 | orchestrator | objects: 524 objects, 2.2 GiB 2025-06-22 20:23:25.487401 | orchestrator | usage: 7.1 GiB used, 113 GiB / 120 GiB avail 2025-06-22 20:23:25.487420 | orchestrator | pgs: 401 active+clean 2025-06-22 20:23:25.487439 | orchestrator | 2025-06-22 20:23:25.532380 | orchestrator | 2025-06-22 20:23:25.532525 | orchestrator | # Ceph versions 2025-06-22 20:23:25.532544 | orchestrator | 2025-06-22 20:23:25.532556 | orchestrator | + echo 2025-06-22 20:23:25.532568 | orchestrator | + echo '# Ceph versions' 2025-06-22 20:23:25.532580 | orchestrator | + echo 2025-06-22 20:23:25.532591 | orchestrator | + ceph versions 2025-06-22 20:23:26.146263 | orchestrator | { 2025-06-22 20:23:26.146364 | orchestrator | "mon": { 2025-06-22 20:23:26.146405 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-06-22 20:23:26.146418 | orchestrator | }, 2025-06-22 20:23:26.146428 | orchestrator | "mgr": { 2025-06-22 20:23:26.146439 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-06-22 20:23:26.146484 | orchestrator | }, 2025-06-22 20:23:26.146495 | orchestrator | "osd": { 2025-06-22 20:23:26.146505 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6 2025-06-22 20:23:26.146516 | orchestrator | }, 2025-06-22 20:23:26.146539 | orchestrator | "mds": { 2025-06-22 20:23:26.146551 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 
2025-06-22 20:23:26.146562 | orchestrator | }, 2025-06-22 20:23:26.146572 | orchestrator | "rgw": { 2025-06-22 20:23:26.146583 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-06-22 20:23:26.146593 | orchestrator | }, 2025-06-22 20:23:26.146604 | orchestrator | "overall": { 2025-06-22 20:23:26.146615 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18 2025-06-22 20:23:26.146626 | orchestrator | } 2025-06-22 20:23:26.146636 | orchestrator | } 2025-06-22 20:23:26.194979 | orchestrator | 2025-06-22 20:23:26.195061 | orchestrator | # Ceph OSD tree 2025-06-22 20:23:26.195072 | orchestrator | 2025-06-22 20:23:26.195083 | orchestrator | + echo 2025-06-22 20:23:26.195094 | orchestrator | + echo '# Ceph OSD tree' 2025-06-22 20:23:26.195105 | orchestrator | + echo 2025-06-22 20:23:26.195117 | orchestrator | + ceph osd df tree 2025-06-22 20:23:26.737781 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME 2025-06-22 20:23:26.737911 | orchestrator | -1 0.11691 - 120 GiB 7.1 GiB 6.7 GiB 6 KiB 434 MiB 113 GiB 5.92 1.00 - root default 2025-06-22 20:23:26.737927 | orchestrator | -3 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-3 2025-06-22 20:23:26.737939 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.3 GiB 1 KiB 70 MiB 19 GiB 6.81 1.15 201 up osd.0 2025-06-22 20:23:26.737950 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 1.0 GiB 955 MiB 1 KiB 74 MiB 19 GiB 5.03 0.85 189 up osd.5 2025-06-22 20:23:26.737961 | orchestrator | -5 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 147 MiB 38 GiB 5.93 1.00 - host testbed-node-4 2025-06-22 20:23:26.737971 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 789 MiB 715 MiB 1 KiB 74 MiB 19 GiB 3.86 0.65 176 up osd.1 2025-06-22 20:23:26.737981 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.6 GiB 1.5 GiB 1 KiB 74 MiB 18 GiB 8.00 1.35 216 up osd.3 2025-06-22 20:23:26.737992 | orchestrator | -7 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-5 2025-06-22 20:23:26.738003 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.3 GiB 1 KiB 70 MiB 19 GiB 7.04 1.19 198 up osd.2 2025-06-22 20:23:26.738013 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 981 MiB 907 MiB 1 KiB 74 MiB 19 GiB 4.79 0.81 190 up osd.4 2025-06-22 20:23:26.738074 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 434 MiB 113 GiB 5.92 2025-06-22 20:23:26.738085 | orchestrator | MIN/MAX VAR: 0.65/1.35 STDDEV: 1.45 2025-06-22 20:23:26.782803 | orchestrator | 2025-06-22 20:23:26.782913 | orchestrator | # Ceph monitor status 2025-06-22 20:23:26.782929 | orchestrator | 2025-06-22 20:23:26.782940 | orchestrator | + echo 2025-06-22 20:23:26.782952 | orchestrator | + echo '# Ceph monitor status' 2025-06-22 20:23:26.782962 | orchestrator | + echo 2025-06-22 20:23:26.782973 | orchestrator | + ceph mon stat 2025-06-22 20:23:27.381992 | orchestrator | e1: 3 mons at {testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 8, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2 2025-06-22 20:23:27.429123 | orchestrator | 2025-06-22 20:23:27.429220 | orchestrator | # Ceph quorum status 2025-06-22 20:23:27.429235 | orchestrator | 2025-06-22 20:23:27.429247 | orchestrator | + 
echo 2025-06-22 20:23:27.429258 | orchestrator | + echo '# Ceph quorum status' 2025-06-22 20:23:27.429269 | orchestrator | + echo 2025-06-22 20:23:27.429763 | orchestrator | + ceph quorum_status 2025-06-22 20:23:27.429919 | orchestrator | + jq 2025-06-22 20:23:28.075958 | orchestrator | { 2025-06-22 20:23:28.076085 | orchestrator | "election_epoch": 8, 2025-06-22 20:23:28.076113 | orchestrator | "quorum": [ 2025-06-22 20:23:28.076133 | orchestrator | 0, 2025-06-22 20:23:28.076146 | orchestrator | 1, 2025-06-22 20:23:28.076156 | orchestrator | 2 2025-06-22 20:23:28.076167 | orchestrator | ], 2025-06-22 20:23:28.076178 | orchestrator | "quorum_names": [ 2025-06-22 20:23:28.076188 | orchestrator | "testbed-node-0", 2025-06-22 20:23:28.076199 | orchestrator | "testbed-node-1", 2025-06-22 20:23:28.076209 | orchestrator | "testbed-node-2" 2025-06-22 20:23:28.076220 | orchestrator | ], 2025-06-22 20:23:28.076231 | orchestrator | "quorum_leader_name": "testbed-node-0", 2025-06-22 20:23:28.076243 | orchestrator | "quorum_age": 1732, 2025-06-22 20:23:28.076253 | orchestrator | "features": { 2025-06-22 20:23:28.076264 | orchestrator | "quorum_con": "4540138322906710015", 2025-06-22 20:23:28.076275 | orchestrator | "quorum_mon": [ 2025-06-22 20:23:28.076285 | orchestrator | "kraken", 2025-06-22 20:23:28.076296 | orchestrator | "luminous", 2025-06-22 20:23:28.076306 | orchestrator | "mimic", 2025-06-22 20:23:28.076317 | orchestrator | "osdmap-prune", 2025-06-22 20:23:28.076327 | orchestrator | "nautilus", 2025-06-22 20:23:28.076338 | orchestrator | "octopus", 2025-06-22 20:23:28.076348 | orchestrator | "pacific", 2025-06-22 20:23:28.076358 | orchestrator | "elector-pinging", 2025-06-22 20:23:28.076369 | orchestrator | "quincy", 2025-06-22 20:23:28.076379 | orchestrator | "reef" 2025-06-22 20:23:28.076390 | orchestrator | ] 2025-06-22 20:23:28.076400 | orchestrator | }, 2025-06-22 20:23:28.076411 | orchestrator | "monmap": { 2025-06-22 20:23:28.076421 | orchestrator | "epoch": 1, 2025-06-22 20:23:28.076432 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111", 2025-06-22 20:23:28.076501 | orchestrator | "modified": "2025-06-22T19:54:18.366921Z", 2025-06-22 20:23:28.076522 | orchestrator | "created": "2025-06-22T19:54:18.366921Z", 2025-06-22 20:23:28.076539 | orchestrator | "min_mon_release": 18, 2025-06-22 20:23:28.076557 | orchestrator | "min_mon_release_name": "reef", 2025-06-22 20:23:28.076576 | orchestrator | "election_strategy": 1, 2025-06-22 20:23:28.076596 | orchestrator | "disallowed_leaders: ": "", 2025-06-22 20:23:28.076615 | orchestrator | "stretch_mode": false, 2025-06-22 20:23:28.076633 | orchestrator | "tiebreaker_mon": "", 2025-06-22 20:23:28.076649 | orchestrator | "removed_ranks: ": "", 2025-06-22 20:23:28.076660 | orchestrator | "features": { 2025-06-22 20:23:28.076678 | orchestrator | "persistent": [ 2025-06-22 20:23:28.076698 | orchestrator | "kraken", 2025-06-22 20:23:28.076725 | orchestrator | "luminous", 2025-06-22 20:23:28.076742 | orchestrator | "mimic", 2025-06-22 20:23:28.076759 | orchestrator | "osdmap-prune", 2025-06-22 20:23:28.076777 | orchestrator | "nautilus", 2025-06-22 20:23:28.076794 | orchestrator | "octopus", 2025-06-22 20:23:28.076813 | orchestrator | "pacific", 2025-06-22 20:23:28.076830 | orchestrator | "elector-pinging", 2025-06-22 20:23:28.076848 | orchestrator | "quincy", 2025-06-22 20:23:28.076867 | orchestrator | "reef" 2025-06-22 20:23:28.076885 | orchestrator | ], 2025-06-22 20:23:28.076904 | orchestrator | "optional": [] 2025-06-22 
20:23:28.076916 | orchestrator | }, 2025-06-22 20:23:28.076927 | orchestrator | "mons": [ 2025-06-22 20:23:28.076937 | orchestrator | { 2025-06-22 20:23:28.076948 | orchestrator | "rank": 0, 2025-06-22 20:23:28.076958 | orchestrator | "name": "testbed-node-0", 2025-06-22 20:23:28.076969 | orchestrator | "public_addrs": { 2025-06-22 20:23:28.076979 | orchestrator | "addrvec": [ 2025-06-22 20:23:28.076990 | orchestrator | { 2025-06-22 20:23:28.077000 | orchestrator | "type": "v2", 2025-06-22 20:23:28.077011 | orchestrator | "addr": "192.168.16.10:3300", 2025-06-22 20:23:28.077022 | orchestrator | "nonce": 0 2025-06-22 20:23:28.077032 | orchestrator | }, 2025-06-22 20:23:28.077043 | orchestrator | { 2025-06-22 20:23:28.077053 | orchestrator | "type": "v1", 2025-06-22 20:23:28.077064 | orchestrator | "addr": "192.168.16.10:6789", 2025-06-22 20:23:28.077074 | orchestrator | "nonce": 0 2025-06-22 20:23:28.077085 | orchestrator | } 2025-06-22 20:23:28.077095 | orchestrator | ] 2025-06-22 20:23:28.077106 | orchestrator | }, 2025-06-22 20:23:28.077117 | orchestrator | "addr": "192.168.16.10:6789/0", 2025-06-22 20:23:28.077155 | orchestrator | "public_addr": "192.168.16.10:6789/0", 2025-06-22 20:23:28.077166 | orchestrator | "priority": 0, 2025-06-22 20:23:28.077176 | orchestrator | "weight": 0, 2025-06-22 20:23:28.077187 | orchestrator | "crush_location": "{}" 2025-06-22 20:23:28.077197 | orchestrator | }, 2025-06-22 20:23:28.077207 | orchestrator | { 2025-06-22 20:23:28.077218 | orchestrator | "rank": 1, 2025-06-22 20:23:28.077228 | orchestrator | "name": "testbed-node-1", 2025-06-22 20:23:28.077239 | orchestrator | "public_addrs": { 2025-06-22 20:23:28.077249 | orchestrator | "addrvec": [ 2025-06-22 20:23:28.077260 | orchestrator | { 2025-06-22 20:23:28.077271 | orchestrator | "type": "v2", 2025-06-22 20:23:28.077281 | orchestrator | "addr": "192.168.16.11:3300", 2025-06-22 20:23:28.077292 | orchestrator | "nonce": 0 2025-06-22 20:23:28.077302 | orchestrator | }, 2025-06-22 20:23:28.077313 | orchestrator | { 2025-06-22 20:23:28.077324 | orchestrator | "type": "v1", 2025-06-22 20:23:28.077334 | orchestrator | "addr": "192.168.16.11:6789", 2025-06-22 20:23:28.077345 | orchestrator | "nonce": 0 2025-06-22 20:23:28.077355 | orchestrator | } 2025-06-22 20:23:28.077366 | orchestrator | ] 2025-06-22 20:23:28.077376 | orchestrator | }, 2025-06-22 20:23:28.077387 | orchestrator | "addr": "192.168.16.11:6789/0", 2025-06-22 20:23:28.077398 | orchestrator | "public_addr": "192.168.16.11:6789/0", 2025-06-22 20:23:28.077409 | orchestrator | "priority": 0, 2025-06-22 20:23:28.077419 | orchestrator | "weight": 0, 2025-06-22 20:23:28.077429 | orchestrator | "crush_location": "{}" 2025-06-22 20:23:28.077440 | orchestrator | }, 2025-06-22 20:23:28.077486 | orchestrator | { 2025-06-22 20:23:28.077498 | orchestrator | "rank": 2, 2025-06-22 20:23:28.077509 | orchestrator | "name": "testbed-node-2", 2025-06-22 20:23:28.077520 | orchestrator | "public_addrs": { 2025-06-22 20:23:28.077530 | orchestrator | "addrvec": [ 2025-06-22 20:23:28.077541 | orchestrator | { 2025-06-22 20:23:28.077555 | orchestrator | "type": "v2", 2025-06-22 20:23:28.077573 | orchestrator | "addr": "192.168.16.12:3300", 2025-06-22 20:23:28.077590 | orchestrator | "nonce": 0 2025-06-22 20:23:28.077609 | orchestrator | }, 2025-06-22 20:23:28.077629 | orchestrator | { 2025-06-22 20:23:28.077648 | orchestrator | "type": "v1", 2025-06-22 20:23:28.077667 | orchestrator | "addr": "192.168.16.12:6789", 2025-06-22 20:23:28.077678 | orchestrator | "nonce": 0 
2025-06-22 20:23:28.077689 | orchestrator | } 2025-06-22 20:23:28.077699 | orchestrator | ] 2025-06-22 20:23:28.077710 | orchestrator | }, 2025-06-22 20:23:28.077720 | orchestrator | "addr": "192.168.16.12:6789/0", 2025-06-22 20:23:28.077731 | orchestrator | "public_addr": "192.168.16.12:6789/0", 2025-06-22 20:23:28.077742 | orchestrator | "priority": 0, 2025-06-22 20:23:28.077752 | orchestrator | "weight": 0, 2025-06-22 20:23:28.077763 | orchestrator | "crush_location": "{}" 2025-06-22 20:23:28.077773 | orchestrator | } 2025-06-22 20:23:28.077784 | orchestrator | ] 2025-06-22 20:23:28.077794 | orchestrator | } 2025-06-22 20:23:28.077805 | orchestrator | } 2025-06-22 20:23:28.077830 | orchestrator | 2025-06-22 20:23:28.077841 | orchestrator | # Ceph free space status 2025-06-22 20:23:28.077852 | orchestrator | 2025-06-22 20:23:28.077863 | orchestrator | + echo 2025-06-22 20:23:28.077873 | orchestrator | + echo '# Ceph free space status' 2025-06-22 20:23:28.077884 | orchestrator | + echo 2025-06-22 20:23:28.077895 | orchestrator | + ceph df 2025-06-22 20:23:28.657147 | orchestrator | --- RAW STORAGE --- 2025-06-22 20:23:28.657269 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED 2025-06-22 20:23:28.657304 | orchestrator | hdd 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92 2025-06-22 20:23:28.657318 | orchestrator | TOTAL 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92 2025-06-22 20:23:28.657329 | orchestrator | 2025-06-22 20:23:28.657340 | orchestrator | --- POOLS --- 2025-06-22 20:23:28.657352 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL 2025-06-22 20:23:28.657364 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 52 GiB 2025-06-22 20:23:28.657374 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB 2025-06-22 20:23:28.657385 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB 2025-06-22 20:23:28.657395 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB 2025-06-22 20:23:28.657406 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB 2025-06-22 20:23:28.657439 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB 2025-06-22 20:23:28.657482 | orchestrator | default.rgw.log 7 32 3.6 KiB 177 408 KiB 0 35 GiB 2025-06-22 20:23:28.657494 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB 2025-06-22 20:23:28.657504 | orchestrator | .rgw.root 9 32 3.9 KiB 8 64 KiB 0 52 GiB 2025-06-22 20:23:28.657515 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB 2025-06-22 20:23:28.657525 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB 2025-06-22 20:23:28.657536 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 6.00 35 GiB 2025-06-22 20:23:28.657547 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB 2025-06-22 20:23:28.657558 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB 2025-06-22 20:23:28.699940 | orchestrator | ++ semver 9.1.0 5.0.0 2025-06-22 20:23:28.752418 | orchestrator | + [[ 1 -eq -1 ]] 2025-06-22 20:23:28.752554 | orchestrator | + [[ ! -e /etc/redhat-release ]] 2025-06-22 20:23:28.752570 | orchestrator | + osism apply facts 2025-06-22 20:23:30.499984 | orchestrator | Registering Redlock._acquired_script 2025-06-22 20:23:30.500243 | orchestrator | Registering Redlock._extend_script 2025-06-22 20:23:30.500269 | orchestrator | Registering Redlock._release_script 2025-06-22 20:23:30.560797 | orchestrator | 2025-06-22 20:23:30 | INFO  | Task c81502bf-623f-4739-8a57-f4f61abfba4c (facts) was prepared for execution. 
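The quorum_status JSON above already contains what is needed for a quick monitor health check: quorum_names lists the monitors currently in quorum and monmap.mons lists all configured monitors. As an illustrative sketch only (assuming a host with the ceph CLI, an admin keyring and jq available; this is not part of the job script), the two can be compared like this:

    # Sketch: verify that every monitor in the monmap is currently in quorum.
    # Assumes ceph CLI with admin credentials and jq; not taken from the job itself.
    in_quorum=$(ceph quorum_status -f json | jq '.quorum_names | length')
    in_monmap=$(ceph quorum_status -f json | jq '.monmap.mons | length')
    if [ "$in_quorum" -ne "$in_monmap" ]; then
        echo "WARNING: only $in_quorum of $in_monmap monitors are in quorum" >&2
    fi

In the run above both values are 3, matching the three testbed-node monitors shown in the mon stat output.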
2025-06-22 20:23:30.560904 | orchestrator | 2025-06-22 20:23:30 | INFO  | It takes a moment until task c81502bf-623f-4739-8a57-f4f61abfba4c (facts) has been started and output is visible here. 2025-06-22 20:23:34.732867 | orchestrator | 2025-06-22 20:23:34.733315 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-06-22 20:23:34.734301 | orchestrator | 2025-06-22 20:23:34.736166 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-06-22 20:23:34.737198 | orchestrator | Sunday 22 June 2025 20:23:34 +0000 (0:00:00.284) 0:00:00.284 *********** 2025-06-22 20:23:36.324534 | orchestrator | ok: [testbed-manager] 2025-06-22 20:23:36.325532 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:23:36.328265 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:23:36.328332 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:23:36.328658 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:23:36.329403 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:23:36.330528 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:23:36.331160 | orchestrator | 2025-06-22 20:23:36.332091 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-06-22 20:23:36.332954 | orchestrator | Sunday 22 June 2025 20:23:36 +0000 (0:00:01.591) 0:00:01.875 *********** 2025-06-22 20:23:36.490519 | orchestrator | skipping: [testbed-manager] 2025-06-22 20:23:36.571358 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:23:36.654232 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:23:36.735720 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:23:36.817114 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:23:37.563737 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:23:37.564256 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:23:37.564946 | orchestrator | 2025-06-22 20:23:37.565371 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-06-22 20:23:37.569225 | orchestrator | 2025-06-22 20:23:37.569252 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-06-22 20:23:37.569263 | orchestrator | Sunday 22 June 2025 20:23:37 +0000 (0:00:01.243) 0:00:03.119 *********** 2025-06-22 20:23:42.991943 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:23:42.993093 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:23:42.993722 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:23:42.995148 | orchestrator | ok: [testbed-manager] 2025-06-22 20:23:42.997803 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:23:42.999410 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:23:43.000657 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:23:43.001406 | orchestrator | 2025-06-22 20:23:43.003068 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-06-22 20:23:43.003863 | orchestrator | 2025-06-22 20:23:43.005223 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-06-22 20:23:43.006591 | orchestrator | Sunday 22 June 2025 20:23:42 +0000 (0:00:05.427) 0:00:08.547 *********** 2025-06-22 20:23:43.160336 | orchestrator | skipping: [testbed-manager] 2025-06-22 20:23:43.238757 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:23:43.318772 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:23:43.404441 | orchestrator | skipping: [testbed-node-2] 2025-06-22 
20:23:43.486927 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:23:43.531438 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:23:43.532038 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:23:43.533335 | orchestrator | 2025-06-22 20:23:43.535061 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 20:23:43.535359 | orchestrator | 2025-06-22 20:23:43 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-22 20:23:43.535584 | orchestrator | 2025-06-22 20:23:43 | INFO  | Please wait and do not abort execution. 2025-06-22 20:23:43.536810 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 20:23:43.537160 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 20:23:43.537574 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 20:23:43.538198 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 20:23:43.538710 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 20:23:43.539131 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 20:23:43.539584 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 20:23:43.540034 | orchestrator | 2025-06-22 20:23:43.540407 | orchestrator | 2025-06-22 20:23:43.540944 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 20:23:43.541300 | orchestrator | Sunday 22 June 2025 20:23:43 +0000 (0:00:00.540) 0:00:09.087 *********** 2025-06-22 20:23:43.542043 | orchestrator | =============================================================================== 2025-06-22 20:23:43.542964 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.43s 2025-06-22 20:23:43.543697 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.59s 2025-06-22 20:23:43.544497 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.24s 2025-06-22 20:23:43.545231 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.54s 2025-06-22 20:23:44.186822 | orchestrator | + osism validate ceph-mons 2025-06-22 20:23:45.934901 | orchestrator | Registering Redlock._acquired_script 2025-06-22 20:23:45.935007 | orchestrator | Registering Redlock._extend_script 2025-06-22 20:23:45.935026 | orchestrator | Registering Redlock._release_script 2025-06-22 20:24:04.841742 | orchestrator | 2025-06-22 20:24:04.841860 | orchestrator | PLAY [Ceph validate mons] ****************************************************** 2025-06-22 20:24:04.841877 | orchestrator | 2025-06-22 20:24:04.841889 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-06-22 20:24:04.841901 | orchestrator | Sunday 22 June 2025 20:23:50 +0000 (0:00:00.419) 0:00:00.419 *********** 2025-06-22 20:24:04.841936 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-22 20:24:04.841948 | orchestrator | 2025-06-22 20:24:04.841959 | orchestrator | TASK [Create report output directory] ****************************************** 2025-06-22 
20:24:04.841969 | orchestrator | Sunday 22 June 2025 20:23:50 +0000 (0:00:00.640) 0:00:01.059 *********** 2025-06-22 20:24:04.841980 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-22 20:24:04.841990 | orchestrator | 2025-06-22 20:24:04.842001 | orchestrator | TASK [Define report vars] ****************************************************** 2025-06-22 20:24:04.842012 | orchestrator | Sunday 22 June 2025 20:23:51 +0000 (0:00:00.828) 0:00:01.887 *********** 2025-06-22 20:24:04.842076 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:24:04.842088 | orchestrator | 2025-06-22 20:24:04.842099 | orchestrator | TASK [Prepare test data for container existance test] ************************** 2025-06-22 20:24:04.842109 | orchestrator | Sunday 22 June 2025 20:23:51 +0000 (0:00:00.241) 0:00:02.129 *********** 2025-06-22 20:24:04.842120 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:24:04.842132 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:24:04.842142 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:24:04.842153 | orchestrator | 2025-06-22 20:24:04.842163 | orchestrator | TASK [Get container info] ****************************************************** 2025-06-22 20:24:04.842174 | orchestrator | Sunday 22 June 2025 20:23:52 +0000 (0:00:00.284) 0:00:02.413 *********** 2025-06-22 20:24:04.842184 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:24:04.842195 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:24:04.842205 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:24:04.842216 | orchestrator | 2025-06-22 20:24:04.842226 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2025-06-22 20:24:04.842237 | orchestrator | Sunday 22 June 2025 20:23:53 +0000 (0:00:00.984) 0:00:03.398 *********** 2025-06-22 20:24:04.842247 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:24:04.842259 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:24:04.842271 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:24:04.842282 | orchestrator | 2025-06-22 20:24:04.842295 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2025-06-22 20:24:04.842307 | orchestrator | Sunday 22 June 2025 20:23:53 +0000 (0:00:00.261) 0:00:03.660 *********** 2025-06-22 20:24:04.842319 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:24:04.842331 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:24:04.842342 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:24:04.842354 | orchestrator | 2025-06-22 20:24:04.842366 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-06-22 20:24:04.842378 | orchestrator | Sunday 22 June 2025 20:23:53 +0000 (0:00:00.420) 0:00:04.081 *********** 2025-06-22 20:24:04.842391 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:24:04.842402 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:24:04.842414 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:24:04.842426 | orchestrator | 2025-06-22 20:24:04.842438 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ******************** 2025-06-22 20:24:04.842449 | orchestrator | Sunday 22 June 2025 20:23:54 +0000 (0:00:00.267) 0:00:04.349 *********** 2025-06-22 20:24:04.842485 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:24:04.842505 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:24:04.842516 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:24:04.842526 | orchestrator | 2025-06-22 20:24:04.842536 | 
orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************ 2025-06-22 20:24:04.842547 | orchestrator | Sunday 22 June 2025 20:23:54 +0000 (0:00:00.261) 0:00:04.610 *********** 2025-06-22 20:24:04.842557 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:24:04.842568 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:24:04.842578 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:24:04.842589 | orchestrator | 2025-06-22 20:24:04.842599 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-06-22 20:24:04.842610 | orchestrator | Sunday 22 June 2025 20:23:54 +0000 (0:00:00.287) 0:00:04.897 *********** 2025-06-22 20:24:04.842620 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:24:04.842640 | orchestrator | 2025-06-22 20:24:04.842651 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-06-22 20:24:04.842661 | orchestrator | Sunday 22 June 2025 20:23:55 +0000 (0:00:00.516) 0:00:05.414 *********** 2025-06-22 20:24:04.842672 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:24:04.842682 | orchestrator | 2025-06-22 20:24:04.842693 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-06-22 20:24:04.842703 | orchestrator | Sunday 22 June 2025 20:23:55 +0000 (0:00:00.227) 0:00:05.642 *********** 2025-06-22 20:24:04.842730 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:24:04.842741 | orchestrator | 2025-06-22 20:24:04.842752 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-22 20:24:04.842763 | orchestrator | Sunday 22 June 2025 20:23:55 +0000 (0:00:00.256) 0:00:05.898 *********** 2025-06-22 20:24:04.842773 | orchestrator | 2025-06-22 20:24:04.842784 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-22 20:24:04.842795 | orchestrator | Sunday 22 June 2025 20:23:55 +0000 (0:00:00.064) 0:00:05.962 *********** 2025-06-22 20:24:04.842805 | orchestrator | 2025-06-22 20:24:04.842816 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-22 20:24:04.842826 | orchestrator | Sunday 22 June 2025 20:23:55 +0000 (0:00:00.062) 0:00:06.025 *********** 2025-06-22 20:24:04.842836 | orchestrator | 2025-06-22 20:24:04.842847 | orchestrator | TASK [Print report file information] ******************************************* 2025-06-22 20:24:04.842857 | orchestrator | Sunday 22 June 2025 20:23:55 +0000 (0:00:00.065) 0:00:06.090 *********** 2025-06-22 20:24:04.842868 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:24:04.842878 | orchestrator | 2025-06-22 20:24:04.842889 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2025-06-22 20:24:04.842899 | orchestrator | Sunday 22 June 2025 20:23:56 +0000 (0:00:00.232) 0:00:06.323 *********** 2025-06-22 20:24:04.842910 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:24:04.842920 | orchestrator | 2025-06-22 20:24:04.842947 | orchestrator | TASK [Prepare quorum test vars] ************************************************ 2025-06-22 20:24:04.842959 | orchestrator | Sunday 22 June 2025 20:23:56 +0000 (0:00:00.231) 0:00:06.554 *********** 2025-06-22 20:24:04.842969 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:24:04.842980 | orchestrator | 2025-06-22 20:24:04.842990 | orchestrator | TASK [Get monmap info from one mon container] 
********************************** 2025-06-22 20:24:04.843001 | orchestrator | Sunday 22 June 2025 20:23:56 +0000 (0:00:00.135) 0:00:06.689 *********** 2025-06-22 20:24:04.843011 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:24:04.843022 | orchestrator | 2025-06-22 20:24:04.843032 | orchestrator | TASK [Set quorum test data] **************************************************** 2025-06-22 20:24:04.843043 | orchestrator | Sunday 22 June 2025 20:23:58 +0000 (0:00:01.519) 0:00:08.209 *********** 2025-06-22 20:24:04.843053 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:24:04.843064 | orchestrator | 2025-06-22 20:24:04.843074 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] ********************** 2025-06-22 20:24:04.843085 | orchestrator | Sunday 22 June 2025 20:23:58 +0000 (0:00:00.315) 0:00:08.524 *********** 2025-06-22 20:24:04.843095 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:24:04.843106 | orchestrator | 2025-06-22 20:24:04.843116 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] ************************** 2025-06-22 20:24:04.843127 | orchestrator | Sunday 22 June 2025 20:23:58 +0000 (0:00:00.303) 0:00:08.827 *********** 2025-06-22 20:24:04.843143 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:24:04.843153 | orchestrator | 2025-06-22 20:24:04.843164 | orchestrator | TASK [Set fsid test vars] ****************************************************** 2025-06-22 20:24:04.843175 | orchestrator | Sunday 22 June 2025 20:23:58 +0000 (0:00:00.332) 0:00:09.160 *********** 2025-06-22 20:24:04.843185 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:24:04.843195 | orchestrator | 2025-06-22 20:24:04.843206 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] ************* 2025-06-22 20:24:04.843216 | orchestrator | Sunday 22 June 2025 20:23:59 +0000 (0:00:00.301) 0:00:09.461 *********** 2025-06-22 20:24:04.843233 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:24:04.843244 | orchestrator | 2025-06-22 20:24:04.843254 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] ********************** 2025-06-22 20:24:04.843265 | orchestrator | Sunday 22 June 2025 20:23:59 +0000 (0:00:00.120) 0:00:09.582 *********** 2025-06-22 20:24:04.843275 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:24:04.843285 | orchestrator | 2025-06-22 20:24:04.843296 | orchestrator | TASK [Prepare status test vars] ************************************************ 2025-06-22 20:24:04.843306 | orchestrator | Sunday 22 June 2025 20:23:59 +0000 (0:00:00.120) 0:00:09.703 *********** 2025-06-22 20:24:04.843317 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:24:04.843327 | orchestrator | 2025-06-22 20:24:04.843338 | orchestrator | TASK [Gather status data] ****************************************************** 2025-06-22 20:24:04.843348 | orchestrator | Sunday 22 June 2025 20:23:59 +0000 (0:00:00.119) 0:00:09.822 *********** 2025-06-22 20:24:04.843359 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:24:04.843369 | orchestrator | 2025-06-22 20:24:04.843380 | orchestrator | TASK [Set health test data] **************************************************** 2025-06-22 20:24:04.843390 | orchestrator | Sunday 22 June 2025 20:24:00 +0000 (0:00:01.314) 0:00:11.137 *********** 2025-06-22 20:24:04.843401 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:24:04.843411 | orchestrator | 2025-06-22 20:24:04.843421 | orchestrator | TASK [Fail cluster-health if health is not acceptable] 
************************* 2025-06-22 20:24:04.843432 | orchestrator | Sunday 22 June 2025 20:24:01 +0000 (0:00:00.306) 0:00:11.444 *********** 2025-06-22 20:24:04.843443 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:24:04.843453 | orchestrator | 2025-06-22 20:24:04.843510 | orchestrator | TASK [Pass cluster-health if health is acceptable] ***************************** 2025-06-22 20:24:04.843522 | orchestrator | Sunday 22 June 2025 20:24:01 +0000 (0:00:00.141) 0:00:11.585 *********** 2025-06-22 20:24:04.843533 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:24:04.843543 | orchestrator | 2025-06-22 20:24:04.843553 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] **************** 2025-06-22 20:24:04.843564 | orchestrator | Sunday 22 June 2025 20:24:01 +0000 (0:00:00.140) 0:00:11.726 *********** 2025-06-22 20:24:04.843574 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:24:04.843585 | orchestrator | 2025-06-22 20:24:04.843595 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] **************************** 2025-06-22 20:24:04.843606 | orchestrator | Sunday 22 June 2025 20:24:01 +0000 (0:00:00.142) 0:00:11.869 *********** 2025-06-22 20:24:04.843616 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:24:04.843626 | orchestrator | 2025-06-22 20:24:04.843637 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-06-22 20:24:04.843647 | orchestrator | Sunday 22 June 2025 20:24:01 +0000 (0:00:00.325) 0:00:12.194 *********** 2025-06-22 20:24:04.843658 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-22 20:24:04.843668 | orchestrator | 2025-06-22 20:24:04.843679 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-06-22 20:24:04.843689 | orchestrator | Sunday 22 June 2025 20:24:02 +0000 (0:00:00.254) 0:00:12.449 *********** 2025-06-22 20:24:04.843700 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:24:04.843710 | orchestrator | 2025-06-22 20:24:04.843721 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-06-22 20:24:04.843731 | orchestrator | Sunday 22 June 2025 20:24:02 +0000 (0:00:00.249) 0:00:12.698 *********** 2025-06-22 20:24:04.843742 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-22 20:24:04.843753 | orchestrator | 2025-06-22 20:24:04.843763 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-06-22 20:24:04.843774 | orchestrator | Sunday 22 June 2025 20:24:04 +0000 (0:00:01.605) 0:00:14.303 *********** 2025-06-22 20:24:04.843784 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-22 20:24:04.843795 | orchestrator | 2025-06-22 20:24:04.843805 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-06-22 20:24:04.843823 | orchestrator | Sunday 22 June 2025 20:24:04 +0000 (0:00:00.257) 0:00:14.561 *********** 2025-06-22 20:24:04.843833 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-22 20:24:04.843844 | orchestrator | 2025-06-22 20:24:04.843861 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-22 20:24:07.163985 | orchestrator | Sunday 22 June 2025 20:24:04 +0000 (0:00:00.255) 0:00:14.816 *********** 2025-06-22 20:24:07.164072 | orchestrator | 2025-06-22 20:24:07.164088 | orchestrator | 
TASK [Flush handlers] ********************************************************** 2025-06-22 20:24:07.164099 | orchestrator | Sunday 22 June 2025 20:24:04 +0000 (0:00:00.068) 0:00:14.884 *********** 2025-06-22 20:24:07.164110 | orchestrator | 2025-06-22 20:24:07.164121 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-22 20:24:07.164132 | orchestrator | Sunday 22 June 2025 20:24:04 +0000 (0:00:00.067) 0:00:14.952 *********** 2025-06-22 20:24:07.164143 | orchestrator | 2025-06-22 20:24:07.164154 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-06-22 20:24:07.164164 | orchestrator | Sunday 22 June 2025 20:24:04 +0000 (0:00:00.071) 0:00:15.023 *********** 2025-06-22 20:24:07.164175 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-22 20:24:07.164186 | orchestrator | 2025-06-22 20:24:07.164197 | orchestrator | TASK [Print report file information] ******************************************* 2025-06-22 20:24:07.164207 | orchestrator | Sunday 22 June 2025 20:24:06 +0000 (0:00:01.558) 0:00:16.581 *********** 2025-06-22 20:24:07.164218 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2025-06-22 20:24:07.164243 | orchestrator |  "msg": [ 2025-06-22 20:24:07.164255 | orchestrator |  "Validator run completed.", 2025-06-22 20:24:07.164266 | orchestrator |  "You can find the report file here:", 2025-06-22 20:24:07.164277 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2025-06-22T20:23:50+00:00-report.json", 2025-06-22 20:24:07.164290 | orchestrator |  "on the following host:", 2025-06-22 20:24:07.164300 | orchestrator |  "testbed-manager" 2025-06-22 20:24:07.164311 | orchestrator |  ] 2025-06-22 20:24:07.164322 | orchestrator | } 2025-06-22 20:24:07.164333 | orchestrator | 2025-06-22 20:24:07.164348 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 20:24:07.164360 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-06-22 20:24:07.164372 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 20:24:07.164383 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 20:24:07.164394 | orchestrator | 2025-06-22 20:24:07.164404 | orchestrator | 2025-06-22 20:24:07.164415 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 20:24:07.164426 | orchestrator | Sunday 22 June 2025 20:24:06 +0000 (0:00:00.565) 0:00:17.147 *********** 2025-06-22 20:24:07.164437 | orchestrator | =============================================================================== 2025-06-22 20:24:07.164447 | orchestrator | Aggregate test results step one ----------------------------------------- 1.61s 2025-06-22 20:24:07.164458 | orchestrator | Write report file ------------------------------------------------------- 1.56s 2025-06-22 20:24:07.164501 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.52s 2025-06-22 20:24:07.164512 | orchestrator | Gather status data ------------------------------------------------------ 1.31s 2025-06-22 20:24:07.164523 | orchestrator | Get container info ------------------------------------------------------ 0.98s 2025-06-22 20:24:07.164533 | orchestrator | Create report output directory 
------------------------------------------ 0.83s 2025-06-22 20:24:07.164546 | orchestrator | Get timestamp for report file ------------------------------------------- 0.64s 2025-06-22 20:24:07.164578 | orchestrator | Print report file information ------------------------------------------- 0.57s 2025-06-22 20:24:07.164597 | orchestrator | Aggregate test results step one ----------------------------------------- 0.52s 2025-06-22 20:24:07.164617 | orchestrator | Set test result to passed if container is existing ---------------------- 0.42s 2025-06-22 20:24:07.164638 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.33s 2025-06-22 20:24:07.164659 | orchestrator | Pass cluster-health if status is OK (strict) ---------------------------- 0.33s 2025-06-22 20:24:07.164677 | orchestrator | Set quorum test data ---------------------------------------------------- 0.32s 2025-06-22 20:24:07.164689 | orchestrator | Set health test data ---------------------------------------------------- 0.31s 2025-06-22 20:24:07.164703 | orchestrator | Fail quorum test if not all monitors are in quorum ---------------------- 0.30s 2025-06-22 20:24:07.164715 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.30s 2025-06-22 20:24:07.164727 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.29s 2025-06-22 20:24:07.164739 | orchestrator | Prepare test data for container existance test -------------------------- 0.28s 2025-06-22 20:24:07.164752 | orchestrator | Prepare test data ------------------------------------------------------- 0.27s 2025-06-22 20:24:07.164764 | orchestrator | Set test result to failed if container is missing ----------------------- 0.26s 2025-06-22 20:24:07.349727 | orchestrator | + osism validate ceph-mgrs 2025-06-22 20:24:08.851877 | orchestrator | Registering Redlock._acquired_script 2025-06-22 20:24:08.851962 | orchestrator | Registering Redlock._extend_script 2025-06-22 20:24:08.851977 | orchestrator | Registering Redlock._release_script 2025-06-22 20:24:27.761596 | orchestrator | 2025-06-22 20:24:27.761708 | orchestrator | PLAY [Ceph validate mgrs] ****************************************************** 2025-06-22 20:24:27.761723 | orchestrator | 2025-06-22 20:24:27.761734 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-06-22 20:24:27.761746 | orchestrator | Sunday 22 June 2025 20:24:13 +0000 (0:00:00.420) 0:00:00.420 *********** 2025-06-22 20:24:27.761757 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-22 20:24:27.761768 | orchestrator | 2025-06-22 20:24:27.761778 | orchestrator | TASK [Create report output directory] ****************************************** 2025-06-22 20:24:27.761789 | orchestrator | Sunday 22 June 2025 20:24:13 +0000 (0:00:00.630) 0:00:01.050 *********** 2025-06-22 20:24:27.761799 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-22 20:24:27.761810 | orchestrator | 2025-06-22 20:24:27.761820 | orchestrator | TASK [Define report vars] ****************************************************** 2025-06-22 20:24:27.761831 | orchestrator | Sunday 22 June 2025 20:24:14 +0000 (0:00:00.919) 0:00:01.970 *********** 2025-06-22 20:24:27.761841 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:24:27.761853 | orchestrator | 2025-06-22 20:24:27.761864 | orchestrator | TASK [Prepare test data for container existance test] ************************** 
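Each osism validate run finishes by writing a JSON report below /opt/reports/validator/ on testbed-manager, as the ceph-mons validator reported above. A minimal sketch for pulling up the most recent report (assuming shell access to testbed-manager and an installed jq; the report's internal structure is not shown in this log, so it is only pretty-printed here):

    # Sketch: show the newest validator report on the manager node.
    # Assumes the /opt/reports/validator/ path printed by the validator and jq being installed.
    latest=$(ls -1t /opt/reports/validator/*-report.json | head -n 1)
    echo "Latest validator report: $latest"
    jq . "$latest"
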
2025-06-22 20:24:27.761874 | orchestrator | Sunday 22 June 2025 20:24:14 +0000 (0:00:00.227) 0:00:02.197 *********** 2025-06-22 20:24:27.761885 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:24:27.761895 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:24:27.761905 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:24:27.761916 | orchestrator | 2025-06-22 20:24:27.761926 | orchestrator | TASK [Get container info] ****************************************************** 2025-06-22 20:24:27.761936 | orchestrator | Sunday 22 June 2025 20:24:15 +0000 (0:00:00.299) 0:00:02.496 *********** 2025-06-22 20:24:27.761947 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:24:27.761957 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:24:27.761968 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:24:27.761978 | orchestrator | 2025-06-22 20:24:27.761988 | orchestrator | TASK [Set test result to failed if container is missing] *********************** 2025-06-22 20:24:27.762009 | orchestrator | Sunday 22 June 2025 20:24:16 +0000 (0:00:01.014) 0:00:03.511 *********** 2025-06-22 20:24:27.762102 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:24:27.762114 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:24:27.762144 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:24:27.762155 | orchestrator | 2025-06-22 20:24:27.762166 | orchestrator | TASK [Set test result to passed if container is existing] ********************** 2025-06-22 20:24:27.762177 | orchestrator | Sunday 22 June 2025 20:24:16 +0000 (0:00:00.280) 0:00:03.792 *********** 2025-06-22 20:24:27.762187 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:24:27.762198 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:24:27.762211 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:24:27.762229 | orchestrator | 2025-06-22 20:24:27.762247 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-06-22 20:24:27.762265 | orchestrator | Sunday 22 June 2025 20:24:16 +0000 (0:00:00.496) 0:00:04.289 *********** 2025-06-22 20:24:27.762283 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:24:27.762301 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:24:27.762318 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:24:27.762336 | orchestrator | 2025-06-22 20:24:27.762354 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] ******************** 2025-06-22 20:24:27.762372 | orchestrator | Sunday 22 June 2025 20:24:17 +0000 (0:00:00.329) 0:00:04.618 *********** 2025-06-22 20:24:27.762391 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:24:27.762404 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:24:27.762415 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:24:27.762425 | orchestrator | 2025-06-22 20:24:27.762436 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************ 2025-06-22 20:24:27.762446 | orchestrator | Sunday 22 June 2025 20:24:17 +0000 (0:00:00.306) 0:00:04.925 *********** 2025-06-22 20:24:27.762457 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:24:27.762507 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:24:27.762528 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:24:27.762548 | orchestrator | 2025-06-22 20:24:27.762630 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-06-22 20:24:27.762647 | orchestrator | Sunday 22 June 2025 20:24:17 +0000 (0:00:00.322) 0:00:05.247 *********** 2025-06-22 20:24:27.762665 | 
orchestrator | skipping: [testbed-node-0] 2025-06-22 20:24:27.762682 | orchestrator | 2025-06-22 20:24:27.762700 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-06-22 20:24:27.762720 | orchestrator | Sunday 22 June 2025 20:24:18 +0000 (0:00:00.618) 0:00:05.865 *********** 2025-06-22 20:24:27.762738 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:24:27.762756 | orchestrator | 2025-06-22 20:24:27.762775 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-06-22 20:24:27.762793 | orchestrator | Sunday 22 June 2025 20:24:18 +0000 (0:00:00.266) 0:00:06.132 *********** 2025-06-22 20:24:27.762811 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:24:27.762830 | orchestrator | 2025-06-22 20:24:27.762849 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-22 20:24:27.762866 | orchestrator | Sunday 22 June 2025 20:24:19 +0000 (0:00:00.237) 0:00:06.369 *********** 2025-06-22 20:24:27.762885 | orchestrator | 2025-06-22 20:24:27.762903 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-22 20:24:27.762914 | orchestrator | Sunday 22 June 2025 20:24:19 +0000 (0:00:00.074) 0:00:06.443 *********** 2025-06-22 20:24:27.762924 | orchestrator | 2025-06-22 20:24:27.762935 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-22 20:24:27.762945 | orchestrator | Sunday 22 June 2025 20:24:19 +0000 (0:00:00.068) 0:00:06.512 *********** 2025-06-22 20:24:27.762956 | orchestrator | 2025-06-22 20:24:27.762966 | orchestrator | TASK [Print report file information] ******************************************* 2025-06-22 20:24:27.762976 | orchestrator | Sunday 22 June 2025 20:24:19 +0000 (0:00:00.073) 0:00:06.585 *********** 2025-06-22 20:24:27.762986 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:24:27.762997 | orchestrator | 2025-06-22 20:24:27.763007 | orchestrator | TASK [Fail due to missing containers] ****************************************** 2025-06-22 20:24:27.763018 | orchestrator | Sunday 22 June 2025 20:24:19 +0000 (0:00:00.236) 0:00:06.822 *********** 2025-06-22 20:24:27.763028 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:24:27.763051 | orchestrator | 2025-06-22 20:24:27.763085 | orchestrator | TASK [Define mgr module test vars] ********************************************* 2025-06-22 20:24:27.763096 | orchestrator | Sunday 22 June 2025 20:24:19 +0000 (0:00:00.243) 0:00:07.066 *********** 2025-06-22 20:24:27.763107 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:24:27.763117 | orchestrator | 2025-06-22 20:24:27.763133 | orchestrator | TASK [Gather list of mgr modules] ********************************************** 2025-06-22 20:24:27.763150 | orchestrator | Sunday 22 June 2025 20:24:19 +0000 (0:00:00.121) 0:00:07.187 *********** 2025-06-22 20:24:27.763176 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:24:27.763196 | orchestrator | 2025-06-22 20:24:27.763213 | orchestrator | TASK [Parse mgr module list from json] ***************************************** 2025-06-22 20:24:27.763229 | orchestrator | Sunday 22 June 2025 20:24:21 +0000 (0:00:01.992) 0:00:09.180 *********** 2025-06-22 20:24:27.763246 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:24:27.763263 | orchestrator | 2025-06-22 20:24:27.763280 | orchestrator | TASK [Extract list of enabled mgr modules] ************************************* 
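The two tasks above fetch the mgr module list as JSON and parse it; the next task then extracts the enabled modules and checks them against the required set. A rough command-line equivalent of that check (assuming the ceph CLI with admin credentials and jq; the exact module names the validator requires are not visible in this log, so "balancer" below is only an example):

    # Sketch: list currently enabled mgr modules and check for one example module.
    # "balancer" is an illustrative module name, not taken from the validator configuration.
    enabled=$(ceph mgr module ls -f json | jq -r '.enabled_modules[]')
    echo "$enabled"
    echo "$enabled" | grep -qx balancer || echo "balancer is not enabled" >&2
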
2025-06-22 20:24:27.763297 | orchestrator | Sunday 22 June 2025 20:24:22 +0000 (0:00:00.242) 0:00:09.423 *********** 2025-06-22 20:24:27.763313 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:24:27.763331 | orchestrator | 2025-06-22 20:24:27.763349 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************ 2025-06-22 20:24:27.763367 | orchestrator | Sunday 22 June 2025 20:24:22 +0000 (0:00:00.762) 0:00:10.185 *********** 2025-06-22 20:24:27.763385 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:24:27.763404 | orchestrator | 2025-06-22 20:24:27.763422 | orchestrator | TASK [Pass test if required mgr modules are enabled] *************************** 2025-06-22 20:24:27.763440 | orchestrator | Sunday 22 June 2025 20:24:22 +0000 (0:00:00.127) 0:00:10.313 *********** 2025-06-22 20:24:27.763458 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:24:27.763502 | orchestrator | 2025-06-22 20:24:27.763531 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-06-22 20:24:27.763546 | orchestrator | Sunday 22 June 2025 20:24:23 +0000 (0:00:00.137) 0:00:10.451 *********** 2025-06-22 20:24:27.763558 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-22 20:24:27.763569 | orchestrator | 2025-06-22 20:24:27.763579 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-06-22 20:24:27.763590 | orchestrator | Sunday 22 June 2025 20:24:23 +0000 (0:00:00.256) 0:00:10.708 *********** 2025-06-22 20:24:27.763600 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:24:27.763611 | orchestrator | 2025-06-22 20:24:27.763622 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-06-22 20:24:27.763641 | orchestrator | Sunday 22 June 2025 20:24:23 +0000 (0:00:00.261) 0:00:10.970 *********** 2025-06-22 20:24:27.763658 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-22 20:24:27.763676 | orchestrator | 2025-06-22 20:24:27.763694 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-06-22 20:24:27.763713 | orchestrator | Sunday 22 June 2025 20:24:24 +0000 (0:00:01.286) 0:00:12.257 *********** 2025-06-22 20:24:27.763731 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-22 20:24:27.763749 | orchestrator | 2025-06-22 20:24:27.763760 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-06-22 20:24:27.763770 | orchestrator | Sunday 22 June 2025 20:24:25 +0000 (0:00:00.270) 0:00:12.527 *********** 2025-06-22 20:24:27.763781 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-22 20:24:27.763791 | orchestrator | 2025-06-22 20:24:27.763802 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-22 20:24:27.763812 | orchestrator | Sunday 22 June 2025 20:24:25 +0000 (0:00:00.264) 0:00:12.791 *********** 2025-06-22 20:24:27.763823 | orchestrator | 2025-06-22 20:24:27.763833 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-22 20:24:27.763844 | orchestrator | Sunday 22 June 2025 20:24:25 +0000 (0:00:00.071) 0:00:12.863 *********** 2025-06-22 20:24:27.763865 | orchestrator | 2025-06-22 20:24:27.763876 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-22 
20:24:27.763886 | orchestrator | Sunday 22 June 2025 20:24:25 +0000 (0:00:00.071) 0:00:12.935 *********** 2025-06-22 20:24:27.763897 | orchestrator | 2025-06-22 20:24:27.763907 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-06-22 20:24:27.763918 | orchestrator | Sunday 22 June 2025 20:24:25 +0000 (0:00:00.069) 0:00:13.004 *********** 2025-06-22 20:24:27.763928 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)] 2025-06-22 20:24:27.763938 | orchestrator | 2025-06-22 20:24:27.763949 | orchestrator | TASK [Print report file information] ******************************************* 2025-06-22 20:24:27.763960 | orchestrator | Sunday 22 June 2025 20:24:27 +0000 (0:00:01.694) 0:00:14.699 *********** 2025-06-22 20:24:27.763970 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => { 2025-06-22 20:24:27.763981 | orchestrator |  "msg": [ 2025-06-22 20:24:27.763991 | orchestrator |  "Validator run completed.", 2025-06-22 20:24:27.764002 | orchestrator |  "You can find the report file here:", 2025-06-22 20:24:27.764013 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2025-06-22T20:24:13+00:00-report.json", 2025-06-22 20:24:27.764024 | orchestrator |  "on the following host:", 2025-06-22 20:24:27.764035 | orchestrator |  "testbed-manager" 2025-06-22 20:24:27.764045 | orchestrator |  ] 2025-06-22 20:24:27.764056 | orchestrator | } 2025-06-22 20:24:27.764067 | orchestrator | 2025-06-22 20:24:27.764078 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 20:24:27.764089 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-06-22 20:24:27.764102 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 20:24:27.764125 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 20:24:28.058847 | orchestrator | 2025-06-22 20:24:28.058957 | orchestrator | 2025-06-22 20:24:28.058974 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 20:24:28.058988 | orchestrator | Sunday 22 June 2025 20:24:27 +0000 (0:00:00.406) 0:00:15.105 *********** 2025-06-22 20:24:28.058999 | orchestrator | =============================================================================== 2025-06-22 20:24:28.059010 | orchestrator | Gather list of mgr modules ---------------------------------------------- 1.99s 2025-06-22 20:24:28.059021 | orchestrator | Write report file ------------------------------------------------------- 1.69s 2025-06-22 20:24:28.059032 | orchestrator | Aggregate test results step one ----------------------------------------- 1.29s 2025-06-22 20:24:28.059042 | orchestrator | Get container info ------------------------------------------------------ 1.01s 2025-06-22 20:24:28.059053 | orchestrator | Create report output directory ------------------------------------------ 0.92s 2025-06-22 20:24:28.059064 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.76s 2025-06-22 20:24:28.059074 | orchestrator | Get timestamp for report file ------------------------------------------- 0.63s 2025-06-22 20:24:28.059085 | orchestrator | Aggregate test results step one ----------------------------------------- 0.62s 2025-06-22 20:24:28.059095 | orchestrator | Set test result to passed if container is existing 
---------------------- 0.50s 2025-06-22 20:24:28.059106 | orchestrator | Print report file information ------------------------------------------- 0.41s 2025-06-22 20:24:28.059117 | orchestrator | Prepare test data ------------------------------------------------------- 0.33s 2025-06-22 20:24:28.059127 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.32s 2025-06-22 20:24:28.059138 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.31s 2025-06-22 20:24:28.059149 | orchestrator | Prepare test data for container existance test -------------------------- 0.30s 2025-06-22 20:24:28.059184 | orchestrator | Set test result to failed if container is missing ----------------------- 0.28s 2025-06-22 20:24:28.059196 | orchestrator | Aggregate test results step two ----------------------------------------- 0.27s 2025-06-22 20:24:28.059206 | orchestrator | Aggregate test results step two ----------------------------------------- 0.27s 2025-06-22 20:24:28.059217 | orchestrator | Aggregate test results step three --------------------------------------- 0.26s 2025-06-22 20:24:28.059228 | orchestrator | Set validation result to failed if a test failed ------------------------ 0.26s 2025-06-22 20:24:28.059239 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.26s 2025-06-22 20:24:28.288558 | orchestrator | + osism validate ceph-osds 2025-06-22 20:24:29.992371 | orchestrator | Registering Redlock._acquired_script 2025-06-22 20:24:29.992518 | orchestrator | Registering Redlock._extend_script 2025-06-22 20:24:29.992535 | orchestrator | Registering Redlock._release_script 2025-06-22 20:24:38.722620 | orchestrator | 2025-06-22 20:24:38.722733 | orchestrator | PLAY [Ceph validate OSDs] ****************************************************** 2025-06-22 20:24:38.722749 | orchestrator | 2025-06-22 20:24:38.722761 | orchestrator | TASK [Get timestamp for report file] ******************************************* 2025-06-22 20:24:38.722773 | orchestrator | Sunday 22 June 2025 20:24:34 +0000 (0:00:00.446) 0:00:00.446 *********** 2025-06-22 20:24:38.722785 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-22 20:24:38.722796 | orchestrator | 2025-06-22 20:24:38.722807 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-06-22 20:24:38.722817 | orchestrator | Sunday 22 June 2025 20:24:35 +0000 (0:00:00.651) 0:00:01.098 *********** 2025-06-22 20:24:38.722828 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-22 20:24:38.722839 | orchestrator | 2025-06-22 20:24:38.722850 | orchestrator | TASK [Create report output directory] ****************************************** 2025-06-22 20:24:38.722861 | orchestrator | Sunday 22 June 2025 20:24:35 +0000 (0:00:00.482) 0:00:01.580 *********** 2025-06-22 20:24:38.722872 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-22 20:24:38.722883 | orchestrator | 2025-06-22 20:24:38.722893 | orchestrator | TASK [Define report vars] ****************************************************** 2025-06-22 20:24:38.722904 | orchestrator | Sunday 22 June 2025 20:24:36 +0000 (0:00:00.946) 0:00:02.527 *********** 2025-06-22 20:24:38.722915 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:24:38.722927 | orchestrator | 2025-06-22 20:24:38.722938 | orchestrator | TASK [Define OSD test variables] *********************************************** 
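The OSD validator derives the expected number of OSDs from the per-host device lists and compares that with what the cluster reports. A stand-alone sketch of the same idea (assuming the ceph CLI with admin credentials and jq; the expected count of 6 is simply what the 'ceph osd df tree' output earlier in this log shows, osd.0 through osd.5, not a configuration value):

    # Sketch: compare the number of OSDs reported by the cluster with an expected count.
    # EXPECTED=6 mirrors osd.0-osd.5 from the OSD tree above; adjust for other deployments.
    EXPECTED=6
    actual=$(ceph osd stat -f json | jq '.num_osds')
    if [ "$actual" -ne "$EXPECTED" ]; then
        echo "Expected $EXPECTED OSDs, cluster reports $actual" >&2
    fi
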
2025-06-22 20:24:38.722948 | orchestrator | Sunday 22 June 2025 20:24:36 +0000 (0:00:00.124) 0:00:02.651 *********** 2025-06-22 20:24:38.722959 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:24:38.722970 | orchestrator | 2025-06-22 20:24:38.722981 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2025-06-22 20:24:38.722992 | orchestrator | Sunday 22 June 2025 20:24:36 +0000 (0:00:00.137) 0:00:02.789 *********** 2025-06-22 20:24:38.723002 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:24:38.723014 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:24:38.723025 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:24:38.723035 | orchestrator | 2025-06-22 20:24:38.723046 | orchestrator | TASK [Define OSD test variables] *********************************************** 2025-06-22 20:24:38.723057 | orchestrator | Sunday 22 June 2025 20:24:37 +0000 (0:00:00.325) 0:00:03.114 *********** 2025-06-22 20:24:38.723067 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:24:38.723078 | orchestrator | 2025-06-22 20:24:38.723089 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2025-06-22 20:24:38.723100 | orchestrator | Sunday 22 June 2025 20:24:37 +0000 (0:00:00.141) 0:00:03.256 *********** 2025-06-22 20:24:38.723111 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:24:38.723122 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:24:38.723134 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:24:38.723146 | orchestrator | 2025-06-22 20:24:38.723158 | orchestrator | TASK [Calculate total number of OSDs in cluster] ******************************* 2025-06-22 20:24:38.723194 | orchestrator | Sunday 22 June 2025 20:24:37 +0000 (0:00:00.323) 0:00:03.580 *********** 2025-06-22 20:24:38.723207 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:24:38.723218 | orchestrator | 2025-06-22 20:24:38.723231 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-06-22 20:24:38.723243 | orchestrator | Sunday 22 June 2025 20:24:38 +0000 (0:00:00.536) 0:00:04.117 *********** 2025-06-22 20:24:38.723254 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:24:38.723266 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:24:38.723278 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:24:38.723290 | orchestrator | 2025-06-22 20:24:38.723302 | orchestrator | TASK [Get list of ceph-osd containers on host] ********************************* 2025-06-22 20:24:38.723314 | orchestrator | Sunday 22 June 2025 20:24:38 +0000 (0:00:00.452) 0:00:04.569 *********** 2025-06-22 20:24:38.723346 | orchestrator | skipping: [testbed-node-3] => (item={'id': '2351be61bcddc4f5dd82d23c26b84ff9b10a6933b70d04eebcece1acd511e1d8', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 5 minutes (healthy)'})  2025-06-22 20:24:38.723362 | orchestrator | skipping: [testbed-node-3] => (item={'id': '000cf8fb723df6661258a440d603de0d6b264341d1a9cf9e15602f05496a015b', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})  2025-06-22 20:24:38.723380 | orchestrator | skipping: [testbed-node-3] => (item={'id': '222a101ccf7cb740af5837794d01d068150c601e9df00a26b904884a45c0814c', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 7 minutes 
(healthy)'})  2025-06-22 20:24:38.723395 | orchestrator | skipping: [testbed-node-3] => (item={'id': '2d5edaaeed24506bcf4b03f6f66079bedfd0373ee3cda1ab06f94a0bf351d36a', 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-06-22 20:24:38.723408 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'a8fd21f9df3de4f5b49ff931235acab80f87448ac8bc12aaa1fdaab9d010a061', 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-06-22 20:24:38.723439 | orchestrator | skipping: [testbed-node-3] => (item={'id': '592487ced8cabc9a8d52386ba2e913cd3f37359b27a17325aacfb20ea3ec73ff', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 11 minutes'})  2025-06-22 20:24:38.723463 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'e6a4a675624b42d1f5745ac4b86e33e68afe4b3c6737b8d2328e3dac824b9c1a', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 12 minutes'})  2025-06-22 20:24:38.723498 | orchestrator | skipping: [testbed-node-3] => (item={'id': '47db75627c1b9509aaa480376f9b5062727e50760f336fb55e54727f70442d2e', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 13 minutes'})  2025-06-22 20:24:38.723512 | orchestrator | skipping: [testbed-node-3] => (item={'id': '76a07eec0a7207275f1b2b24b84740bd48f4c7db4dfa7028e748de9705bfc044', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 14 minutes (healthy)'})  2025-06-22 20:24:38.723523 | orchestrator | skipping: [testbed-node-3] => (item={'id': '450d33e49077f36eeef296737616c2915cbc5aaa77f07a20bc046bd443aa7266', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up 22 minutes'})  2025-06-22 20:24:38.723536 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'a201515d51f4889bce176c82e7bed24af37c6dcddcdebe7ed69154b2e3347a43', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up 23 minutes'})  2025-06-22 20:24:38.723556 | orchestrator | skipping: [testbed-node-3] => (item={'id': '6b4c94968b49fc294b213f5ca395e23c5a57f3ca1bb7391b38a2ee7741c11f7b', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up 24 minutes'})  2025-06-22 20:24:38.723570 | orchestrator | ok: [testbed-node-3] => (item={'id': '82e6a0c755bbe9842816fd6b767fc16bc761b03e2590f9321991e9c52e7585d6', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up 25 minutes'}) 2025-06-22 20:24:38.723581 | orchestrator | ok: [testbed-node-3] => (item={'id': '7167428a89f83ee3c25bcc92a6626dd4cafd95d0cd273ac579ae8080d40a1dec', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up 25 minutes'}) 2025-06-22 20:24:38.723593 | orchestrator | skipping: [testbed-node-3] => (item={'id': 
'054ed182acb3e5f63c70e75c73b908d5dcb4c8f5d953385b53a2ada2f535d81b', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 28 minutes'})  2025-06-22 20:24:38.723604 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'f792c1c9d03469f226eda4fd78a51ec9044e7c2fb1645d562a73780c14147dc1', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})  2025-06-22 20:24:38.723615 | orchestrator | skipping: [testbed-node-3] => (item={'id': '31a91d24277e4dc0cc4cc3ca5cf005a39443de633b5cb6353cfbd1f53ad2a219', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})  2025-06-22 20:24:38.723632 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'b99dd16e37685b7d0da644b8b2c11db89fd67686c40407690e70e14268b1f44c', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'name': '/cron', 'state': 'running', 'status': 'Up 31 minutes'})  2025-06-22 20:24:38.723643 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'ba694b6d59009a1917ed22d6245c581511722731cf3e7c664d1982f8e02e0301', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 31 minutes'})  2025-06-22 20:24:38.723655 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'fd884796c5dfe860ca035e9d51a257d4762974bbcab798c9ab33f5b41a7b0aa2', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'name': '/fluentd', 'state': 'running', 'status': 'Up 32 minutes'})  2025-06-22 20:24:38.723673 | orchestrator | skipping: [testbed-node-4] => (item={'id': '3ebc8df400c52c49f380605ba6a31cf3d2934b7e7a6c5189c036f4a731695465', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 5 minutes (healthy)'})  2025-06-22 20:24:38.837893 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'd4723613ec2fd23bc7cef5b075e3de4f4481a8f3e411aa7e0dc8fb8a3489b411', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})  2025-06-22 20:24:38.838001 | orchestrator | skipping: [testbed-node-4] => (item={'id': '6135b2f3cefdc6252a45f3400d1b9dfcb5a79184669217e139c717cd9ce24643', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2025-06-22 20:24:38.838076 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'd39ffa2b4ea3375e94bbc5a924b5e14c216dd259688290c9a63870b439d33997', 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-06-22 20:24:38.838117 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'e846bf38cc83f04648aa7d024349ca94525378acb32b18013bd7e903c74d6285', 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-06-22 20:24:38.838129 | orchestrator | skipping: [testbed-node-4] => (item={'id': '7701c1c2e2838aadc555d367ed76d7d90c50143df394a1b835588d1f81cce10f', 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 11 minutes'})  2025-06-22 20:24:38.838142 | orchestrator | skipping: [testbed-node-4] => (item={'id': '7392a9b9ccd253107889f58f4416a1448e4c22316690689115b7eb1a53f9f616', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 12 minutes'})  2025-06-22 20:24:38.838153 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'a6e846cdd4913b44ba9178fbd89e31e38acb72c169e90a5ebe5310a2bc0d77d2', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 12 minutes'})  2025-06-22 20:24:38.838166 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'a3e38ce1637542154e985dfec5d6ca69624d78c6eff4d7a7048e5a61e043f48f', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2025-06-22 20:24:38.838177 | orchestrator | skipping: [testbed-node-4] => (item={'id': '17d2a4e403197bab3af3563f47bfd57d7f6d22fb427a01431d625628bf744703', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up 22 minutes'})  2025-06-22 20:24:38.838190 | orchestrator | skipping: [testbed-node-4] => (item={'id': '66d9cf18c54d266191bc363687e64c5c5ba74616be9e240fd7541cb3ee6413b9', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up 23 minutes'})  2025-06-22 20:24:38.838201 | orchestrator | skipping: [testbed-node-4] => (item={'id': '61ce4259c4fb3c71cecd24d5d34daf7e7d959788d5e00e45906958ef2cb50a99', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up 24 minutes'})  2025-06-22 20:24:38.838228 | orchestrator | ok: [testbed-node-4] => (item={'id': 'f80682ea67bf52c2bd4a299d8a3a60cea4f4ab28c283d03a62c14ddfc0734a60', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up 25 minutes'}) 2025-06-22 20:24:38.838240 | orchestrator | ok: [testbed-node-4] => (item={'id': '2299d4fea9f64c60969718569f2698db257810dc78f7e68e2a80dc67e01b3568', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up 25 minutes'}) 2025-06-22 20:24:38.838252 | orchestrator | skipping: [testbed-node-4] => (item={'id': '1946c0f45560b4a9c0d02ee3a50cfe797e8dad52e0b127bd79561caa89aba8d6', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 28 minutes'})  2025-06-22 20:24:38.838290 | orchestrator | skipping: [testbed-node-4] => (item={'id': '5094fcdf962704089c8e220ff6c5ef3b4844afe1e61574e3793eee5f8fc5e6f7', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})  2025-06-22 20:24:38.838303 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'a34cd990a1d0beb9db55a656298b3e3acd92be2cc1310d4f6f8e4ddcb40389ee', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 
30 minutes (healthy)'})  2025-06-22 20:24:38.838315 | orchestrator | skipping: [testbed-node-4] => (item={'id': '96811504b4bfdf071aebd5b488051b184a2784c05a75ee2be55c30d57dbf771e', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'name': '/cron', 'state': 'running', 'status': 'Up 31 minutes'})  2025-06-22 20:24:38.838337 | orchestrator | skipping: [testbed-node-4] => (item={'id': '96124b7976129391b4a460ac23f33620b44d75f2065e1042a826abd246633564', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 31 minutes'})  2025-06-22 20:24:38.838353 | orchestrator | skipping: [testbed-node-4] => (item={'id': '9f81ffef4e05c4ccc1f1c8167164927de1cef5b94784051ddbaf5899d8895615', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'name': '/fluentd', 'state': 'running', 'status': 'Up 32 minutes'})  2025-06-22 20:24:38.838365 | orchestrator | skipping: [testbed-node-5] => (item={'id': '3165148c813d83562398e170aa57cad74bb731e4e8509579b1df7075036abf24', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250530', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 5 minutes (healthy)'})  2025-06-22 20:24:38.838375 | orchestrator | skipping: [testbed-node-5] => (item={'id': '3d62bd1e523260abf386f8f6c36b8681fc7fdb22478fd665282020818724714e', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250530', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 6 minutes (healthy)'})  2025-06-22 20:24:38.838387 | orchestrator | skipping: [testbed-node-5] => (item={'id': '1a2fe1d8a387d7fe2f3163b13ec5f78f77e57adf0f2cdc0c1f6ff1ba98facf68', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250530', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 7 minutes (healthy)'})  2025-06-22 20:24:38.838398 | orchestrator | skipping: [testbed-node-5] => (item={'id': '867bee139da6d241f8d41e8fb607c6622b5ec8c1f95bf027ecc8081ea71ffc55', 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.1.1.20250530', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-06-22 20:24:38.838409 | orchestrator | skipping: [testbed-node-5] => (item={'id': '251f4ddf9b4b2b01c61acd7f3d9c18912cbdaab3c24f7b9b86e6b040daf5fca8', 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.1.1.20250530', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-06-22 20:24:38.838422 | orchestrator | skipping: [testbed-node-5] => (item={'id': '19b9a3ebee410415b63e00ed7a3216b9737e3ccb2cfd798ef4ba37119f45518f', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250530.0.20250530', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 11 minutes'})  2025-06-22 20:24:38.838435 | orchestrator | skipping: [testbed-node-5] => (item={'id': '92750f58837d82b79416c0a71d9ddc69f07f97b356733400f62edf12a3aa20b6', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250530', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 12 minutes'})  2025-06-22 20:24:38.838456 | orchestrator | skipping: [testbed-node-5] => (item={'id': '99953b010bd4e80f4968264793fab8ac1244a5de6e8623e86ddfc647221b1634', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250530', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 12 minutes'})  2025-06-22 20:24:38.838469 | orchestrator | skipping: [testbed-node-5] => (item={'id': 
'160089dc762959ff3c6581d91d3dc567e1482452627885301b8126be2b37200c', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.1.1.20250530', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 13 minutes (healthy)'})  2025-06-22 20:24:38.838516 | orchestrator | skipping: [testbed-node-5] => (item={'id': '858b76e986f8357cde798f5fafcf2f37883a7fe583497962b145fb103048be62', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up 22 minutes'})  2025-06-22 20:24:38.838539 | orchestrator | skipping: [testbed-node-5] => (item={'id': '43760bdc7cb1af82c5493b626f8812df7dc4e56a2071e94d1941e723c9341495', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up 23 minutes'})  2025-06-22 20:24:46.952473 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'a4b4331b4cadf379a9153e7a82a264fb0b450c505ebc74359bd02a92c240f8be', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-5', 'state': 'running', 'status': 'Up 24 minutes'})  2025-06-22 20:24:46.952654 | orchestrator | ok: [testbed-node-5] => (item={'id': '92c2c1e4b5febf1fa745c5cfdf37d76f4c71b59ac25e39e4919c6b4067038cb5', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up 25 minutes'}) 2025-06-22 20:24:46.952671 | orchestrator | ok: [testbed-node-5] => (item={'id': 'c81c540b491e0f19f701a91f31e19f0cd37d35f7602d1c6b5a625d0a82ba4b4c', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up 25 minutes'}) 2025-06-22 20:24:46.952683 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'e032d9d58491c16178dddad3ebf30b09aa1801862b896aaf58554b97c18e5beb', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250530', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 28 minutes'})  2025-06-22 20:24:46.952696 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'd7a380c6859c78da634770f16b7292eb5690cfc313456db6b173b5d3f63f8e80', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250530', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 29 minutes (healthy)'})  2025-06-22 20:24:46.952709 | orchestrator | skipping: [testbed-node-5] => (item={'id': '064e21ceca5dccb26c299c0a54cb29f85d2fea0563878f962bed7188215261bc', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250530', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})  2025-06-22 20:24:46.952720 | orchestrator | skipping: [testbed-node-5] => (item={'id': '83100fd3628d938523bdf526cff6f7cc933d979f8cca4af49570a80ddb298f26', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250530', 'name': '/cron', 'state': 'running', 'status': 'Up 31 minutes'})  2025-06-22 20:24:46.952731 | orchestrator | skipping: [testbed-node-5] => (item={'id': '0dd9189f625d94a7832e3c44831022e39e549632af1cdf1a13140242d4c905ba', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.4.1.20250530', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 31 minutes'})  2025-06-22 20:24:46.952743 | orchestrator | skipping: [testbed-node-5] => (item={'id': '533e5e2e4e92017a9d8931dcb11d19cf4fcc7ce2d251e5412e854a1a86b78ce7', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250530', 'name': '/fluentd', 'state': 'running', 'status': 'Up 32 minutes'}) 
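
The "Get list of ceph-osd containers on host" task above iterates over every container reported on each node and only treats the ceph-osd-* containers as matches; everything else is skipped. A rough manual equivalent of that filter, assuming Docker is the container engine on the testbed nodes (as the kolla and ceph-daemon container names above suggest):

# List only the ceph-osd containers and their status on one node (sketch).
docker ps --filter "name=ceph-osd" --format "{{.Names}}\t{{.Status}}"

The validator then compares the number of matches per host against the expected OSD count and checks that every matched container is in the running state, as the following tasks show.
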
 2025-06-22 20:24:46.952754 | orchestrator | 2025-06-22 20:24:46.952767 | orchestrator | TASK [Get count of ceph-osd containers on host] ******************************** 2025-06-22 20:24:46.952779 | orchestrator | Sunday 22 June 2025 20:24:38 +0000 (0:00:00.464) 0:00:05.034 *********** 2025-06-22 20:24:46.952790 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:24:46.952801 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:24:46.952812 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:24:46.952822 | orchestrator | 2025-06-22 20:24:46.952833 | orchestrator | TASK [Set test result to failed when count of containers is wrong] ************* 2025-06-22 20:24:46.952844 | orchestrator | Sunday 22 June 2025 20:24:39 +0000 (0:00:00.298) 0:00:05.333 *********** 2025-06-22 20:24:46.952855 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:24:46.952866 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:24:46.952877 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:24:46.952887 | orchestrator | 2025-06-22 20:24:46.952915 | orchestrator | TASK [Set test result to passed if count matches] ****************************** 2025-06-22 20:24:46.952926 | orchestrator | Sunday 22 June 2025 20:24:39 +0000 (0:00:00.441) 0:00:05.774 *********** 2025-06-22 20:24:46.952937 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:24:46.952947 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:24:46.952958 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:24:46.952992 | orchestrator | 2025-06-22 20:24:46.953003 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-06-22 20:24:46.953016 | orchestrator | Sunday 22 June 2025 20:24:39 +0000 (0:00:00.306) 0:00:06.081 *********** 2025-06-22 20:24:46.953029 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:24:46.953041 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:24:46.953053 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:24:46.953065 | orchestrator | 2025-06-22 20:24:46.953077 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ******************** 2025-06-22 20:24:46.953089 | orchestrator | Sunday 22 June 2025 20:24:40 +0000 (0:00:00.282) 0:00:06.363 *********** 2025-06-22 20:24:46.953102 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})  2025-06-22 20:24:46.953115 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})  2025-06-22 20:24:46.953127 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:24:46.953139 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})  2025-06-22 20:24:46.953151 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})  2025-06-22 20:24:46.953182 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:24:46.953195 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})  2025-06-22 20:24:46.953207 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})  2025-06-22 20:24:46.953219 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:24:46.953231 | orchestrator | 2025-06-22 20:24:46.953243 | orchestrator | TASK [Get count of ceph-osd containers that are not running] ******************* 2025-06-22 20:24:46.953255 | orchestrator | Sunday 22 June 2025 20:24:40 +0000 (0:00:00.323) 
0:00:06.687 *********** 2025-06-22 20:24:46.953267 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:24:46.953279 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:24:46.953291 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:24:46.953303 | orchestrator | 2025-06-22 20:24:46.953315 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2025-06-22 20:24:46.953327 | orchestrator | Sunday 22 June 2025 20:24:41 +0000 (0:00:00.454) 0:00:07.141 *********** 2025-06-22 20:24:46.953339 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:24:46.953351 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:24:46.953363 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:24:46.953374 | orchestrator | 2025-06-22 20:24:46.953385 | orchestrator | TASK [Set test result to failed if an OSD is not running] ********************** 2025-06-22 20:24:46.953395 | orchestrator | Sunday 22 June 2025 20:24:41 +0000 (0:00:00.314) 0:00:07.455 *********** 2025-06-22 20:24:46.953406 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:24:46.953417 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:24:46.953428 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:24:46.953438 | orchestrator | 2025-06-22 20:24:46.953449 | orchestrator | TASK [Set test result to passed if all containers are running] ***************** 2025-06-22 20:24:46.953459 | orchestrator | Sunday 22 June 2025 20:24:41 +0000 (0:00:00.314) 0:00:07.770 *********** 2025-06-22 20:24:46.953470 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:24:46.953499 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:24:46.953510 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:24:46.953520 | orchestrator | 2025-06-22 20:24:46.953531 | orchestrator | TASK [Aggregate test results step one] ***************************************** 2025-06-22 20:24:46.953542 | orchestrator | Sunday 22 June 2025 20:24:41 +0000 (0:00:00.313) 0:00:08.083 *********** 2025-06-22 20:24:46.953552 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:24:46.953563 | orchestrator | 2025-06-22 20:24:46.953573 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-06-22 20:24:46.953584 | orchestrator | Sunday 22 June 2025 20:24:42 +0000 (0:00:00.647) 0:00:08.730 *********** 2025-06-22 20:24:46.953594 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:24:46.953612 | orchestrator | 2025-06-22 20:24:46.953623 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-06-22 20:24:46.953634 | orchestrator | Sunday 22 June 2025 20:24:42 +0000 (0:00:00.235) 0:00:08.966 *********** 2025-06-22 20:24:46.953644 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:24:46.953655 | orchestrator | 2025-06-22 20:24:46.953665 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-22 20:24:46.953676 | orchestrator | Sunday 22 June 2025 20:24:43 +0000 (0:00:00.233) 0:00:09.200 *********** 2025-06-22 20:24:46.953687 | orchestrator | 2025-06-22 20:24:46.953697 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-22 20:24:46.953708 | orchestrator | Sunday 22 June 2025 20:24:43 +0000 (0:00:00.078) 0:00:09.279 *********** 2025-06-22 20:24:46.953719 | orchestrator | 2025-06-22 20:24:46.953729 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-22 20:24:46.953740 | orchestrator | 
Sunday 22 June 2025 20:24:43 +0000 (0:00:00.066) 0:00:09.345 *********** 2025-06-22 20:24:46.953750 | orchestrator | 2025-06-22 20:24:46.953761 | orchestrator | TASK [Print report file information] ******************************************* 2025-06-22 20:24:46.953772 | orchestrator | Sunday 22 June 2025 20:24:43 +0000 (0:00:00.069) 0:00:09.414 *********** 2025-06-22 20:24:46.953783 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:24:46.953793 | orchestrator | 2025-06-22 20:24:46.953804 | orchestrator | TASK [Fail early due to containers not running] ******************************** 2025-06-22 20:24:46.953815 | orchestrator | Sunday 22 June 2025 20:24:43 +0000 (0:00:00.254) 0:00:09.669 *********** 2025-06-22 20:24:46.953825 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:24:46.953836 | orchestrator | 2025-06-22 20:24:46.953846 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-06-22 20:24:46.953857 | orchestrator | Sunday 22 June 2025 20:24:43 +0000 (0:00:00.258) 0:00:09.928 *********** 2025-06-22 20:24:46.953873 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:24:46.953884 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:24:46.953895 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:24:46.953905 | orchestrator | 2025-06-22 20:24:46.953916 | orchestrator | TASK [Set _mon_hostname fact] ************************************************** 2025-06-22 20:24:46.953926 | orchestrator | Sunday 22 June 2025 20:24:44 +0000 (0:00:00.273) 0:00:10.202 *********** 2025-06-22 20:24:46.953937 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:24:46.953947 | orchestrator | 2025-06-22 20:24:46.953958 | orchestrator | TASK [Get ceph osd tree] ******************************************************* 2025-06-22 20:24:46.953969 | orchestrator | Sunday 22 June 2025 20:24:44 +0000 (0:00:00.654) 0:00:10.856 *********** 2025-06-22 20:24:46.953979 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-06-22 20:24:46.953990 | orchestrator | 2025-06-22 20:24:46.954000 | orchestrator | TASK [Parse osd tree from JSON] ************************************************ 2025-06-22 20:24:46.954011 | orchestrator | Sunday 22 June 2025 20:24:46 +0000 (0:00:01.616) 0:00:12.473 *********** 2025-06-22 20:24:46.954075 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:24:46.954086 | orchestrator | 2025-06-22 20:24:46.954097 | orchestrator | TASK [Get OSDs that are not up or in] ****************************************** 2025-06-22 20:24:46.954107 | orchestrator | Sunday 22 June 2025 20:24:46 +0000 (0:00:00.137) 0:00:12.610 *********** 2025-06-22 20:24:46.954118 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:24:46.954129 | orchestrator | 2025-06-22 20:24:46.954139 | orchestrator | TASK [Fail test if OSDs are not up or in] ************************************** 2025-06-22 20:24:46.954150 | orchestrator | Sunday 22 June 2025 20:24:46 +0000 (0:00:00.291) 0:00:12.901 *********** 2025-06-22 20:24:46.954168 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:24:58.570586 | orchestrator | 2025-06-22 20:24:58.571456 | orchestrator | TASK [Pass test if OSDs are all up and in] ************************************* 2025-06-22 20:24:58.571504 | orchestrator | Sunday 22 June 2025 20:24:46 +0000 (0:00:00.132) 0:00:13.033 *********** 2025-06-22 20:24:58.571515 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:24:58.571526 | orchestrator | 2025-06-22 20:24:58.571557 | orchestrator | TASK [Prepare test data] 
******************************************************* 2025-06-22 20:24:58.571567 | orchestrator | Sunday 22 June 2025 20:24:47 +0000 (0:00:00.114) 0:00:13.148 *********** 2025-06-22 20:24:58.571576 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:24:58.571586 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:24:58.571595 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:24:58.571605 | orchestrator | 2025-06-22 20:24:58.571614 | orchestrator | TASK [List ceph LVM volumes and collect data] ********************************** 2025-06-22 20:24:58.571624 | orchestrator | Sunday 22 June 2025 20:24:47 +0000 (0:00:00.300) 0:00:13.448 *********** 2025-06-22 20:24:58.571634 | orchestrator | changed: [testbed-node-3] 2025-06-22 20:24:58.571645 | orchestrator | changed: [testbed-node-4] 2025-06-22 20:24:58.571654 | orchestrator | changed: [testbed-node-5] 2025-06-22 20:24:58.571663 | orchestrator | 2025-06-22 20:24:58.571673 | orchestrator | TASK [Parse LVM data as JSON] ************************************************** 2025-06-22 20:24:58.571682 | orchestrator | Sunday 22 June 2025 20:24:49 +0000 (0:00:02.605) 0:00:16.054 *********** 2025-06-22 20:24:58.571692 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:24:58.571702 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:24:58.571711 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:24:58.571721 | orchestrator | 2025-06-22 20:24:58.571730 | orchestrator | TASK [Get unencrypted and encrypted OSDs] ************************************** 2025-06-22 20:24:58.571740 | orchestrator | Sunday 22 June 2025 20:24:50 +0000 (0:00:00.269) 0:00:16.324 *********** 2025-06-22 20:24:58.571749 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:24:58.571758 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:24:58.571768 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:24:58.571777 | orchestrator | 2025-06-22 20:24:58.571787 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] ************************** 2025-06-22 20:24:58.571796 | orchestrator | Sunday 22 June 2025 20:24:50 +0000 (0:00:00.438) 0:00:16.762 *********** 2025-06-22 20:24:58.571806 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:24:58.571815 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:24:58.571825 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:24:58.571834 | orchestrator | 2025-06-22 20:24:58.571843 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ******************** 2025-06-22 20:24:58.571853 | orchestrator | Sunday 22 June 2025 20:24:50 +0000 (0:00:00.273) 0:00:17.035 *********** 2025-06-22 20:24:58.571863 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:24:58.571872 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:24:58.571881 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:24:58.571891 | orchestrator | 2025-06-22 20:24:58.571900 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************ 2025-06-22 20:24:58.571910 | orchestrator | Sunday 22 June 2025 20:24:51 +0000 (0:00:00.397) 0:00:17.433 *********** 2025-06-22 20:24:58.571919 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:24:58.571929 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:24:58.571938 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:24:58.571948 | orchestrator | 2025-06-22 20:24:58.571957 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ****************** 2025-06-22 20:24:58.571967 | orchestrator | Sunday 22 June 2025 20:24:51 +0000 
(0:00:00.253) 0:00:17.687 *********** 2025-06-22 20:24:58.571976 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:24:58.571986 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:24:58.571995 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:24:58.572006 | orchestrator | 2025-06-22 20:24:58.572015 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-06-22 20:24:58.572025 | orchestrator | Sunday 22 June 2025 20:24:51 +0000 (0:00:00.254) 0:00:17.941 *********** 2025-06-22 20:24:58.572034 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:24:58.572044 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:24:58.572053 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:24:58.572063 | orchestrator | 2025-06-22 20:24:58.572072 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] *************** 2025-06-22 20:24:58.572089 | orchestrator | Sunday 22 June 2025 20:24:52 +0000 (0:00:00.452) 0:00:18.394 *********** 2025-06-22 20:24:58.572098 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:24:58.572108 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:24:58.572117 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:24:58.572126 | orchestrator | 2025-06-22 20:24:58.572136 | orchestrator | TASK [Calculate sub test expression results] *********************************** 2025-06-22 20:24:58.572146 | orchestrator | Sunday 22 June 2025 20:24:52 +0000 (0:00:00.563) 0:00:18.957 *********** 2025-06-22 20:24:58.572155 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:24:58.572164 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:24:58.572174 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:24:58.572183 | orchestrator | 2025-06-22 20:24:58.572192 | orchestrator | TASK [Fail test if any sub test failed] **************************************** 2025-06-22 20:24:58.572202 | orchestrator | Sunday 22 June 2025 20:24:53 +0000 (0:00:00.287) 0:00:19.245 *********** 2025-06-22 20:24:58.572211 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:24:58.572221 | orchestrator | skipping: [testbed-node-4] 2025-06-22 20:24:58.572230 | orchestrator | skipping: [testbed-node-5] 2025-06-22 20:24:58.572243 | orchestrator | 2025-06-22 20:24:58.572260 | orchestrator | TASK [Pass test if no sub test failed] ***************************************** 2025-06-22 20:24:58.572279 | orchestrator | Sunday 22 June 2025 20:24:53 +0000 (0:00:00.260) 0:00:19.506 *********** 2025-06-22 20:24:58.572295 | orchestrator | ok: [testbed-node-3] 2025-06-22 20:24:58.572314 | orchestrator | ok: [testbed-node-4] 2025-06-22 20:24:58.572329 | orchestrator | ok: [testbed-node-5] 2025-06-22 20:24:58.572345 | orchestrator | 2025-06-22 20:24:58.572361 | orchestrator | TASK [Set validation result to passed if no test failed] *********************** 2025-06-22 20:24:58.572378 | orchestrator | Sunday 22 June 2025 20:24:53 +0000 (0:00:00.286) 0:00:19.792 *********** 2025-06-22 20:24:58.572394 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-22 20:24:58.572410 | orchestrator | 2025-06-22 20:24:58.572425 | orchestrator | TASK [Set validation result to failed if a test failed] ************************ 2025-06-22 20:24:58.572440 | orchestrator | Sunday 22 June 2025 20:24:54 +0000 (0:00:00.503) 0:00:20.295 *********** 2025-06-22 20:24:58.572456 | orchestrator | skipping: [testbed-node-3] 2025-06-22 20:24:58.572471 | orchestrator | 2025-06-22 20:24:58.572532 | orchestrator | TASK [Aggregate test results step one] 
***************************************** 2025-06-22 20:24:58.572550 | orchestrator | Sunday 22 June 2025 20:24:54 +0000 (0:00:00.232) 0:00:20.528 *********** 2025-06-22 20:24:58.572566 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-22 20:24:58.572582 | orchestrator | 2025-06-22 20:24:58.572598 | orchestrator | TASK [Aggregate test results step two] ***************************************** 2025-06-22 20:24:58.572615 | orchestrator | Sunday 22 June 2025 20:24:55 +0000 (0:00:01.465) 0:00:21.993 *********** 2025-06-22 20:24:58.572675 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-22 20:24:58.572687 | orchestrator | 2025-06-22 20:24:58.572696 | orchestrator | TASK [Aggregate test results step three] *************************************** 2025-06-22 20:24:58.572706 | orchestrator | Sunday 22 June 2025 20:24:56 +0000 (0:00:00.278) 0:00:22.271 *********** 2025-06-22 20:24:58.572715 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-22 20:24:58.572724 | orchestrator | 2025-06-22 20:24:58.572734 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-22 20:24:58.572743 | orchestrator | Sunday 22 June 2025 20:24:56 +0000 (0:00:00.245) 0:00:22.517 *********** 2025-06-22 20:24:58.572753 | orchestrator | 2025-06-22 20:24:58.572762 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-22 20:24:58.572771 | orchestrator | Sunday 22 June 2025 20:24:56 +0000 (0:00:00.071) 0:00:22.588 *********** 2025-06-22 20:24:58.572781 | orchestrator | 2025-06-22 20:24:58.572790 | orchestrator | TASK [Flush handlers] ********************************************************** 2025-06-22 20:24:58.572800 | orchestrator | Sunday 22 June 2025 20:24:56 +0000 (0:00:00.067) 0:00:22.656 *********** 2025-06-22 20:24:58.572838 | orchestrator | 2025-06-22 20:24:58.572849 | orchestrator | RUNNING HANDLER [Write report file] ******************************************** 2025-06-22 20:24:58.572868 | orchestrator | Sunday 22 June 2025 20:24:56 +0000 (0:00:00.069) 0:00:22.725 *********** 2025-06-22 20:24:58.572923 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-06-22 20:24:58.572934 | orchestrator | 2025-06-22 20:24:58.572944 | orchestrator | TASK [Print report file information] ******************************************* 2025-06-22 20:24:58.572953 | orchestrator | Sunday 22 June 2025 20:24:57 +0000 (0:00:01.347) 0:00:24.073 *********** 2025-06-22 20:24:58.572963 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => { 2025-06-22 20:24:58.572972 | orchestrator |  "msg": [ 2025-06-22 20:24:58.572982 | orchestrator |  "Validator run completed.", 2025-06-22 20:24:58.572992 | orchestrator |  "You can find the report file here:", 2025-06-22 20:24:58.573002 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2025-06-22T20:24:34+00:00-report.json", 2025-06-22 20:24:58.573012 | orchestrator |  "on the following host:", 2025-06-22 20:24:58.573021 | orchestrator |  "testbed-manager" 2025-06-22 20:24:58.573031 | orchestrator |  ] 2025-06-22 20:24:58.573041 | orchestrator | } 2025-06-22 20:24:58.573050 | orchestrator | 2025-06-22 20:24:58.573060 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 20:24:58.573070 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2025-06-22 
20:24:58.573081 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-06-22 20:24:58.573091 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-06-22 20:24:58.573101 | orchestrator | 2025-06-22 20:24:58.573110 | orchestrator | 2025-06-22 20:24:58.573120 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 20:24:58.573129 | orchestrator | Sunday 22 June 2025 20:24:58 +0000 (0:00:00.552) 0:00:24.625 *********** 2025-06-22 20:24:58.573139 | orchestrator | =============================================================================== 2025-06-22 20:24:58.573149 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 2.61s 2025-06-22 20:24:58.573159 | orchestrator | Get ceph osd tree ------------------------------------------------------- 1.62s 2025-06-22 20:24:58.573173 | orchestrator | Aggregate test results step one ----------------------------------------- 1.47s 2025-06-22 20:24:58.573183 | orchestrator | Write report file ------------------------------------------------------- 1.35s 2025-06-22 20:24:58.573192 | orchestrator | Create report output directory ------------------------------------------ 0.95s 2025-06-22 20:24:58.573202 | orchestrator | Set _mon_hostname fact -------------------------------------------------- 0.65s 2025-06-22 20:24:58.573211 | orchestrator | Get timestamp for report file ------------------------------------------- 0.65s 2025-06-22 20:24:58.573221 | orchestrator | Aggregate test results step one ----------------------------------------- 0.65s 2025-06-22 20:24:58.573230 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.56s 2025-06-22 20:24:58.573239 | orchestrator | Print report file information ------------------------------------------- 0.55s 2025-06-22 20:24:58.573249 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.54s 2025-06-22 20:24:58.573259 | orchestrator | Set validation result to passed if no test failed ----------------------- 0.50s 2025-06-22 20:24:58.573268 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.48s 2025-06-22 20:24:58.573278 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.46s 2025-06-22 20:24:58.573287 | orchestrator | Get count of ceph-osd containers that are not running ------------------- 0.45s 2025-06-22 20:24:58.573297 | orchestrator | Prepare test data ------------------------------------------------------- 0.45s 2025-06-22 20:24:58.573323 | orchestrator | Prepare test data ------------------------------------------------------- 0.45s 2025-06-22 20:24:58.827354 | orchestrator | Set test result to failed when count of containers is wrong ------------- 0.44s 2025-06-22 20:24:58.827450 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.44s 2025-06-22 20:24:58.827463 | orchestrator | Pass if count of encrypted OSDs equals count of OSDs -------------------- 0.40s 2025-06-22 20:24:59.064555 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh 2025-06-22 20:24:59.073128 | orchestrator | + set -e 2025-06-22 20:24:59.073180 | orchestrator | + source /opt/manager-vars.sh 2025-06-22 20:24:59.073193 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-06-22 20:24:59.073204 | orchestrator | ++ 
NUMBER_OF_NODES=6 2025-06-22 20:24:59.073215 | orchestrator | ++ export CEPH_VERSION=reef 2025-06-22 20:24:59.073225 | orchestrator | ++ CEPH_VERSION=reef 2025-06-22 20:24:59.073236 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-06-22 20:24:59.073248 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-06-22 20:24:59.073260 | orchestrator | ++ export MANAGER_VERSION=9.1.0 2025-06-22 20:24:59.073290 | orchestrator | ++ MANAGER_VERSION=9.1.0 2025-06-22 20:24:59.073301 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-06-22 20:24:59.073321 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-06-22 20:24:59.073471 | orchestrator | ++ export ARA=false 2025-06-22 20:24:59.073530 | orchestrator | ++ ARA=false 2025-06-22 20:24:59.073542 | orchestrator | ++ export DEPLOY_MODE=manager 2025-06-22 20:24:59.073552 | orchestrator | ++ DEPLOY_MODE=manager 2025-06-22 20:24:59.073563 | orchestrator | ++ export TEMPEST=false 2025-06-22 20:24:59.073573 | orchestrator | ++ TEMPEST=false 2025-06-22 20:24:59.073584 | orchestrator | ++ export IS_ZUUL=true 2025-06-22 20:24:59.073594 | orchestrator | ++ IS_ZUUL=true 2025-06-22 20:24:59.073605 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.14 2025-06-22 20:24:59.073616 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.14 2025-06-22 20:24:59.073627 | orchestrator | ++ export EXTERNAL_API=false 2025-06-22 20:24:59.073637 | orchestrator | ++ EXTERNAL_API=false 2025-06-22 20:24:59.073648 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-06-22 20:24:59.073658 | orchestrator | ++ IMAGE_USER=ubuntu 2025-06-22 20:24:59.073669 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-06-22 20:24:59.073679 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-06-22 20:24:59.073690 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-06-22 20:24:59.073700 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-06-22 20:24:59.073721 | orchestrator | + [[ -e /etc/redhat-release ]] 2025-06-22 20:24:59.073732 | orchestrator | + source /etc/os-release 2025-06-22 20:24:59.073742 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.2 LTS' 2025-06-22 20:24:59.073753 | orchestrator | ++ NAME=Ubuntu 2025-06-22 20:24:59.073763 | orchestrator | ++ VERSION_ID=24.04 2025-06-22 20:24:59.073774 | orchestrator | ++ VERSION='24.04.2 LTS (Noble Numbat)' 2025-06-22 20:24:59.073784 | orchestrator | ++ VERSION_CODENAME=noble 2025-06-22 20:24:59.073795 | orchestrator | ++ ID=ubuntu 2025-06-22 20:24:59.073805 | orchestrator | ++ ID_LIKE=debian 2025-06-22 20:24:59.073816 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/ 2025-06-22 20:24:59.073826 | orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/ 2025-06-22 20:24:59.073837 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/ 2025-06-22 20:24:59.073848 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy 2025-06-22 20:24:59.073859 | orchestrator | ++ UBUNTU_CODENAME=noble 2025-06-22 20:24:59.073870 | orchestrator | ++ LOGO=ubuntu-logo 2025-06-22 20:24:59.073880 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]] 2025-06-22 20:24:59.073892 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client' 2025-06-22 20:24:59.073904 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2025-06-22 20:24:59.099927 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client 2025-06-22 
20:25:21.770248 | orchestrator | 2025-06-22 20:25:21.770355 | orchestrator | # Status of Elasticsearch 2025-06-22 20:25:21.770395 | orchestrator | 2025-06-22 20:25:21.770407 | orchestrator | + pushd /opt/configuration/contrib 2025-06-22 20:25:21.770420 | orchestrator | + echo 2025-06-22 20:25:21.770431 | orchestrator | + echo '# Status of Elasticsearch' 2025-06-22 20:25:21.770442 | orchestrator | + echo 2025-06-22 20:25:21.770453 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s 2025-06-22 20:25:21.947191 | orchestrator | OK - elasticsearch (kolla_logging) is running. status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0 2025-06-22 20:25:21.947309 | orchestrator | 2025-06-22 20:25:21.947325 | orchestrator | # Status of MariaDB 2025-06-22 20:25:21.947338 | orchestrator | 2025-06-22 20:25:21.947349 | orchestrator | + echo 2025-06-22 20:25:21.947361 | orchestrator | + echo '# Status of MariaDB' 2025-06-22 20:25:21.947372 | orchestrator | + echo 2025-06-22 20:25:21.947383 | orchestrator | + MARIADB_USER=root_shard_0 2025-06-22 20:25:21.947395 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1 2025-06-22 20:25:21.999181 | orchestrator | Reading package lists... 2025-06-22 20:25:22.308207 | orchestrator | Building dependency tree... 2025-06-22 20:25:22.309579 | orchestrator | Reading state information... 2025-06-22 20:25:22.659860 | orchestrator | bc is already the newest version (1.07.1-3ubuntu4). 2025-06-22 20:25:22.659961 | orchestrator | bc set to manually installed. 2025-06-22 20:25:22.659976 | orchestrator | 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 
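
The check_galera_cluster plugin invoked above evaluates the Galera status variable wsrep_cluster_size; the OK line that follows reports three cluster nodes. A minimal manual equivalent using the mysql client installed a moment ago, assuming the same root_shard_0 credentials and internal API endpoint as in the check command:

# Query the Galera cluster size directly (sketch; credentials as used above).
mysql -h api-int.testbed.osism.xyz -u root_shard_0 -ppassword \
      -e "SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';"
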
2025-06-22 20:25:23.359021 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size) 2025-06-22 20:25:23.359987 | orchestrator | 2025-06-22 20:25:23.360152 | orchestrator | + echo 2025-06-22 20:25:23.360177 | orchestrator | + echo '# Status of Prometheus' 2025-06-22 20:25:23.360190 | orchestrator | # Status of Prometheus 2025-06-22 20:25:23.360480 | orchestrator | + echo 2025-06-22 20:25:23.360535 | orchestrator | 2025-06-22 20:25:23.360673 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy 2025-06-22 20:25:23.413653 | orchestrator | Unauthorized 2025-06-22 20:25:23.416677 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready 2025-06-22 20:25:23.482943 | orchestrator | Unauthorized 2025-06-22 20:25:23.486118 | orchestrator | 2025-06-22 20:25:23.486149 | orchestrator | + echo 2025-06-22 20:25:23.486161 | orchestrator | + echo '# Status of RabbitMQ' 2025-06-22 20:25:23.487163 | orchestrator | # Status of RabbitMQ 2025-06-22 20:25:23.487184 | orchestrator | 2025-06-22 20:25:23.487196 | orchestrator | + echo 2025-06-22 20:25:23.487208 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password 2025-06-22 20:25:23.922272 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node OK (3) nb_running_ram_node OK (0) 2025-06-22 20:25:23.931134 | orchestrator | 2025-06-22 20:25:23.931202 | orchestrator | # Status of Redis 2025-06-22 20:25:23.931216 | orchestrator | 2025-06-22 20:25:23.931227 | orchestrator | + echo 2025-06-22 20:25:23.931238 | orchestrator | + echo '# Status of Redis' 2025-06-22 20:25:23.931249 | orchestrator | + echo 2025-06-22 20:25:23.931262 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j 2025-06-22 20:25:23.935900 | orchestrator | TCP OK - 0.002 second response time on 192.168.16.10 port 6379|time=0.001974s;;;0.000000;10.000000 2025-06-22 20:25:23.936748 | orchestrator | 2025-06-22 20:25:23.936768 | orchestrator | # Create backup of MariaDB database 2025-06-22 20:25:23.936781 | orchestrator | 2025-06-22 20:25:23.936792 | orchestrator | + popd 2025-06-22 20:25:23.936802 | orchestrator | + echo 2025-06-22 20:25:23.936813 | orchestrator | + echo '# Create backup of MariaDB database' 2025-06-22 20:25:23.936824 | orchestrator | + echo 2025-06-22 20:25:23.936835 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full 2025-06-22 20:25:25.738284 | orchestrator | 2025-06-22 20:25:25 | INFO  | Task 1e836fda-1115-4448-b2fa-d277d403a018 (mariadb_backup) was prepared for execution. 2025-06-22 20:25:25.738383 | orchestrator | 2025-06-22 20:25:25 | INFO  | It takes a moment until task 1e836fda-1115-4448-b2fa-d277d403a018 (mariadb_backup) has been started and output is visible here. 
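
The mariadb_backup play triggered above takes a full backup via Mariabackup on one of the database hosts; the "Could not match supplied host pattern" warnings in the play output below only mean that no hosts were added to the restart groups, so those plays are skipped. A sketch of the two usual invocations, mirroring the command above and treating the incremental variant as an assumption about the underlying kolla-ansible role:

# Full backup, as executed above.
osism apply mariadb_backup -e mariadb_backup_type=full

# Incremental backup on top of the last full one (assumption: the
# kolla-ansible mariadb role accepts mariadb_backup_type=incremental).
osism apply mariadb_backup -e mariadb_backup_type=incremental
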
2025-06-22 20:25:29.544949 | orchestrator | 2025-06-22 20:25:29.545850 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-06-22 20:25:29.545885 | orchestrator | 2025-06-22 20:25:29.546445 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-06-22 20:25:29.547250 | orchestrator | Sunday 22 June 2025 20:25:29 +0000 (0:00:00.174) 0:00:00.174 *********** 2025-06-22 20:25:29.730846 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:25:29.853723 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:25:29.853871 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:25:29.854475 | orchestrator | 2025-06-22 20:25:29.854961 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-06-22 20:25:29.855724 | orchestrator | Sunday 22 June 2025 20:25:29 +0000 (0:00:00.310) 0:00:00.484 *********** 2025-06-22 20:25:30.418338 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-06-22 20:25:30.420991 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-06-22 20:25:30.421065 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-06-22 20:25:30.421075 | orchestrator | 2025-06-22 20:25:30.422088 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-06-22 20:25:30.422464 | orchestrator | 2025-06-22 20:25:30.423274 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-06-22 20:25:30.423884 | orchestrator | Sunday 22 June 2025 20:25:30 +0000 (0:00:00.566) 0:00:01.051 *********** 2025-06-22 20:25:30.813744 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-06-22 20:25:30.813843 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-06-22 20:25:30.813858 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-06-22 20:25:30.813880 | orchestrator | 2025-06-22 20:25:30.814421 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-06-22 20:25:30.815648 | orchestrator | Sunday 22 June 2025 20:25:30 +0000 (0:00:00.388) 0:00:01.439 *********** 2025-06-22 20:25:31.353537 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-06-22 20:25:31.354182 | orchestrator | 2025-06-22 20:25:31.357791 | orchestrator | TASK [mariadb : Get MariaDB container facts] *********************************** 2025-06-22 20:25:31.357823 | orchestrator | Sunday 22 June 2025 20:25:31 +0000 (0:00:00.545) 0:00:01.985 *********** 2025-06-22 20:25:34.156926 | orchestrator | ok: [testbed-node-1] 2025-06-22 20:25:34.165482 | orchestrator | ok: [testbed-node-0] 2025-06-22 20:25:34.165655 | orchestrator | ok: [testbed-node-2] 2025-06-22 20:25:34.170776 | orchestrator | 2025-06-22 20:25:34.170830 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] ******************* 2025-06-22 20:25:34.171459 | orchestrator | Sunday 22 June 2025 20:25:34 +0000 (0:00:02.798) 0:00:04.783 *********** 2025-06-22 20:25:51.517348 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-06-22 20:25:51.517455 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2025-06-22 20:25:51.517473 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-06-22 20:25:51.517487 | orchestrator | 
mariadb_bootstrap_restart 2025-06-22 20:25:51.588858 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:25:51.593767 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:25:51.593818 | orchestrator | changed: [testbed-node-0] 2025-06-22 20:25:51.594430 | orchestrator | 2025-06-22 20:25:51.594866 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-06-22 20:25:51.595820 | orchestrator | skipping: no hosts matched 2025-06-22 20:25:51.596447 | orchestrator | 2025-06-22 20:25:51.596862 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-06-22 20:25:51.597604 | orchestrator | skipping: no hosts matched 2025-06-22 20:25:51.598104 | orchestrator | 2025-06-22 20:25:51.598869 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-06-22 20:25:51.599720 | orchestrator | skipping: no hosts matched 2025-06-22 20:25:51.599964 | orchestrator | 2025-06-22 20:25:51.602478 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-06-22 20:25:51.603280 | orchestrator | 2025-06-22 20:25:51.604826 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-06-22 20:25:51.605829 | orchestrator | Sunday 22 June 2025 20:25:51 +0000 (0:00:17.438) 0:00:22.221 *********** 2025-06-22 20:25:51.767335 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:25:51.880554 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:25:51.881285 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:25:51.884623 | orchestrator | 2025-06-22 20:25:51.884884 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-06-22 20:25:51.884914 | orchestrator | Sunday 22 June 2025 20:25:51 +0000 (0:00:00.291) 0:00:22.513 *********** 2025-06-22 20:25:52.246567 | orchestrator | skipping: [testbed-node-0] 2025-06-22 20:25:52.290898 | orchestrator | skipping: [testbed-node-1] 2025-06-22 20:25:52.292484 | orchestrator | skipping: [testbed-node-2] 2025-06-22 20:25:52.293701 | orchestrator | 2025-06-22 20:25:52.294650 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 20:25:52.294788 | orchestrator | 2025-06-22 20:25:52 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-22 20:25:52.295227 | orchestrator | 2025-06-22 20:25:52 | INFO  | Please wait and do not abort execution. 
2025-06-22 20:25:52.296336 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-06-22 20:25:52.297401 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-22 20:25:52.298273 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-06-22 20:25:52.298706 | orchestrator | 2025-06-22 20:25:52.299372 | orchestrator | 2025-06-22 20:25:52.299738 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 20:25:52.300204 | orchestrator | Sunday 22 June 2025 20:25:52 +0000 (0:00:00.410) 0:00:22.923 *********** 2025-06-22 20:25:52.300744 | orchestrator | =============================================================================== 2025-06-22 20:25:52.302421 | orchestrator | mariadb : Taking full database backup via Mariabackup ------------------ 17.44s 2025-06-22 20:25:52.303050 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 2.80s 2025-06-22 20:25:52.303539 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.57s 2025-06-22 20:25:52.303988 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.55s 2025-06-22 20:25:52.304706 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.41s 2025-06-22 20:25:52.304930 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.39s 2025-06-22 20:25:52.306368 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.31s 2025-06-22 20:25:52.307817 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.29s 2025-06-22 20:25:52.821713 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh 2025-06-22 20:25:52.829103 | orchestrator | + set -e 2025-06-22 20:25:52.829168 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-06-22 20:25:52.829182 | orchestrator | ++ export INTERACTIVE=false 2025-06-22 20:25:52.829195 | orchestrator | ++ INTERACTIVE=false 2025-06-22 20:25:52.829206 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-06-22 20:25:52.829217 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-06-22 20:25:52.829228 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-06-22 20:25:52.829982 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-06-22 20:25:52.834202 | orchestrator | 2025-06-22 20:25:52.834233 | orchestrator | # OpenStack endpoints 2025-06-22 20:25:52.834250 | orchestrator | 2025-06-22 20:25:52.834262 | orchestrator | ++ export MANAGER_VERSION=9.1.0 2025-06-22 20:25:52.834274 | orchestrator | ++ MANAGER_VERSION=9.1.0 2025-06-22 20:25:52.834285 | orchestrator | + export OS_CLOUD=admin 2025-06-22 20:25:52.834296 | orchestrator | + OS_CLOUD=admin 2025-06-22 20:25:52.834307 | orchestrator | + echo 2025-06-22 20:25:52.834318 | orchestrator | + echo '# OpenStack endpoints' 2025-06-22 20:25:52.834329 | orchestrator | + echo 2025-06-22 20:25:52.834340 | orchestrator | + openstack endpoint list 2025-06-22 20:25:56.324153 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-06-22 20:25:56.324291 | orchestrator | | 
ID | Region | Service Name | Service Type | Enabled | Interface | URL | 2025-06-22 20:25:56.324357 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-06-22 20:25:56.324378 | orchestrator | | 047029f4ff30410b89fcdf2bacd3f211 | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 | 2025-06-22 20:25:56.324396 | orchestrator | | 07b4f861d206471ab86338bdaabdda53 | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2025-06-22 20:25:56.324415 | orchestrator | | 080dd40f6f4d485eb75254995a7abb30 | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 | 2025-06-22 20:25:56.324434 | orchestrator | | 33cfc9fc7d394fc8970876d5d1a71515 | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 | 2025-06-22 20:25:56.324452 | orchestrator | | 37ae0b5840db4d089b80cb3599abe040 | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 | 2025-06-22 20:25:56.324471 | orchestrator | | 52bc44679fe54d11ab77fc48a1c5a891 | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 | 2025-06-22 20:25:56.324489 | orchestrator | | 61ba251711f84f1d930a9b9df87528d6 | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 | 2025-06-22 20:25:56.324570 | orchestrator | | 68ef888e234041ada7e14bf3f9937497 | RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2025-06-22 20:25:56.324589 | orchestrator | | 6dfba5438d1646d9b6c632d34f93c1f3 | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 | 2025-06-22 20:25:56.324607 | orchestrator | | 7d431bbf698147d1b031aa5e60e8284c | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 | 2025-06-22 20:25:56.324624 | orchestrator | | 841d15d6045b4daf8b60afabe4fa609c | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 | 2025-06-22 20:25:56.324640 | orchestrator | | 8583134d9ed44d64b5bb13d081cb1027 | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 | 2025-06-22 20:25:56.324657 | orchestrator | | 8a06cb642cdc47d4a2343a0708be8f1c | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2025-06-22 20:25:56.324676 | orchestrator | | 8b9c02c54fd9409ab7793c64084d7708 | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 | 2025-06-22 20:25:56.324694 | orchestrator | | 94f04f4e15024f45b7c9b07c06347637 | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2025-06-22 20:25:56.324712 | orchestrator | | 97ca8f1629fd4798b754272feaf20763 | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 | 2025-06-22 20:25:56.324731 | orchestrator | | 994957b12f594ae8b3d841012a48aa27 | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 | 2025-06-22 20:25:56.324749 | orchestrator | | acae3fafb634404b9454552c69075ce1 | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 | 2025-06-22 20:25:56.324768 | orchestrator | | 
b302b949bfd54853bdf187230f96f598 | RegionOne | designate | dns | True | internal | https://api-int.testbed.osism.xyz:9001 | 2025-06-22 20:25:56.324824 | orchestrator | | bf41c0424c074f929562103ac52c38ec | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 | 2025-06-22 20:25:56.324872 | orchestrator | | f2196e1515624a218fc9f74eb5392e87 | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 | 2025-06-22 20:25:56.324892 | orchestrator | | fdefc036481c4a86b9c7606c5ac81ef0 | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 | 2025-06-22 20:25:56.324910 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-06-22 20:25:56.551805 | orchestrator | 2025-06-22 20:25:56.551900 | orchestrator | # Cinder 2025-06-22 20:25:56.551913 | orchestrator | 2025-06-22 20:25:56.551924 | orchestrator | + echo 2025-06-22 20:25:56.551934 | orchestrator | + echo '# Cinder' 2025-06-22 20:25:56.551944 | orchestrator | + echo 2025-06-22 20:25:56.551953 | orchestrator | + openstack volume service list 2025-06-22 20:25:59.658106 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-06-22 20:25:59.658216 | orchestrator | | Binary | Host | Zone | Status | State | Updated At | 2025-06-22 20:25:59.658231 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-06-22 20:25:59.658242 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2025-06-22T20:25:53.000000 | 2025-06-22 20:25:59.658253 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2025-06-22T20:25:52.000000 | 2025-06-22 20:25:59.658264 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2025-06-22T20:25:52.000000 | 2025-06-22 20:25:59.658275 | orchestrator | | cinder-volume | testbed-node-3@rbd-volumes | nova | enabled | up | 2025-06-22T20:25:52.000000 | 2025-06-22 20:25:59.658285 | orchestrator | | cinder-volume | testbed-node-4@rbd-volumes | nova | enabled | up | 2025-06-22T20:25:54.000000 | 2025-06-22 20:25:59.658315 | orchestrator | | cinder-volume | testbed-node-5@rbd-volumes | nova | enabled | up | 2025-06-22T20:25:56.000000 | 2025-06-22 20:25:59.658326 | orchestrator | | cinder-backup | testbed-node-3 | nova | enabled | up | 2025-06-22T20:25:52.000000 | 2025-06-22 20:25:59.658337 | orchestrator | | cinder-backup | testbed-node-4 | nova | enabled | up | 2025-06-22T20:25:52.000000 | 2025-06-22 20:25:59.658347 | orchestrator | | cinder-backup | testbed-node-5 | nova | enabled | up | 2025-06-22T20:25:53.000000 | 2025-06-22 20:25:59.658358 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-06-22 20:25:59.916084 | orchestrator | 2025-06-22 20:25:59.916182 | orchestrator | # Neutron 2025-06-22 20:25:59.916196 | orchestrator | 2025-06-22 20:25:59.916209 | orchestrator | + echo 2025-06-22 20:25:59.916220 | orchestrator | + echo '# Neutron' 2025-06-22 20:25:59.916231 | orchestrator | + echo 2025-06-22 20:25:59.916242 | orchestrator | + openstack network agent list 2025-06-22 20:26:03.159029 | orchestrator | 
+--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-06-22 20:26:03.159136 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary | 2025-06-22 20:26:03.159151 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-06-22 20:26:03.159163 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller | 2025-06-22 20:26:03.159174 | orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller | 2025-06-22 20:26:03.159209 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller | 2025-06-22 20:26:03.159221 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller | 2025-06-22 20:26:03.159231 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller | 2025-06-22 20:26:03.159242 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller | 2025-06-22 20:26:03.159253 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent | 2025-06-22 20:26:03.159263 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent | 2025-06-22 20:26:03.159274 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent | 2025-06-22 20:26:03.159284 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-06-22 20:26:03.411928 | orchestrator | + openstack network service provider list 2025-06-22 20:26:06.465298 | orchestrator | +---------------+------+---------+ 2025-06-22 20:26:06.465410 | orchestrator | | Service Type | Name | Default | 2025-06-22 20:26:06.465426 | orchestrator | +---------------+------+---------+ 2025-06-22 20:26:06.465438 | orchestrator | | L3_ROUTER_NAT | ovn | True | 2025-06-22 20:26:06.465449 | orchestrator | +---------------+------+---------+ 2025-06-22 20:26:06.711102 | orchestrator | 2025-06-22 20:26:06.711215 | orchestrator | # Nova 2025-06-22 20:26:06.711248 | orchestrator | 2025-06-22 20:26:06.711278 | orchestrator | + echo 2025-06-22 20:26:06.711307 | orchestrator | + echo '# Nova' 2025-06-22 20:26:06.711338 | orchestrator | + echo 2025-06-22 20:26:06.711369 | orchestrator | + openstack compute service list 2025-06-22 20:26:09.884795 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-06-22 20:26:09.884903 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At | 2025-06-22 20:26:09.884918 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-06-22 20:26:09.884931 | orchestrator | | 65903f51-2341-4db1-8a04-a16ba617807e | nova-scheduler | testbed-node-0 | internal | enabled | up | 2025-06-22T20:26:09.000000 | 2025-06-22 20:26:09.884943 | orchestrator | | 
e43230c5-3cce-4f51-8823-c5550772155b | nova-scheduler | testbed-node-1 | internal | enabled | up | 2025-06-22T20:26:04.000000 | 2025-06-22 20:26:09.884954 | orchestrator | | 534ab234-98b2-4232-a16a-72922e55210a | nova-scheduler | testbed-node-2 | internal | enabled | up | 2025-06-22T20:26:05.000000 | 2025-06-22 20:26:09.884965 | orchestrator | | e8703159-3dbd-4a36-a35a-bc48b6ace7f0 | nova-conductor | testbed-node-0 | internal | enabled | up | 2025-06-22T20:26:04.000000 | 2025-06-22 20:26:09.884976 | orchestrator | | e6a0d323-26d7-4946-82e1-a5d59b63c5e1 | nova-conductor | testbed-node-2 | internal | enabled | up | 2025-06-22T20:26:06.000000 | 2025-06-22 20:26:09.884988 | orchestrator | | 6ecff8ad-a0bd-48e0-9937-4c05a573bdd9 | nova-conductor | testbed-node-1 | internal | enabled | up | 2025-06-22T20:26:06.000000 | 2025-06-22 20:26:09.885017 | orchestrator | | 9874e1e1-2b52-4414-ae07-635fa944fbd8 | nova-compute | testbed-node-4 | nova | enabled | up | 2025-06-22T20:26:08.000000 | 2025-06-22 20:26:09.885029 | orchestrator | | b8c0f86f-ecd4-4cfa-b745-f292346c838a | nova-compute | testbed-node-3 | nova | enabled | up | 2025-06-22T20:26:08.000000 | 2025-06-22 20:26:09.885040 | orchestrator | | 020b55d4-b9c3-48ac-84b7-1b5f71f552d8 | nova-compute | testbed-node-5 | nova | enabled | up | 2025-06-22T20:25:59.000000 | 2025-06-22 20:26:09.885077 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-06-22 20:26:10.121151 | orchestrator | + openstack hypervisor list 2025-06-22 20:26:14.467760 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-06-22 20:26:14.467874 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State | 2025-06-22 20:26:14.467888 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-06-22 20:26:14.467901 | orchestrator | | c150bc61-1f0d-4c55-a4ee-2c774ad0e296 | testbed-node-4 | QEMU | 192.168.16.14 | up | 2025-06-22 20:26:14.467912 | orchestrator | | 4e0e1530-e141-4a66-ad07-a41b3036e78a | testbed-node-3 | QEMU | 192.168.16.13 | up | 2025-06-22 20:26:14.467923 | orchestrator | | 853479d4-ac2e-4314-9ad1-b649acd4745f | testbed-node-5 | QEMU | 192.168.16.15 | up | 2025-06-22 20:26:14.467934 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-06-22 20:26:14.723557 | orchestrator | 2025-06-22 20:26:14.723661 | orchestrator | # Run OpenStack test play 2025-06-22 20:26:14.723676 | orchestrator | 2025-06-22 20:26:14.723688 | orchestrator | + echo 2025-06-22 20:26:14.723700 | orchestrator | + echo '# Run OpenStack test play' 2025-06-22 20:26:14.723713 | orchestrator | + echo 2025-06-22 20:26:14.723724 | orchestrator | + osism apply --environment openstack test 2025-06-22 20:26:16.391154 | orchestrator | 2025-06-22 20:26:16 | INFO  | Trying to run play test in environment openstack 2025-06-22 20:26:16.396722 | orchestrator | Registering Redlock._acquired_script 2025-06-22 20:26:16.396774 | orchestrator | Registering Redlock._extend_script 2025-06-22 20:26:16.396786 | orchestrator | Registering Redlock._release_script 2025-06-22 20:26:16.462067 | orchestrator | 2025-06-22 20:26:16 | INFO  | Task a4aadfc4-1bd1-4e09-8741-d1382928e14c (test) was prepared for execution. 
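(Annotation: the endpoint, Cinder, Neutron and Nova listings above are produced by /opt/configuration/scripts/check/300-openstack.sh. A minimal standalone sketch of the same health-check sequence is given below; it assumes a working "admin" entry in clouds.yaml, and the final "fail if any Nova service is down" grep is an added assumption for illustration, not part of the original script.)

#!/usr/bin/env bash
# Hedged sketch of the OpenStack control-plane checks shown in this log.
# Assumes clouds.yaml provides an "admin" cloud entry.
set -e
export OS_CLOUD=admin

openstack endpoint list                  # Keystone service catalogue
openstack volume service list            # Cinder scheduler/volume/backup state
openstack network agent list             # OVN controller and metadata agents
openstack network service provider list  # L3 router service provider (ovn)
openstack compute service list           # Nova scheduler/conductor/compute state
openstack hypervisor list                # Registered QEMU hypervisors

# Added assumption (not in the original 300-openstack.sh): abort early
# if any Nova service reports its State column as "down".
if openstack compute service list -f value -c State | grep -q down; then
    echo "at least one nova service is down" >&2
    exit 1
fi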
2025-06-22 20:26:16.462155 | orchestrator | 2025-06-22 20:26:16 | INFO  | It takes a moment until task a4aadfc4-1bd1-4e09-8741-d1382928e14c (test) has been started and output is visible here. 2025-06-22 20:26:20.280075 | orchestrator | 2025-06-22 20:26:20.280184 | orchestrator | PLAY [Create test project] ***************************************************** 2025-06-22 20:26:20.281192 | orchestrator | 2025-06-22 20:26:20.283728 | orchestrator | TASK [Create test domain] ****************************************************** 2025-06-22 20:26:20.283775 | orchestrator | Sunday 22 June 2025 20:26:20 +0000 (0:00:00.076) 0:00:00.077 *********** 2025-06-22 20:26:23.893372 | orchestrator | changed: [localhost] 2025-06-22 20:26:23.894423 | orchestrator | 2025-06-22 20:26:23.894575 | orchestrator | TASK [Create test-admin user] ************************************************** 2025-06-22 20:26:23.894962 | orchestrator | Sunday 22 June 2025 20:26:23 +0000 (0:00:03.614) 0:00:03.691 *********** 2025-06-22 20:26:28.050432 | orchestrator | changed: [localhost] 2025-06-22 20:26:28.052086 | orchestrator | 2025-06-22 20:26:28.052785 | orchestrator | TASK [Add manager role to user test-admin] ************************************* 2025-06-22 20:26:28.054120 | orchestrator | Sunday 22 June 2025 20:26:28 +0000 (0:00:04.157) 0:00:07.848 *********** 2025-06-22 20:26:34.235503 | orchestrator | changed: [localhost] 2025-06-22 20:26:34.237615 | orchestrator | 2025-06-22 20:26:34.237666 | orchestrator | TASK [Create test project] ***************************************************** 2025-06-22 20:26:34.238261 | orchestrator | Sunday 22 June 2025 20:26:34 +0000 (0:00:06.185) 0:00:14.033 *********** 2025-06-22 20:26:38.240125 | orchestrator | changed: [localhost] 2025-06-22 20:26:38.240241 | orchestrator | 2025-06-22 20:26:38.241131 | orchestrator | TASK [Create test user] ******************************************************** 2025-06-22 20:26:38.243121 | orchestrator | Sunday 22 June 2025 20:26:38 +0000 (0:00:04.000) 0:00:18.034 *********** 2025-06-22 20:26:42.379585 | orchestrator | changed: [localhost] 2025-06-22 20:26:42.380957 | orchestrator | 2025-06-22 20:26:42.382259 | orchestrator | TASK [Add member roles to user test] ******************************************* 2025-06-22 20:26:42.383339 | orchestrator | Sunday 22 June 2025 20:26:42 +0000 (0:00:04.142) 0:00:22.177 *********** 2025-06-22 20:26:53.917345 | orchestrator | changed: [localhost] => (item=load-balancer_member) 2025-06-22 20:26:53.917759 | orchestrator | changed: [localhost] => (item=member) 2025-06-22 20:26:53.917785 | orchestrator | changed: [localhost] => (item=creator) 2025-06-22 20:26:53.917811 | orchestrator | 2025-06-22 20:26:53.918340 | orchestrator | TASK [Create test server group] ************************************************ 2025-06-22 20:26:53.919967 | orchestrator | Sunday 22 June 2025 20:26:53 +0000 (0:00:11.536) 0:00:33.713 *********** 2025-06-22 20:26:58.581989 | orchestrator | changed: [localhost] 2025-06-22 20:26:58.582145 | orchestrator | 2025-06-22 20:26:58.582185 | orchestrator | TASK [Create ssh security group] *********************************************** 2025-06-22 20:26:58.582207 | orchestrator | Sunday 22 June 2025 20:26:58 +0000 (0:00:04.661) 0:00:38.375 *********** 2025-06-22 20:27:03.938490 | orchestrator | changed: [localhost] 2025-06-22 20:27:03.938675 | orchestrator | 2025-06-22 20:27:03.940890 | orchestrator | TASK [Add rule to ssh security group] ****************************************** 2025-06-22 
20:27:03.942168 | orchestrator | Sunday 22 June 2025 20:27:03 +0000 (0:00:05.360) 0:00:43.735 *********** 2025-06-22 20:27:08.738747 | orchestrator | changed: [localhost] 2025-06-22 20:27:08.738906 | orchestrator | 2025-06-22 20:27:08.739489 | orchestrator | TASK [Create icmp security group] ********************************************** 2025-06-22 20:27:08.740731 | orchestrator | Sunday 22 June 2025 20:27:08 +0000 (0:00:04.800) 0:00:48.535 *********** 2025-06-22 20:27:12.537750 | orchestrator | changed: [localhost] 2025-06-22 20:27:12.537971 | orchestrator | 2025-06-22 20:27:12.539093 | orchestrator | TASK [Add rule to icmp security group] ***************************************** 2025-06-22 20:27:12.540663 | orchestrator | Sunday 22 June 2025 20:27:12 +0000 (0:00:03.800) 0:00:52.336 *********** 2025-06-22 20:27:16.447609 | orchestrator | changed: [localhost] 2025-06-22 20:27:16.448142 | orchestrator | 2025-06-22 20:27:16.449158 | orchestrator | TASK [Create test keypair] ***************************************************** 2025-06-22 20:27:16.450369 | orchestrator | Sunday 22 June 2025 20:27:16 +0000 (0:00:03.908) 0:00:56.244 *********** 2025-06-22 20:27:20.553667 | orchestrator | changed: [localhost] 2025-06-22 20:27:20.553974 | orchestrator | 2025-06-22 20:27:20.554821 | orchestrator | TASK [Create test network topology] ******************************************** 2025-06-22 20:27:20.555467 | orchestrator | Sunday 22 June 2025 20:27:20 +0000 (0:00:04.107) 0:01:00.352 *********** 2025-06-22 20:27:36.038827 | orchestrator | changed: [localhost] 2025-06-22 20:27:36.038996 | orchestrator | 2025-06-22 20:27:36.039077 | orchestrator | TASK [Create test instances] *************************************************** 2025-06-22 20:27:36.040495 | orchestrator | Sunday 22 June 2025 20:27:36 +0000 (0:00:15.482) 0:01:15.834 *********** 2025-06-22 20:29:50.154149 | orchestrator | changed: [localhost] => (item=test) 2025-06-22 20:29:50.154803 | orchestrator | changed: [localhost] => (item=test-1) 2025-06-22 20:29:50.157272 | orchestrator | changed: [localhost] => (item=test-2) 2025-06-22 20:29:50.157678 | orchestrator | 2025-06-22 20:29:50.159315 | orchestrator | STILL ALIVE [task 'Create test instances' is running] ************************** 2025-06-22 20:30:20.156222 | orchestrator | changed: [localhost] => (item=test-3) 2025-06-22 20:30:20.156333 | orchestrator | 2025-06-22 20:30:20.156348 | orchestrator | STILL ALIVE [task 'Create test instances' is running] ************************** 2025-06-22 20:30:49.996522 | orchestrator | changed: [localhost] => (item=test-4) 2025-06-22 20:30:49.996695 | orchestrator | 2025-06-22 20:30:49.997428 | orchestrator | TASK [Add metadata to instances] *********************************************** 2025-06-22 20:30:49.998629 | orchestrator | Sunday 22 June 2025 20:30:49 +0000 (0:03:13.957) 0:04:29.792 *********** 2025-06-22 20:31:13.253483 | orchestrator | changed: [localhost] => (item=test) 2025-06-22 20:31:13.253607 | orchestrator | changed: [localhost] => (item=test-1) 2025-06-22 20:31:13.253622 | orchestrator | changed: [localhost] => (item=test-2) 2025-06-22 20:31:13.253633 | orchestrator | changed: [localhost] => (item=test-3) 2025-06-22 20:31:13.254152 | orchestrator | changed: [localhost] => (item=test-4) 2025-06-22 20:31:13.254910 | orchestrator | 2025-06-22 20:31:13.256103 | orchestrator | TASK [Add tag to instances] **************************************************** 2025-06-22 20:31:13.257152 | orchestrator | Sunday 22 June 2025 20:31:13 +0000 
(0:00:23.253) 0:04:53.046 *********** 2025-06-22 20:31:45.385794 | orchestrator | changed: [localhost] => (item=test) 2025-06-22 20:31:45.386053 | orchestrator | changed: [localhost] => (item=test-1) 2025-06-22 20:31:45.386138 | orchestrator | changed: [localhost] => (item=test-2) 2025-06-22 20:31:45.387495 | orchestrator | changed: [localhost] => (item=test-3) 2025-06-22 20:31:45.388804 | orchestrator | changed: [localhost] => (item=test-4) 2025-06-22 20:31:45.389392 | orchestrator | 2025-06-22 20:31:45.390205 | orchestrator | TASK [Create test volume] ****************************************************** 2025-06-22 20:31:45.391024 | orchestrator | Sunday 22 June 2025 20:31:45 +0000 (0:00:32.136) 0:05:25.182 *********** 2025-06-22 20:31:51.965009 | orchestrator | changed: [localhost] 2025-06-22 20:31:51.965755 | orchestrator | 2025-06-22 20:31:51.965789 | orchestrator | TASK [Attach test volume] ****************************************************** 2025-06-22 20:31:51.965803 | orchestrator | Sunday 22 June 2025 20:31:51 +0000 (0:00:06.578) 0:05:31.760 *********** 2025-06-22 20:32:05.540365 | orchestrator | changed: [localhost] 2025-06-22 20:32:05.540445 | orchestrator | 2025-06-22 20:32:05.540452 | orchestrator | TASK [Create floating ip address] ********************************************** 2025-06-22 20:32:05.540458 | orchestrator | Sunday 22 June 2025 20:32:05 +0000 (0:00:13.573) 0:05:45.334 *********** 2025-06-22 20:32:10.693782 | orchestrator | ok: [localhost] 2025-06-22 20:32:10.694601 | orchestrator | 2025-06-22 20:32:10.694618 | orchestrator | TASK [Print floating ip address] *********************************************** 2025-06-22 20:32:10.694895 | orchestrator | Sunday 22 June 2025 20:32:10 +0000 (0:00:05.154) 0:05:50.488 *********** 2025-06-22 20:32:10.732016 | orchestrator | ok: [localhost] => { 2025-06-22 20:32:10.732218 | orchestrator |  "msg": "192.168.112.114" 2025-06-22 20:32:10.733436 | orchestrator | } 2025-06-22 20:32:10.734293 | orchestrator | 2025-06-22 20:32:10.735012 | orchestrator | PLAY RECAP ********************************************************************* 2025-06-22 20:32:10.735290 | orchestrator | 2025-06-22 20:32:10 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-06-22 20:32:10.735600 | orchestrator | 2025-06-22 20:32:10 | INFO  | Please wait and do not abort execution. 
2025-06-22 20:32:10.736991 | orchestrator | localhost : ok=20  changed=18  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-06-22 20:32:10.738713 | orchestrator | 2025-06-22 20:32:10.739572 | orchestrator | 2025-06-22 20:32:10.740414 | orchestrator | TASKS RECAP ******************************************************************** 2025-06-22 20:32:10.741410 | orchestrator | Sunday 22 June 2025 20:32:10 +0000 (0:00:00.040) 0:05:50.529 *********** 2025-06-22 20:32:10.741952 | orchestrator | =============================================================================== 2025-06-22 20:32:10.743013 | orchestrator | Create test instances ------------------------------------------------- 193.96s 2025-06-22 20:32:10.743780 | orchestrator | Add tag to instances --------------------------------------------------- 32.14s 2025-06-22 20:32:10.744458 | orchestrator | Add metadata to instances ---------------------------------------------- 23.25s 2025-06-22 20:32:10.744540 | orchestrator | Create test network topology ------------------------------------------- 15.48s 2025-06-22 20:32:10.745376 | orchestrator | Attach test volume ----------------------------------------------------- 13.57s 2025-06-22 20:32:10.746578 | orchestrator | Add member roles to user test ------------------------------------------ 11.54s 2025-06-22 20:32:10.747645 | orchestrator | Create test volume ------------------------------------------------------ 6.58s 2025-06-22 20:32:10.748458 | orchestrator | Add manager role to user test-admin ------------------------------------- 6.19s 2025-06-22 20:32:10.749147 | orchestrator | Create ssh security group ----------------------------------------------- 5.36s 2025-06-22 20:32:10.749776 | orchestrator | Create floating ip address ---------------------------------------------- 5.15s 2025-06-22 20:32:10.750641 | orchestrator | Add rule to ssh security group ------------------------------------------ 4.80s 2025-06-22 20:32:10.751539 | orchestrator | Create test server group ------------------------------------------------ 4.66s 2025-06-22 20:32:10.751782 | orchestrator | Create test-admin user -------------------------------------------------- 4.16s 2025-06-22 20:32:10.752303 | orchestrator | Create test user -------------------------------------------------------- 4.14s 2025-06-22 20:32:10.752782 | orchestrator | Create test keypair ----------------------------------------------------- 4.11s 2025-06-22 20:32:10.753925 | orchestrator | Create test project ----------------------------------------------------- 4.00s 2025-06-22 20:32:10.754539 | orchestrator | Add rule to icmp security group ----------------------------------------- 3.91s 2025-06-22 20:32:10.755183 | orchestrator | Create icmp security group ---------------------------------------------- 3.80s 2025-06-22 20:32:10.755886 | orchestrator | Create test domain ------------------------------------------------------ 3.61s 2025-06-22 20:32:10.756393 | orchestrator | Print floating ip address ----------------------------------------------- 0.04s 2025-06-22 20:32:11.308700 | orchestrator | + server_list 2025-06-22 20:32:11.308766 | orchestrator | + openstack --os-cloud test server list 2025-06-22 20:32:15.090672 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+ 2025-06-22 20:32:15.090778 | orchestrator | | ID | Name | Status | Networks | Image | Flavor | 2025-06-22 20:32:15.090792 | orchestrator | 
+--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+ 2025-06-22 20:32:15.090802 | orchestrator | | 18cac18b-d343-473e-86f3-fd62a5b7f834 | test-4 | ACTIVE | auto_allocated_network=10.42.0.42, 192.168.112.193 | Cirros 0.6.2 | SCS-1L-1-5 | 2025-06-22 20:32:15.090812 | orchestrator | | c0c53c95-9641-43d9-a762-355acb8e008b | test-3 | ACTIVE | auto_allocated_network=10.42.0.19, 192.168.112.130 | Cirros 0.6.2 | SCS-1L-1-5 | 2025-06-22 20:32:15.090822 | orchestrator | | 06f77372-c332-4e82-a6a1-d421535a275a | test-2 | ACTIVE | auto_allocated_network=10.42.0.4, 192.168.112.170 | Cirros 0.6.2 | SCS-1L-1-5 | 2025-06-22 20:32:15.090831 | orchestrator | | 2b06a803-baa0-4830-8914-75b7cda78db0 | test-1 | ACTIVE | auto_allocated_network=10.42.0.45, 192.168.112.180 | Cirros 0.6.2 | SCS-1L-1-5 | 2025-06-22 20:32:15.090841 | orchestrator | | 6b054acc-ab7e-4d53-9d0b-8c687990cc42 | test | ACTIVE | auto_allocated_network=10.42.0.40, 192.168.112.114 | Cirros 0.6.2 | SCS-1L-1-5 | 2025-06-22 20:32:15.090850 | orchestrator | +--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+ 2025-06-22 20:32:15.366452 | orchestrator | + openstack --os-cloud test server show test 2025-06-22 20:32:18.826991 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-22 20:32:18.827093 | orchestrator | | Field | Value | 2025-06-22 20:32:18.827108 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-22 20:32:18.827119 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-06-22 20:32:18.827149 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-06-22 20:32:18.827160 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-06-22 20:32:18.827180 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test | 2025-06-22 20:32:18.827192 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-06-22 20:32:18.827202 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-06-22 20:32:18.827214 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-06-22 20:32:18.827235 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-06-22 20:32:18.827276 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-06-22 20:32:18.827298 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-06-22 20:32:18.827319 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-06-22 20:32:18.827338 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-06-22 20:32:18.827368 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-06-22 20:32:18.827394 | orchestrator | | OS-EXT-STS:task_state | None | 2025-06-22 20:32:18.827413 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-06-22 20:32:18.827433 | orchestrator | | OS-SRV-USG:launched_at | 2025-06-22T20:28:06.000000 | 2025-06-22 20:32:18.827453 | orchestrator | | 
OS-SRV-USG:terminated_at | None | 2025-06-22 20:32:18.827474 | orchestrator | | accessIPv4 | | 2025-06-22 20:32:18.827539 | orchestrator | | accessIPv6 | | 2025-06-22 20:32:18.827561 | orchestrator | | addresses | auto_allocated_network=10.42.0.40, 192.168.112.114 | 2025-06-22 20:32:18.827591 | orchestrator | | config_drive | | 2025-06-22 20:32:18.827604 | orchestrator | | created | 2025-06-22T20:27:44Z | 2025-06-22 20:32:18.827628 | orchestrator | | description | None | 2025-06-22 20:32:18.827640 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-06-22 20:32:18.827652 | orchestrator | | hostId | 176b898ed8990bbed8fb7fcd29c55181f5df6826a4f71e1aca626ef4 | 2025-06-22 20:32:18.827670 | orchestrator | | host_status | None | 2025-06-22 20:32:18.827683 | orchestrator | | id | 6b054acc-ab7e-4d53-9d0b-8c687990cc42 | 2025-06-22 20:32:18.827695 | orchestrator | | image | Cirros 0.6.2 (0ae3dd80-5c9a-402b-bbf0-d8bc864cc9dd) | 2025-06-22 20:32:18.827708 | orchestrator | | key_name | test | 2025-06-22 20:32:18.827727 | orchestrator | | locked | False | 2025-06-22 20:32:18.827748 | orchestrator | | locked_reason | None | 2025-06-22 20:32:18.827766 | orchestrator | | name | test | 2025-06-22 20:32:18.827787 | orchestrator | | pinned_availability_zone | None | 2025-06-22 20:32:18.827807 | orchestrator | | progress | 0 | 2025-06-22 20:32:18.827818 | orchestrator | | project_id | f9cf99295c744af6bfa1f745635ad94f | 2025-06-22 20:32:18.827828 | orchestrator | | properties | hostname='test' | 2025-06-22 20:32:18.827844 | orchestrator | | security_groups | name='icmp' | 2025-06-22 20:32:18.827855 | orchestrator | | | name='ssh' | 2025-06-22 20:32:18.827866 | orchestrator | | server_groups | None | 2025-06-22 20:32:18.827877 | orchestrator | | status | ACTIVE | 2025-06-22 20:32:18.827887 | orchestrator | | tags | test | 2025-06-22 20:32:18.827898 | orchestrator | | trusted_image_certificates | None | 2025-06-22 20:32:18.827909 | orchestrator | | updated | 2025-06-22T20:30:54Z | 2025-06-22 20:32:18.827925 | orchestrator | | user_id | f4f981b6fcad42009e71b45074089d83 | 2025-06-22 20:32:18.827944 | orchestrator | | volumes_attached | delete_on_termination='False', id='7dc722f7-6ffe-4250-be18-3309554c502f' | 2025-06-22 20:32:18.829876 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-22 20:32:19.079809 | orchestrator | + openstack --os-cloud test server show test-1 2025-06-22 20:32:22.111581 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-22 20:32:22.111647 | orchestrator | | Field | Value | 2025-06-22 20:32:22.111660 | orchestrator | 
+-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-22 20:32:22.111665 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-06-22 20:32:22.111670 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-06-22 20:32:22.111674 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-06-22 20:32:22.111679 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-1 | 2025-06-22 20:32:22.111684 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-06-22 20:32:22.111689 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-06-22 20:32:22.111707 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-06-22 20:32:22.111713 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-06-22 20:32:22.111728 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-06-22 20:32:22.111733 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-06-22 20:32:22.111738 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-06-22 20:32:22.111743 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-06-22 20:32:22.111748 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-06-22 20:32:22.111758 | orchestrator | | OS-EXT-STS:task_state | None | 2025-06-22 20:32:22.111763 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-06-22 20:32:22.111768 | orchestrator | | OS-SRV-USG:launched_at | 2025-06-22T20:28:49.000000 | 2025-06-22 20:32:22.111777 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-06-22 20:32:22.111781 | orchestrator | | accessIPv4 | | 2025-06-22 20:32:22.111786 | orchestrator | | accessIPv6 | | 2025-06-22 20:32:22.111791 | orchestrator | | addresses | auto_allocated_network=10.42.0.45, 192.168.112.180 | 2025-06-22 20:32:22.111800 | orchestrator | | config_drive | | 2025-06-22 20:32:22.111805 | orchestrator | | created | 2025-06-22T20:28:27Z | 2025-06-22 20:32:22.111812 | orchestrator | | description | None | 2025-06-22 20:32:22.111817 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-06-22 20:32:22.111822 | orchestrator | | hostId | 595355eea631ae9697bd6d561fd7418ed2002ea80594ce3b0138663e | 2025-06-22 20:32:22.111827 | orchestrator | | host_status | None | 2025-06-22 20:32:22.111831 | orchestrator | | id | 2b06a803-baa0-4830-8914-75b7cda78db0 | 2025-06-22 20:32:22.111839 | orchestrator | | image | Cirros 0.6.2 (0ae3dd80-5c9a-402b-bbf0-d8bc864cc9dd) | 2025-06-22 20:32:22.111844 | orchestrator | | key_name | test | 2025-06-22 20:32:22.111849 | orchestrator | | locked | False | 2025-06-22 20:32:22.111854 | orchestrator | | locked_reason | None | 2025-06-22 20:32:22.111858 | orchestrator | | name | test-1 | 2025-06-22 20:32:22.111866 | orchestrator | | pinned_availability_zone | None | 2025-06-22 20:32:22.111871 | orchestrator | | progress | 0 | 2025-06-22 20:32:22.111879 | orchestrator | | project_id | f9cf99295c744af6bfa1f745635ad94f | 2025-06-22 20:32:22.111883 | orchestrator | | properties | hostname='test-1' | 2025-06-22 20:32:22.111888 | 
orchestrator | | security_groups | name='icmp' | 2025-06-22 20:32:22.111892 | orchestrator | | | name='ssh' | 2025-06-22 20:32:22.111900 | orchestrator | | server_groups | None | 2025-06-22 20:32:22.111904 | orchestrator | | status | ACTIVE | 2025-06-22 20:32:22.111909 | orchestrator | | tags | test | 2025-06-22 20:32:22.111913 | orchestrator | | trusted_image_certificates | None | 2025-06-22 20:32:22.111918 | orchestrator | | updated | 2025-06-22T20:30:59Z | 2025-06-22 20:32:22.111925 | orchestrator | | user_id | f4f981b6fcad42009e71b45074089d83 | 2025-06-22 20:32:22.111930 | orchestrator | | volumes_attached | | 2025-06-22 20:32:22.115925 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-22 20:32:22.372923 | orchestrator | + openstack --os-cloud test server show test-2 2025-06-22 20:32:25.454412 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-22 20:32:25.454577 | orchestrator | | Field | Value | 2025-06-22 20:32:25.454599 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-22 20:32:25.454672 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-06-22 20:32:25.454687 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-06-22 20:32:25.454698 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-06-22 20:32:25.454709 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-2 | 2025-06-22 20:32:25.454719 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-06-22 20:32:25.454730 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-06-22 20:32:25.454756 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-06-22 20:32:25.454768 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-06-22 20:32:25.454823 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-06-22 20:32:25.454837 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-06-22 20:32:25.454858 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-06-22 20:32:25.454869 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-06-22 20:32:25.454880 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-06-22 20:32:25.454891 | orchestrator | | OS-EXT-STS:task_state | None | 2025-06-22 20:32:25.454902 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-06-22 20:32:25.454913 | orchestrator | | OS-SRV-USG:launched_at | 2025-06-22T20:29:29.000000 | 2025-06-22 20:32:25.454923 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-06-22 20:32:25.454934 | orchestrator | | accessIPv4 | | 2025-06-22 20:32:25.454945 | orchestrator | | accessIPv6 | | 2025-06-22 20:32:25.454956 | orchestrator | | addresses | 
auto_allocated_network=10.42.0.4, 192.168.112.170 | 2025-06-22 20:32:25.454974 | orchestrator | | config_drive | | 2025-06-22 20:32:25.454993 | orchestrator | | created | 2025-06-22T20:29:07Z | 2025-06-22 20:32:25.455004 | orchestrator | | description | None | 2025-06-22 20:32:25.455086 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-06-22 20:32:25.455101 | orchestrator | | hostId | cb44859b7c1cd93aa00d28ea33b048da312d49d7a6aecdf768045ff6 | 2025-06-22 20:32:25.455112 | orchestrator | | host_status | None | 2025-06-22 20:32:25.455123 | orchestrator | | id | 06f77372-c332-4e82-a6a1-d421535a275a | 2025-06-22 20:32:25.455134 | orchestrator | | image | Cirros 0.6.2 (0ae3dd80-5c9a-402b-bbf0-d8bc864cc9dd) | 2025-06-22 20:32:25.455145 | orchestrator | | key_name | test | 2025-06-22 20:32:25.455156 | orchestrator | | locked | False | 2025-06-22 20:32:25.455167 | orchestrator | | locked_reason | None | 2025-06-22 20:32:25.455186 | orchestrator | | name | test-2 | 2025-06-22 20:32:25.455303 | orchestrator | | pinned_availability_zone | None | 2025-06-22 20:32:25.455320 | orchestrator | | progress | 0 | 2025-06-22 20:32:25.455331 | orchestrator | | project_id | f9cf99295c744af6bfa1f745635ad94f | 2025-06-22 20:32:25.455342 | orchestrator | | properties | hostname='test-2' | 2025-06-22 20:32:25.455353 | orchestrator | | security_groups | name='icmp' | 2025-06-22 20:32:25.455387 | orchestrator | | | name='ssh' | 2025-06-22 20:32:25.455399 | orchestrator | | server_groups | None | 2025-06-22 20:32:25.455410 | orchestrator | | status | ACTIVE | 2025-06-22 20:32:25.455432 | orchestrator | | tags | test | 2025-06-22 20:32:25.455444 | orchestrator | | trusted_image_certificates | None | 2025-06-22 20:32:25.455455 | orchestrator | | updated | 2025-06-22T20:31:04Z | 2025-06-22 20:32:25.455563 | orchestrator | | user_id | f4f981b6fcad42009e71b45074089d83 | 2025-06-22 20:32:25.455588 | orchestrator | | volumes_attached | | 2025-06-22 20:32:25.455776 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-22 20:32:25.735004 | orchestrator | + openstack --os-cloud test server show test-3 2025-06-22 20:32:28.809791 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-22 20:32:28.809929 | orchestrator | | Field | Value | 2025-06-22 20:32:28.809945 | orchestrator | 
+-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-22 20:32:28.809957 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-06-22 20:32:28.809968 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-06-22 20:32:28.809979 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-06-22 20:32:28.809990 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-3 | 2025-06-22 20:32:28.810074 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-06-22 20:32:28.810089 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-06-22 20:32:28.810115 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-06-22 20:32:28.810126 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-06-22 20:32:28.810156 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-06-22 20:32:28.810167 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-06-22 20:32:28.810178 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-06-22 20:32:28.810189 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-06-22 20:32:28.810199 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-06-22 20:32:28.810210 | orchestrator | | OS-EXT-STS:task_state | None | 2025-06-22 20:32:28.810221 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-06-22 20:32:28.810240 | orchestrator | | OS-SRV-USG:launched_at | 2025-06-22T20:30:05.000000 | 2025-06-22 20:32:28.810251 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-06-22 20:32:28.810266 | orchestrator | | accessIPv4 | | 2025-06-22 20:32:28.810278 | orchestrator | | accessIPv6 | | 2025-06-22 20:32:28.810289 | orchestrator | | addresses | auto_allocated_network=10.42.0.19, 192.168.112.130 | 2025-06-22 20:32:28.810306 | orchestrator | | config_drive | | 2025-06-22 20:32:28.810318 | orchestrator | | created | 2025-06-22T20:29:49Z | 2025-06-22 20:32:28.810330 | orchestrator | | description | None | 2025-06-22 20:32:28.810342 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-06-22 20:32:28.810355 | orchestrator | | hostId | 176b898ed8990bbed8fb7fcd29c55181f5df6826a4f71e1aca626ef4 | 2025-06-22 20:32:28.810367 | orchestrator | | host_status | None | 2025-06-22 20:32:28.810397 | orchestrator | | id | c0c53c95-9641-43d9-a762-355acb8e008b | 2025-06-22 20:32:28.810410 | orchestrator | | image | Cirros 0.6.2 (0ae3dd80-5c9a-402b-bbf0-d8bc864cc9dd) | 2025-06-22 20:32:28.810422 | orchestrator | | key_name | test | 2025-06-22 20:32:28.810440 | orchestrator | | locked | False | 2025-06-22 20:32:28.810453 | orchestrator | | locked_reason | None | 2025-06-22 20:32:28.810465 | orchestrator | | name | test-3 | 2025-06-22 20:32:28.810524 | orchestrator | | pinned_availability_zone | None | 2025-06-22 20:32:28.810539 | orchestrator | | progress | 0 | 2025-06-22 20:32:28.810552 | orchestrator | | project_id | f9cf99295c744af6bfa1f745635ad94f | 2025-06-22 20:32:28.810564 | orchestrator | | properties | hostname='test-3' | 2025-06-22 20:32:28.810577 | 
orchestrator | | security_groups | name='icmp' | 2025-06-22 20:32:28.810597 | orchestrator | | | name='ssh' | 2025-06-22 20:32:28.810609 | orchestrator | | server_groups | None | 2025-06-22 20:32:28.810622 | orchestrator | | status | ACTIVE | 2025-06-22 20:32:28.810634 | orchestrator | | tags | test | 2025-06-22 20:32:28.810652 | orchestrator | | trusted_image_certificates | None | 2025-06-22 20:32:28.810664 | orchestrator | | updated | 2025-06-22T20:31:08Z | 2025-06-22 20:32:28.810683 | orchestrator | | user_id | f4f981b6fcad42009e71b45074089d83 | 2025-06-22 20:32:28.810696 | orchestrator | | volumes_attached | | 2025-06-22 20:32:28.817196 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-22 20:32:29.069349 | orchestrator | + openstack --os-cloud test server show test-4 2025-06-22 20:32:32.238235 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-22 20:32:32.238358 | orchestrator | | Field | Value | 2025-06-22 20:32:32.238375 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-22 20:32:32.238387 | orchestrator | | OS-DCF:diskConfig | MANUAL | 2025-06-22 20:32:32.238399 | orchestrator | | OS-EXT-AZ:availability_zone | nova | 2025-06-22 20:32:32.238410 | orchestrator | | OS-EXT-SRV-ATTR:host | None | 2025-06-22 20:32:32.238421 | orchestrator | | OS-EXT-SRV-ATTR:hostname | test-4 | 2025-06-22 20:32:32.238432 | orchestrator | | OS-EXT-SRV-ATTR:hypervisor_hostname | None | 2025-06-22 20:32:32.238443 | orchestrator | | OS-EXT-SRV-ATTR:instance_name | None | 2025-06-22 20:32:32.238455 | orchestrator | | OS-EXT-SRV-ATTR:kernel_id | None | 2025-06-22 20:32:32.238466 | orchestrator | | OS-EXT-SRV-ATTR:launch_index | None | 2025-06-22 20:32:32.238555 | orchestrator | | OS-EXT-SRV-ATTR:ramdisk_id | None | 2025-06-22 20:32:32.238579 | orchestrator | | OS-EXT-SRV-ATTR:reservation_id | None | 2025-06-22 20:32:32.238591 | orchestrator | | OS-EXT-SRV-ATTR:root_device_name | None | 2025-06-22 20:32:32.238601 | orchestrator | | OS-EXT-SRV-ATTR:user_data | None | 2025-06-22 20:32:32.238612 | orchestrator | | OS-EXT-STS:power_state | Running | 2025-06-22 20:32:32.238623 | orchestrator | | OS-EXT-STS:task_state | None | 2025-06-22 20:32:32.238634 | orchestrator | | OS-EXT-STS:vm_state | active | 2025-06-22 20:32:32.238650 | orchestrator | | OS-SRV-USG:launched_at | 2025-06-22T20:30:39.000000 | 2025-06-22 20:32:32.238662 | orchestrator | | OS-SRV-USG:terminated_at | None | 2025-06-22 20:32:32.238673 | orchestrator | | accessIPv4 | | 2025-06-22 20:32:32.238683 | orchestrator | | accessIPv6 | | 2025-06-22 20:32:32.238695 | orchestrator | | addresses | 
auto_allocated_network=10.42.0.42, 192.168.112.193 | 2025-06-22 20:32:32.238720 | orchestrator | | config_drive | | 2025-06-22 20:32:32.238732 | orchestrator | | created | 2025-06-22T20:30:22Z | 2025-06-22 20:32:32.238743 | orchestrator | | description | None | 2025-06-22 20:32:32.238754 | orchestrator | | flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' | 2025-06-22 20:32:32.238766 | orchestrator | | hostId | 595355eea631ae9697bd6d561fd7418ed2002ea80594ce3b0138663e | 2025-06-22 20:32:32.238779 | orchestrator | | host_status | None | 2025-06-22 20:32:32.238791 | orchestrator | | id | 18cac18b-d343-473e-86f3-fd62a5b7f834 | 2025-06-22 20:32:32.238809 | orchestrator | | image | Cirros 0.6.2 (0ae3dd80-5c9a-402b-bbf0-d8bc864cc9dd) | 2025-06-22 20:32:32.238822 | orchestrator | | key_name | test | 2025-06-22 20:32:32.238835 | orchestrator | | locked | False | 2025-06-22 20:32:32.238854 | orchestrator | | locked_reason | None | 2025-06-22 20:32:32.238867 | orchestrator | | name | test-4 | 2025-06-22 20:32:32.238885 | orchestrator | | pinned_availability_zone | None | 2025-06-22 20:32:32.238899 | orchestrator | | progress | 0 | 2025-06-22 20:32:32.238911 | orchestrator | | project_id | f9cf99295c744af6bfa1f745635ad94f | 2025-06-22 20:32:32.238923 | orchestrator | | properties | hostname='test-4' | 2025-06-22 20:32:32.238936 | orchestrator | | security_groups | name='icmp' | 2025-06-22 20:32:32.238948 | orchestrator | | | name='ssh' | 2025-06-22 20:32:32.238965 | orchestrator | | server_groups | None | 2025-06-22 20:32:32.238978 | orchestrator | | status | ACTIVE | 2025-06-22 20:32:32.238991 | orchestrator | | tags | test | 2025-06-22 20:32:32.239010 | orchestrator | | trusted_image_certificates | None | 2025-06-22 20:32:32.239022 | orchestrator | | updated | 2025-06-22T20:31:12Z | 2025-06-22 20:32:32.239040 | orchestrator | | user_id | f4f981b6fcad42009e71b45074089d83 | 2025-06-22 20:32:32.239053 | orchestrator | | volumes_attached | | 2025-06-22 20:32:32.243403 | orchestrator | +-------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 2025-06-22 20:32:32.505308 | orchestrator | + server_ping 2025-06-22 20:32:32.506934 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address' 2025-06-22 20:32:32.506983 | orchestrator | ++ tr -d '\r' 2025-06-22 20:32:35.491957 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-22 20:32:35.492071 | orchestrator | + ping -c3 192.168.112.180 2025-06-22 20:32:35.504645 | orchestrator | PING 192.168.112.180 (192.168.112.180) 56(84) bytes of data. 
2025-06-22 20:32:35.504728 | orchestrator | 64 bytes from 192.168.112.180: icmp_seq=1 ttl=63 time=5.34 ms 2025-06-22 20:32:36.503917 | orchestrator | 64 bytes from 192.168.112.180: icmp_seq=2 ttl=63 time=2.81 ms 2025-06-22 20:32:37.505587 | orchestrator | 64 bytes from 192.168.112.180: icmp_seq=3 ttl=63 time=2.45 ms 2025-06-22 20:32:37.505693 | orchestrator | 2025-06-22 20:32:37.505710 | orchestrator | --- 192.168.112.180 ping statistics --- 2025-06-22 20:32:37.505722 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2025-06-22 20:32:37.505734 | orchestrator | rtt min/avg/max/mdev = 2.449/3.533/5.340/1.286 ms 2025-06-22 20:32:37.506735 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-22 20:32:37.506767 | orchestrator | + ping -c3 192.168.112.114 2025-06-22 20:32:37.518939 | orchestrator | PING 192.168.112.114 (192.168.112.114) 56(84) bytes of data. 2025-06-22 20:32:37.518973 | orchestrator | 64 bytes from 192.168.112.114: icmp_seq=1 ttl=63 time=7.74 ms 2025-06-22 20:32:38.515873 | orchestrator | 64 bytes from 192.168.112.114: icmp_seq=2 ttl=63 time=2.43 ms 2025-06-22 20:32:39.516332 | orchestrator | 64 bytes from 192.168.112.114: icmp_seq=3 ttl=63 time=1.65 ms 2025-06-22 20:32:39.516456 | orchestrator | 2025-06-22 20:32:39.516473 | orchestrator | --- 192.168.112.114 ping statistics --- 2025-06-22 20:32:39.516537 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-06-22 20:32:39.516549 | orchestrator | rtt min/avg/max/mdev = 1.653/3.942/7.743/2.706 ms 2025-06-22 20:32:39.517302 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-22 20:32:39.517327 | orchestrator | + ping -c3 192.168.112.193 2025-06-22 20:32:39.527522 | orchestrator | PING 192.168.112.193 (192.168.112.193) 56(84) bytes of data. 2025-06-22 20:32:39.527552 | orchestrator | 64 bytes from 192.168.112.193: icmp_seq=1 ttl=63 time=5.65 ms 2025-06-22 20:32:40.526172 | orchestrator | 64 bytes from 192.168.112.193: icmp_seq=2 ttl=63 time=1.99 ms 2025-06-22 20:32:41.527815 | orchestrator | 64 bytes from 192.168.112.193: icmp_seq=3 ttl=63 time=1.62 ms 2025-06-22 20:32:41.527920 | orchestrator | 2025-06-22 20:32:41.527935 | orchestrator | --- 192.168.112.193 ping statistics --- 2025-06-22 20:32:41.527947 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-06-22 20:32:41.527958 | orchestrator | rtt min/avg/max/mdev = 1.622/3.088/5.649/1.817 ms 2025-06-22 20:32:41.527969 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-22 20:32:41.527981 | orchestrator | + ping -c3 192.168.112.170 2025-06-22 20:32:41.539468 | orchestrator | PING 192.168.112.170 (192.168.112.170) 56(84) bytes of data. 
2025-06-22 20:32:41.539598 | orchestrator | 64 bytes from 192.168.112.170: icmp_seq=1 ttl=63 time=6.95 ms 2025-06-22 20:32:42.536401 | orchestrator | 64 bytes from 192.168.112.170: icmp_seq=2 ttl=63 time=2.53 ms 2025-06-22 20:32:43.538212 | orchestrator | 64 bytes from 192.168.112.170: icmp_seq=3 ttl=63 time=1.59 ms 2025-06-22 20:32:43.538312 | orchestrator | 2025-06-22 20:32:43.538327 | orchestrator | --- 192.168.112.170 ping statistics --- 2025-06-22 20:32:43.538340 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-06-22 20:32:43.538350 | orchestrator | rtt min/avg/max/mdev = 1.591/3.689/6.948/2.335 ms 2025-06-22 20:32:43.538362 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-06-22 20:32:43.538373 | orchestrator | + ping -c3 192.168.112.130 2025-06-22 20:32:43.549790 | orchestrator | PING 192.168.112.130 (192.168.112.130) 56(84) bytes of data. 2025-06-22 20:32:43.549845 | orchestrator | 64 bytes from 192.168.112.130: icmp_seq=1 ttl=63 time=7.35 ms 2025-06-22 20:32:44.546855 | orchestrator | 64 bytes from 192.168.112.130: icmp_seq=2 ttl=63 time=2.13 ms 2025-06-22 20:32:45.547381 | orchestrator | 64 bytes from 192.168.112.130: icmp_seq=3 ttl=63 time=1.91 ms 2025-06-22 20:32:45.547464 | orchestrator | 2025-06-22 20:32:45.547473 | orchestrator | --- 192.168.112.130 ping statistics --- 2025-06-22 20:32:45.547523 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms 2025-06-22 20:32:45.547531 | orchestrator | rtt min/avg/max/mdev = 1.914/3.798/7.349/2.512 ms 2025-06-22 20:32:45.548277 | orchestrator | + [[ 9.1.0 == \l\a\t\e\s\t ]] 2025-06-22 20:32:45.639939 | orchestrator | ok: Runtime: 0:09:40.063933 2025-06-22 20:32:45.679411 | 2025-06-22 20:32:45.679572 | TASK [Run tempest] 2025-06-22 20:32:46.214009 | orchestrator | skipping: Conditional result was False 2025-06-22 20:32:46.223853 | 2025-06-22 20:32:46.224009 | TASK [Check prometheus alert status] 2025-06-22 20:32:46.758272 | orchestrator | skipping: Conditional result was False 2025-06-22 20:32:46.761303 | 2025-06-22 20:32:46.761518 | PLAY RECAP 2025-06-22 20:32:46.761673 | orchestrator | ok: 24 changed: 11 unreachable: 0 failed: 0 skipped: 5 rescued: 0 ignored: 0 2025-06-22 20:32:46.761741 | 2025-06-22 20:32:47.003985 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main] 2025-06-22 20:32:47.007841 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2025-06-22 20:32:47.852491 | 2025-06-22 20:32:47.852671 | PLAY [Post output play] 2025-06-22 20:32:47.870676 | 2025-06-22 20:32:47.870870 | LOOP [stage-output : Register sources] 2025-06-22 20:32:47.924278 | 2025-06-22 20:32:47.924554 | TASK [stage-output : Check sudo] 2025-06-22 20:32:48.826195 | orchestrator | sudo: a password is required 2025-06-22 20:32:48.962053 | orchestrator | ok: Runtime: 0:00:00.012737 2025-06-22 20:32:48.982261 | 2025-06-22 20:32:48.982593 | LOOP [stage-output : Set source and destination for files and folders] 2025-06-22 20:32:49.017299 | 2025-06-22 20:32:49.017547 | TASK [stage-output : Build a list of source, dest dictionaries] 2025-06-22 20:32:49.078822 | orchestrator | ok 2025-06-22 20:32:49.085999 | 2025-06-22 20:32:49.086158 | LOOP [stage-output : Ensure target folders exist] 2025-06-22 20:32:49.549202 | orchestrator | ok: "docs" 2025-06-22 20:32:49.549475 | 2025-06-22 20:32:49.788623 | orchestrator | ok: "artifacts" 2025-06-22 
20:32:50.036912 | orchestrator | ok: "logs" 2025-06-22 20:32:50.051124 | 2025-06-22 20:32:50.051248 | LOOP [stage-output : Copy files and folders to staging folder] 2025-06-22 20:32:50.083028 | 2025-06-22 20:32:50.083233 | TASK [stage-output : Make all log files readable] 2025-06-22 20:32:50.372442 | orchestrator | ok 2025-06-22 20:32:50.393146 | 2025-06-22 20:32:50.393705 | TASK [stage-output : Rename log files that match extensions_to_txt] 2025-06-22 20:32:50.464902 | orchestrator | skipping: Conditional result was False 2025-06-22 20:32:50.481627 | 2025-06-22 20:32:50.481918 | TASK [stage-output : Discover log files for compression] 2025-06-22 20:32:50.507706 | orchestrator | skipping: Conditional result was False 2025-06-22 20:32:50.517819 | 2025-06-22 20:32:50.518002 | LOOP [stage-output : Archive everything from logs] 2025-06-22 20:32:50.570746 | 2025-06-22 20:32:50.571046 | PLAY [Post cleanup play] 2025-06-22 20:32:50.581275 | 2025-06-22 20:32:50.581434 | TASK [Set cloud fact (Zuul deployment)] 2025-06-22 20:32:50.639094 | orchestrator | ok 2025-06-22 20:32:50.650444 | 2025-06-22 20:32:50.650567 | TASK [Set cloud fact (local deployment)] 2025-06-22 20:32:50.684813 | orchestrator | skipping: Conditional result was False 2025-06-22 20:32:50.700869 | 2025-06-22 20:32:50.701046 | TASK [Clean the cloud environment] 2025-06-22 20:32:53.200214 | orchestrator | 2025-06-22 20:32:53 - clean up servers 2025-06-22 20:32:53.954555 | orchestrator | 2025-06-22 20:32:53 - testbed-manager 2025-06-22 20:32:54.049946 | orchestrator | 2025-06-22 20:32:54 - testbed-node-5 2025-06-22 20:32:54.142129 | orchestrator | 2025-06-22 20:32:54 - testbed-node-2 2025-06-22 20:32:54.236598 | orchestrator | 2025-06-22 20:32:54 - testbed-node-4 2025-06-22 20:32:54.328301 | orchestrator | 2025-06-22 20:32:54 - testbed-node-0 2025-06-22 20:32:54.427899 | orchestrator | 2025-06-22 20:32:54 - testbed-node-3 2025-06-22 20:32:54.527426 | orchestrator | 2025-06-22 20:32:54 - testbed-node-1 2025-06-22 20:32:54.611149 | orchestrator | 2025-06-22 20:32:54 - clean up keypairs 2025-06-22 20:32:54.624536 | orchestrator | 2025-06-22 20:32:54 - testbed 2025-06-22 20:32:54.644259 | orchestrator | 2025-06-22 20:32:54 - wait for servers to be gone 2025-06-22 20:33:03.411617 | orchestrator | 2025-06-22 20:33:03 - clean up ports 2025-06-22 20:33:03.598634 | orchestrator | 2025-06-22 20:33:03 - 29a8cc1c-77dc-450b-a102-48e06d5c126b 2025-06-22 20:33:03.875965 | orchestrator | 2025-06-22 20:33:03 - 80267aca-d2ca-43c7-ad44-c77ebbd5584a 2025-06-22 20:33:04.139859 | orchestrator | 2025-06-22 20:33:04 - 845fc830-b130-45f8-90f4-2dad714b512f 2025-06-22 20:33:04.351784 | orchestrator | 2025-06-22 20:33:04 - 979d5280-fab9-4b07-bc53-541f45b8120e 2025-06-22 20:33:04.660689 | orchestrator | 2025-06-22 20:33:04 - ddf2ed67-5fe1-40ac-81f5-8dc04b8b1393 2025-06-22 20:33:04.883942 | orchestrator | 2025-06-22 20:33:04 - ef6eae2a-e410-4aaf-a343-ed44e1149d7d 2025-06-22 20:33:05.093547 | orchestrator | 2025-06-22 20:33:05 - fd5b3dfc-5023-49fc-a168-6716d582af0e 2025-06-22 20:33:05.533821 | orchestrator | 2025-06-22 20:33:05 - clean up volumes 2025-06-22 20:33:05.642797 | orchestrator | 2025-06-22 20:33:05 - testbed-volume-5-node-base 2025-06-22 20:33:05.685703 | orchestrator | 2025-06-22 20:33:05 - testbed-volume-4-node-base 2025-06-22 20:33:05.731981 | orchestrator | 2025-06-22 20:33:05 - testbed-volume-2-node-base 2025-06-22 20:33:05.773298 | orchestrator | 2025-06-22 20:33:05 - testbed-volume-0-node-base 2025-06-22 20:33:05.819923 | orchestrator | 2025-06-22 
20:33:05 - testbed-volume-3-node-base 2025-06-22 20:33:05.866868 | orchestrator | 2025-06-22 20:33:05 - testbed-volume-1-node-base 2025-06-22 20:33:05.905958 | orchestrator | 2025-06-22 20:33:05 - testbed-volume-manager-base 2025-06-22 20:33:05.947956 | orchestrator | 2025-06-22 20:33:05 - testbed-volume-1-node-4 2025-06-22 20:33:05.990472 | orchestrator | 2025-06-22 20:33:05 - testbed-volume-2-node-5 2025-06-22 20:33:06.033428 | orchestrator | 2025-06-22 20:33:06 - testbed-volume-7-node-4 2025-06-22 20:33:06.080094 | orchestrator | 2025-06-22 20:33:06 - testbed-volume-0-node-3 2025-06-22 20:33:06.123186 | orchestrator | 2025-06-22 20:33:06 - testbed-volume-5-node-5 2025-06-22 20:33:06.165157 | orchestrator | 2025-06-22 20:33:06 - testbed-volume-6-node-3 2025-06-22 20:33:06.211064 | orchestrator | 2025-06-22 20:33:06 - testbed-volume-8-node-5 2025-06-22 20:33:06.254806 | orchestrator | 2025-06-22 20:33:06 - testbed-volume-4-node-4 2025-06-22 20:33:06.302837 | orchestrator | 2025-06-22 20:33:06 - testbed-volume-3-node-3 2025-06-22 20:33:06.343165 | orchestrator | 2025-06-22 20:33:06 - disconnect routers 2025-06-22 20:33:06.487181 | orchestrator | 2025-06-22 20:33:06 - testbed 2025-06-22 20:33:07.948662 | orchestrator | 2025-06-22 20:33:07 - clean up subnets 2025-06-22 20:33:07.990801 | orchestrator | 2025-06-22 20:33:07 - subnet-testbed-management 2025-06-22 20:33:08.194311 | orchestrator | 2025-06-22 20:33:08 - clean up networks 2025-06-22 20:33:08.362160 | orchestrator | 2025-06-22 20:33:08 - net-testbed-management 2025-06-22 20:33:08.656669 | orchestrator | 2025-06-22 20:33:08 - clean up security groups 2025-06-22 20:33:09.126181 | orchestrator | 2025-06-22 20:33:09 - testbed-node 2025-06-22 20:33:09.230295 | orchestrator | 2025-06-22 20:33:09 - testbed-management 2025-06-22 20:33:09.350284 | orchestrator | 2025-06-22 20:33:09 - clean up floating ips 2025-06-22 20:33:09.386248 | orchestrator | 2025-06-22 20:33:09 - 81.163.192.14 2025-06-22 20:33:09.757538 | orchestrator | 2025-06-22 20:33:09 - clean up routers 2025-06-22 20:33:09.816782 | orchestrator | 2025-06-22 20:33:09 - testbed 2025-06-22 20:33:11.266260 | orchestrator | ok: Runtime: 0:00:20.133285 2025-06-22 20:33:11.270646 | 2025-06-22 20:33:11.270793 | PLAY RECAP 2025-06-22 20:33:11.270931 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0 2025-06-22 20:33:11.270985 | 2025-06-22 20:33:11.407655 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2025-06-22 20:33:11.408684 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2025-06-22 20:33:12.147263 | 2025-06-22 20:33:12.147493 | PLAY [Cleanup play] 2025-06-22 20:33:12.164302 | 2025-06-22 20:33:12.164464 | TASK [Set cloud fact (Zuul deployment)] 2025-06-22 20:33:12.215412 | orchestrator | ok 2025-06-22 20:33:12.225237 | 2025-06-22 20:33:12.225434 | TASK [Set cloud fact (local deployment)] 2025-06-22 20:33:12.250602 | orchestrator | skipping: Conditional result was False 2025-06-22 20:33:12.264654 | 2025-06-22 20:33:12.264801 | TASK [Clean the cloud environment] 2025-06-22 20:33:13.409743 | orchestrator | 2025-06-22 20:33:13 - clean up servers 2025-06-22 20:33:13.894266 | orchestrator | 2025-06-22 20:33:13 - clean up keypairs 2025-06-22 20:33:13.907972 | orchestrator | 2025-06-22 20:33:13 - wait for servers to be gone 2025-06-22 20:33:13.951993 | orchestrator | 2025-06-22 20:33:13 - clean up ports 2025-06-22 20:33:14.028379 | orchestrator | 2025-06-22 20:33:14 - clean 
up volumes 2025-06-22 20:33:14.102308 | orchestrator | 2025-06-22 20:33:14 - disconnect routers 2025-06-22 20:33:14.123273 | orchestrator | 2025-06-22 20:33:14 - clean up subnets 2025-06-22 20:33:14.147204 | orchestrator | 2025-06-22 20:33:14 - clean up networks 2025-06-22 20:33:14.285013 | orchestrator | 2025-06-22 20:33:14 - clean up security groups 2025-06-22 20:33:14.319564 | orchestrator | 2025-06-22 20:33:14 - clean up floating ips 2025-06-22 20:33:14.344297 | orchestrator | 2025-06-22 20:33:14 - clean up routers 2025-06-22 20:33:14.802713 | orchestrator | ok: Runtime: 0:00:01.343713 2025-06-22 20:33:14.806687 | 2025-06-22 20:33:14.806899 | PLAY RECAP 2025-06-22 20:33:14.807029 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0 2025-06-22 20:33:14.807090 | 2025-06-22 20:33:14.944711 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main] 2025-06-22 20:33:14.947135 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2025-06-22 20:33:15.721396 | 2025-06-22 20:33:15.721574 | PLAY [Base post-fetch] 2025-06-22 20:33:15.737725 | 2025-06-22 20:33:15.737890 | TASK [fetch-output : Set log path for multiple nodes] 2025-06-22 20:33:15.793750 | orchestrator | skipping: Conditional result was False 2025-06-22 20:33:15.809725 | 2025-06-22 20:33:15.809999 | TASK [fetch-output : Set log path for single node] 2025-06-22 20:33:15.859871 | orchestrator | ok 2025-06-22 20:33:15.868965 | 2025-06-22 20:33:15.869141 | LOOP [fetch-output : Ensure local output dirs] 2025-06-22 20:33:16.419842 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/e73a28ae78f04a178dd960d15158097f/work/logs" 2025-06-22 20:33:16.684056 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/e73a28ae78f04a178dd960d15158097f/work/artifacts" 2025-06-22 20:33:16.975518 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/e73a28ae78f04a178dd960d15158097f/work/docs" 2025-06-22 20:33:16.998708 | 2025-06-22 20:33:16.998894 | LOOP [fetch-output : Collect logs, artifacts and docs] 2025-06-22 20:33:17.936263 | orchestrator | changed: .d..t...... ./ 2025-06-22 20:33:17.936633 | orchestrator | changed: All items complete 2025-06-22 20:33:17.936685 | 2025-06-22 20:33:18.678706 | orchestrator | changed: .d..t...... ./ 2025-06-22 20:33:19.430918 | orchestrator | changed: .d..t...... 
./ 2025-06-22 20:33:19.456829 | 2025-06-22 20:33:19.456981 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir] 2025-06-22 20:33:19.498464 | orchestrator | skipping: Conditional result was False 2025-06-22 20:33:19.501282 | orchestrator | skipping: Conditional result was False 2025-06-22 20:33:19.514723 | 2025-06-22 20:33:19.514876 | PLAY RECAP 2025-06-22 20:33:19.514960 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0 2025-06-22 20:33:19.514998 | 2025-06-22 20:33:19.641121 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2025-06-22 20:33:19.642164 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2025-06-22 20:33:20.417093 | 2025-06-22 20:33:20.417281 | PLAY [Base post] 2025-06-22 20:33:20.433398 | 2025-06-22 20:33:20.433569 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes] 2025-06-22 20:33:21.434863 | orchestrator | changed 2025-06-22 20:33:21.444987 | 2025-06-22 20:33:21.445121 | PLAY RECAP 2025-06-22 20:33:21.445193 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0 2025-06-22 20:33:21.445262 | 2025-06-22 20:33:21.568723 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2025-06-22 20:33:21.569789 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main] 2025-06-22 20:33:22.368064 | 2025-06-22 20:33:22.368239 | PLAY [Base post-logs] 2025-06-22 20:33:22.379594 | 2025-06-22 20:33:22.379751 | TASK [generate-zuul-manifest : Generate Zuul manifest] 2025-06-22 20:33:22.826106 | localhost | changed 2025-06-22 20:33:22.841002 | 2025-06-22 20:33:22.841185 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul] 2025-06-22 20:33:22.877852 | localhost | ok 2025-06-22 20:33:22.882291 | 2025-06-22 20:33:22.882480 | TASK [Set zuul-log-path fact] 2025-06-22 20:33:22.898346 | localhost | ok 2025-06-22 20:33:22.910029 | 2025-06-22 20:33:22.910161 | TASK [set-zuul-log-path-fact : Set log path for a build] 2025-06-22 20:33:22.936002 | localhost | ok 2025-06-22 20:33:22.940125 | 2025-06-22 20:33:22.940249 | TASK [upload-logs : Create log directories] 2025-06-22 20:33:23.452324 | localhost | changed 2025-06-22 20:33:23.459353 | 2025-06-22 20:33:23.459610 | TASK [upload-logs : Ensure logs are readable before uploading] 2025-06-22 20:33:23.960359 | localhost -> localhost | ok: Runtime: 0:00:00.008732 2025-06-22 20:33:23.970008 | 2025-06-22 20:33:23.970207 | TASK [upload-logs : Upload logs to log server] 2025-06-22 20:33:24.601070 | localhost | Output suppressed because no_log was given 2025-06-22 20:33:24.605454 | 2025-06-22 20:33:24.605667 | LOOP [upload-logs : Compress console log and json output] 2025-06-22 20:33:24.668721 | localhost | skipping: Conditional result was False 2025-06-22 20:33:24.673751 | localhost | skipping: Conditional result was False 2025-06-22 20:33:24.686716 | 2025-06-22 20:33:24.686828 | LOOP [upload-logs : Upload compressed console log and json output] 2025-06-22 20:33:24.743216 | localhost | skipping: Conditional result was False 2025-06-22 20:33:24.743960 | 2025-06-22 20:33:24.747120 | localhost | skipping: Conditional result was False 2025-06-22 20:33:24.757587 | 2025-06-22 20:33:24.757703 | LOOP [upload-logs : Upload console log and json output]
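The post.yml and cleanup.yml passes above tear the testbed down in dependency order: servers and keypairs first, then ports and volumes, then the router is disconnected before subnets, networks, security groups and floating IPs are removed, and the router itself goes last. A rough manual equivalent with the plain openstack CLI, assuming the resource names visible in the log (the job drives this through its own cleanup task; waiting logic and ID lookups are omitted here):

    # Approximate manual teardown in the same order as the job's cleanup step.
    # Cloud name "test" and resource names (testbed-*) are taken from the log above.
    export OS_CLOUD=test
    openstack server delete --wait testbed-manager \
        testbed-node-0 testbed-node-1 testbed-node-2 \
        testbed-node-3 testbed-node-4 testbed-node-5
    openstack keypair delete testbed
    # ports and per-node volumes are removed by ID once the servers are gone
    openstack router remove subnet testbed subnet-testbed-management
    openstack subnet delete subnet-testbed-management
    openstack network delete net-testbed-management
    openstack security group delete testbed-node testbed-management
    # floating IPs are released by address, then the router itself is deleted
    openstack router delete testbed

Running the same cleanup a second time, as cleanup.yml does, is effectively a no-op: each "clean up ..." phase finds nothing left to delete.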