2025-05-31 19:13:31.256425 | Job console starting
2025-05-31 19:13:31.282282 | Updating git repos
2025-05-31 19:13:31.354415 | Cloning repos into workspace
2025-05-31 19:13:31.594987 | Restoring repo states
2025-05-31 19:13:31.620671 | Merging changes
2025-05-31 19:13:31.620716 | Checking out repos
2025-05-31 19:13:31.896433 | Preparing playbooks
2025-05-31 19:13:32.575235 | Running Ansible setup
2025-05-31 19:13:36.893541 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-05-31 19:13:37.683234 |
2025-05-31 19:13:37.683419 | PLAY [Base pre]
2025-05-31 19:13:37.701188 |
2025-05-31 19:13:37.701344 | TASK [Setup log path fact]
2025-05-31 19:13:37.733162 | orchestrator | ok
2025-05-31 19:13:37.751748 |
2025-05-31 19:13:37.751964 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-05-31 19:13:37.789075 | orchestrator | ok
2025-05-31 19:13:37.803941 |
2025-05-31 19:13:37.804100 | TASK [emit-job-header : Print job information]
2025-05-31 19:13:37.861792 | # Job Information
2025-05-31 19:13:37.862139 | Ansible Version: 2.16.14
2025-05-31 19:13:37.862198 | Job: testbed-deploy-in-a-nutshell-ubuntu-24.04
2025-05-31 19:13:37.862254 | Pipeline: post
2025-05-31 19:13:37.862293 | Executor: 521e9411259a
2025-05-31 19:13:37.862327 | Triggered by: https://github.com/osism/testbed/commit/eb630747ca6a5192572150bd459686791fac5495
2025-05-31 19:13:37.862364 | Event ID: 52b04958-3e53-11f0-9825-e4f5ed398a52
2025-05-31 19:13:37.871922 |
2025-05-31 19:13:37.872068 | LOOP [emit-job-header : Print node information]
2025-05-31 19:13:38.035169 | orchestrator | ok:
2025-05-31 19:13:38.035485 | orchestrator | # Node Information
2025-05-31 19:13:38.035541 | orchestrator | Inventory Hostname: orchestrator
2025-05-31 19:13:38.035582 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-05-31 19:13:38.035618 | orchestrator | Username: zuul-testbed04
2025-05-31 19:13:38.035652 | orchestrator | Distro: Debian 12.11
2025-05-31 19:13:38.035690 | orchestrator | Provider: static-testbed
2025-05-31 19:13:38.035724 | orchestrator | Region:
2025-05-31 19:13:38.035759 | orchestrator | Label: testbed-orchestrator
2025-05-31 19:13:38.035791 | orchestrator | Product Name: OpenStack Nova
2025-05-31 19:13:38.035824 | orchestrator | Interface IP: 81.163.193.140
2025-05-31 19:13:38.056418 |
2025-05-31 19:13:38.056556 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-05-31 19:13:38.567618 | orchestrator -> localhost | changed
2025-05-31 19:13:38.576619 |
2025-05-31 19:13:38.576770 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-05-31 19:13:39.719210 | orchestrator -> localhost | changed
2025-05-31 19:13:39.745995 |
2025-05-31 19:13:39.746169 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-05-31 19:13:40.059270 | orchestrator -> localhost | ok
2025-05-31 19:13:40.074212 |
2025-05-31 19:13:40.074388 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-05-31 19:13:40.115448 | orchestrator | ok
2025-05-31 19:13:40.139220 | orchestrator | included: /var/lib/zuul/builds/4f3fe0e9d0624a79acaf86d3d81ffecd/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-05-31 19:13:40.147969 |
2025-05-31 19:13:40.148106 | TASK [add-build-sshkey : Create Temp SSH key]
2025-05-31 19:13:43.260786 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-05-31 19:13:43.261061 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/4f3fe0e9d0624a79acaf86d3d81ffecd/work/4f3fe0e9d0624a79acaf86d3d81ffecd_id_rsa
2025-05-31 19:13:43.261103 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/4f3fe0e9d0624a79acaf86d3d81ffecd/work/4f3fe0e9d0624a79acaf86d3d81ffecd_id_rsa.pub
2025-05-31 19:13:43.261130 | orchestrator -> localhost | The key fingerprint is:
2025-05-31 19:13:43.261155 | orchestrator -> localhost | SHA256:xvKES7/rqvPUPTlb3NcM3UjOeI9TwHVn7hicEoseJ+Q zuul-build-sshkey
2025-05-31 19:13:43.261179 | orchestrator -> localhost | The key's randomart image is:
2025-05-31 19:13:43.261213 | orchestrator -> localhost | +---[RSA 3072]----+
2025-05-31 19:13:43.261235 | orchestrator -> localhost | | . . =|
2025-05-31 19:13:43.261257 | orchestrator -> localhost | | o . = =o|
2025-05-31 19:13:43.261276 | orchestrator -> localhost | | E + B .|
2025-05-31 19:13:43.261296 | orchestrator -> localhost | | o . + * B.|
2025-05-31 19:13:43.261316 | orchestrator -> localhost | | + S . . B =|
2025-05-31 19:13:43.261340 | orchestrator -> localhost | | . O . o o *.|
2025-05-31 19:13:43.261361 | orchestrator -> localhost | | o + = o + =|
2025-05-31 19:13:43.261382 | orchestrator -> localhost | | .. . = o |
2025-05-31 19:13:43.261403 | orchestrator -> localhost | | .+oo+.. |
2025-05-31 19:13:43.261423 | orchestrator -> localhost | +----[SHA256]-----+
2025-05-31 19:13:43.261488 | orchestrator -> localhost | ok: Runtime: 0:00:02.538748
2025-05-31 19:13:43.269808 |
2025-05-31 19:13:43.269965 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-05-31 19:13:43.320191 | orchestrator | ok
2025-05-31 19:13:43.349135 | orchestrator | included: /var/lib/zuul/builds/4f3fe0e9d0624a79acaf86d3d81ffecd/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-05-31 19:13:43.370416 |
2025-05-31 19:13:43.370751 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-05-31 19:13:43.398354 | orchestrator | skipping: Conditional result was False
2025-05-31 19:13:43.407523 |
2025-05-31 19:13:43.407645 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-05-31 19:13:44.019517 | orchestrator | changed
2025-05-31 19:13:44.027890 |
2025-05-31 19:13:44.028032 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-05-31 19:13:44.364195 | orchestrator | ok
2025-05-31 19:13:44.383619 |
2025-05-31 19:13:44.383978 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-05-31 19:13:44.831288 | orchestrator | ok
2025-05-31 19:13:44.840082 |
2025-05-31 19:13:44.840282 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-05-31 19:13:45.285589 | orchestrator | ok
2025-05-31 19:13:45.298526 |
2025-05-31 19:13:45.298779 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-05-31 19:13:45.324869 | orchestrator | skipping: Conditional result was False
2025-05-31 19:13:45.335412 |
2025-05-31 19:13:45.335630 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-05-31 19:13:45.932290 | orchestrator -> localhost | changed
2025-05-31 19:13:45.952203 |
2025-05-31 19:13:45.952357 | TASK [add-build-sshkey : Add back temp key]
2025-05-31 19:13:46.394671 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/4f3fe0e9d0624a79acaf86d3d81ffecd/work/4f3fe0e9d0624a79acaf86d3d81ffecd_id_rsa (zuul-build-sshkey)
2025-05-31 19:13:46.395105 | orchestrator -> localhost | ok: Runtime: 0:00:00.020548
2025-05-31 19:13:46.404341 |
2025-05-31 19:13:46.404460 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-05-31 19:13:46.852820 | orchestrator | ok
2025-05-31 19:13:46.859543 |
2025-05-31 19:13:46.859662 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-05-31 19:13:46.894622 | orchestrator | skipping: Conditional result was False
2025-05-31 19:13:46.957922 |
2025-05-31 19:13:46.958205 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-05-31 19:13:47.365492 | orchestrator | ok
2025-05-31 19:13:47.379173 |
2025-05-31 19:13:47.379365 | TASK [validate-host : Define zuul_info_dir fact]
2025-05-31 19:13:47.430274 | orchestrator | ok
2025-05-31 19:13:47.442091 |
2025-05-31 19:13:47.442267 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-05-31 19:13:47.741895 | orchestrator -> localhost | ok
2025-05-31 19:13:47.754951 |
2025-05-31 19:13:47.755183 | TASK [validate-host : Collect information about the host]
2025-05-31 19:13:48.988678 | orchestrator | ok
2025-05-31 19:13:49.011541 |
2025-05-31 19:13:49.011694 | TASK [validate-host : Sanitize hostname]
2025-05-31 19:13:49.082640 | orchestrator | ok
2025-05-31 19:13:49.088626 |
2025-05-31 19:13:49.088739 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-05-31 19:13:49.852945 | orchestrator -> localhost | changed
2025-05-31 19:13:49.860563 |
2025-05-31 19:13:49.860713 | TASK [validate-host : Collect information about zuul worker]
2025-05-31 19:13:50.318784 | orchestrator | ok
2025-05-31 19:13:50.327887 |
2025-05-31 19:13:50.328038 | TASK [validate-host : Write out all zuul information for each host]
2025-05-31 19:13:50.972664 | orchestrator -> localhost | changed
2025-05-31 19:13:50.984611 |
2025-05-31 19:13:50.984745 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-05-31 19:13:51.281560 | orchestrator | ok
2025-05-31 19:13:51.294902 |
2025-05-31 19:13:51.295057 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-05-31 19:14:33.079970 | orchestrator | changed:
2025-05-31 19:14:33.080347 | orchestrator | .d..t...... src/
2025-05-31 19:14:33.080409 | orchestrator | .d..t...... src/github.com/
2025-05-31 19:14:33.080454 | orchestrator | .d..t...... src/github.com/osism/
2025-05-31 19:14:33.080492 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-05-31 19:14:33.080529 | orchestrator | RedHat.yml
2025-05-31 19:14:33.094777 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-05-31 19:14:33.094796 | orchestrator | RedHat.yml
2025-05-31 19:14:33.094904 | orchestrator | = 1.53.0"...
2025-05-31 19:14:45.147861 | orchestrator | 19:14:45.147 STDOUT terraform: - Finding hashicorp/local versions matching ">= 2.2.0"...
2025-05-31 19:14:45.235684 | orchestrator | 19:14:45.235 STDOUT terraform: - Finding latest version of hashicorp/null...
2025-05-31 19:14:46.275192 | orchestrator | 19:14:46.274 STDOUT terraform: - Installing hashicorp/local v2.5.3...
2025-05-31 19:14:47.124318 | orchestrator | 19:14:47.124 STDOUT terraform: - Installed hashicorp/local v2.5.3 (signed, key ID 0C0AF313E5FD9F80)
2025-05-31 19:14:48.054784 | orchestrator | 19:14:48.054 STDOUT terraform: - Installing hashicorp/null v3.2.4...
2025-05-31 19:14:48.927461 | orchestrator | 19:14:48.927 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2025-05-31 19:14:49.913989 | orchestrator | 19:14:49.913 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.1.0...
2025-05-31 19:14:50.932156 | orchestrator | 19:14:50.931 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.1.0 (signed, key ID 4F80527A391BEFD2)
2025-05-31 19:14:50.932416 | orchestrator | 19:14:50.932 STDOUT terraform: Providers are signed by their developers.
2025-05-31 19:14:50.932428 | orchestrator | 19:14:50.932 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-05-31 19:14:50.932434 | orchestrator | 19:14:50.932 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-05-31 19:14:50.932706 | orchestrator | 19:14:50.932 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-05-31 19:14:50.932723 | orchestrator | 19:14:50.932 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-05-31 19:14:50.932731 | orchestrator | 19:14:50.932 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-05-31 19:14:50.932735 | orchestrator | 19:14:50.932 STDOUT terraform: you run "tofu init" in the future.
2025-05-31 19:14:50.933296 | orchestrator | 19:14:50.933 STDOUT terraform: OpenTofu has been successfully initialized!
2025-05-31 19:14:50.933715 | orchestrator | 19:14:50.933 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-05-31 19:14:50.933734 | orchestrator | 19:14:50.933 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-05-31 19:14:50.933739 | orchestrator | 19:14:50.933 STDOUT terraform: should now work.
2025-05-31 19:14:50.933743 | orchestrator | 19:14:50.933 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-05-31 19:14:50.933747 | orchestrator | 19:14:50.933 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-05-31 19:14:50.933752 | orchestrator | 19:14:50.933 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-05-31 19:14:51.109696 | orchestrator | 19:14:51.109 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed04/terraform` instead.
2025-05-31 19:14:51.303886 | orchestrator | 19:14:51.303 STDOUT terraform: Created and switched to workspace "ci"!
2025-05-31 19:14:51.304014 | orchestrator | 19:14:51.303 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-05-31 19:14:51.304036 | orchestrator | 19:14:51.303 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-05-31 19:14:51.304096 | orchestrator | 19:14:51.304 STDOUT terraform: for this configuration.
2025-05-31 19:14:51.527834 | orchestrator | 19:14:51.527 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed04/terraform` instead.
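The init output above fixes three providers: hashicorp/local (constraint ">= 2.2.0", resolved to v2.5.3), hashicorp/null (unconstrained, latest v3.2.4), and terraform-provider-openstack/openstack (resolved to v3.1.0). A minimal required_providers block consistent with those lines might look as follows; this is a sketch, not the testbed's actual configuration, and only the constraints echoed in the log are confirmed:

    # Sketch of provider requirements implied by the "Finding"/"Installing"
    # lines above; the block layout itself is an assumption.
    terraform {
      required_providers {
        local = {
          source  = "hashicorp/local"
          version = ">= 2.2.0" # resolved to v2.5.3 in this run
        }
        null = {
          source = "hashicorp/null" # unconstrained ("Finding latest version"), v3.2.4
        }
        openstack = {
          source  = "terraform-provider-openstack/openstack"
          version = ">= 1.53.0" # resolved to v3.1.0 in this run
        }
      }
    }

Committing the generated .terraform.lock.hcl, as the output recommends, pins later runs to exactly these resolved versions.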
2025-05-31 19:14:51.646319 | orchestrator | 19:14:51.646 STDOUT terraform: ci.auto.tfvars
2025-05-31 19:14:51.651256 | orchestrator | 19:14:51.651 STDOUT terraform: default_custom.tf
2025-05-31 19:14:51.841405 | orchestrator | 19:14:51.841 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed04/terraform` instead.
2025-05-31 19:14:52.863732 | orchestrator | 19:14:52.863 STDOUT terraform: data.openstack_networking_network_v2.public: Reading...
2025-05-31 19:14:53.385898 | orchestrator | 19:14:53.385 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2025-05-31 19:14:53.599653 | orchestrator | 19:14:53.599 STDOUT terraform: OpenTofu used the selected providers to generate the following execution
2025-05-31 19:14:53.599751 | orchestrator | 19:14:53.599 STDOUT terraform: plan. Resource actions are indicated with the following symbols:
2025-05-31 19:14:53.599763 | orchestrator | 19:14:53.599 STDOUT terraform:  + create
2025-05-31 19:14:53.599772 | orchestrator | 19:14:53.599 STDOUT terraform:  <= read (data resources)
2025-05-31 19:14:53.599834 | orchestrator | 19:14:53.599 STDOUT terraform: OpenTofu will perform the following actions:
2025-05-31 19:14:53.599948 | orchestrator | 19:14:53.599 STDOUT terraform:  # data.openstack_images_image_v2.image will be read during apply
2025-05-31 19:14:53.600006 | orchestrator | 19:14:53.599 STDOUT terraform:  # (config refers to values not yet known)
2025-05-31 19:14:53.600081 | orchestrator | 19:14:53.599 STDOUT terraform:  <= data "openstack_images_image_v2" "image" {
2025-05-31 19:14:53.600137 | orchestrator | 19:14:53.600 STDOUT terraform:  + checksum = (known after apply)
2025-05-31 19:14:53.600196 | orchestrator | 19:14:53.600 STDOUT terraform:  + created_at = (known after apply)
2025-05-31 19:14:53.600255 | orchestrator | 19:14:53.600 STDOUT terraform:  + file = (known after apply)
2025-05-31 19:14:53.600318 | orchestrator | 19:14:53.600 STDOUT terraform:  + id = (known after apply)
2025-05-31 19:14:53.600380 | orchestrator | 19:14:53.600 STDOUT terraform:  + metadata = (known after apply)
2025-05-31 19:14:53.600448 | orchestrator | 19:14:53.600 STDOUT terraform:  + min_disk_gb = (known after apply)
2025-05-31 19:14:53.600507 | orchestrator | 19:14:53.600 STDOUT terraform:  + min_ram_mb = (known after apply)
2025-05-31 19:14:53.600547 | orchestrator | 19:14:53.600 STDOUT terraform:  + most_recent = true
2025-05-31 19:14:53.600624 | orchestrator | 19:14:53.600 STDOUT terraform:  + name = (known after apply)
2025-05-31 19:14:53.600756 | orchestrator | 19:14:53.600 STDOUT terraform:  + protected = (known after apply)
2025-05-31 19:14:53.600800 | orchestrator | 19:14:53.600 STDOUT terraform:  + region = (known after apply)
2025-05-31 19:14:53.600858 | orchestrator | 19:14:53.600 STDOUT terraform:  + schema = (known after apply)
2025-05-31 19:14:53.600918 | orchestrator | 19:14:53.600 STDOUT terraform:  + size_bytes = (known after apply)
2025-05-31 19:14:53.600979 | orchestrator | 19:14:53.600 STDOUT terraform:  + tags = (known after apply)
2025-05-31 19:14:53.601049 | orchestrator | 19:14:53.600 STDOUT terraform:  + updated_at = (known after apply)
2025-05-31 19:14:53.601077 | orchestrator | 19:14:53.601 STDOUT terraform:  }
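Both image lookups in this plan are deferred to apply time because their configuration refers to values that are unknown while planning; only most_recent = true is visible in the output. A sketch of such a lookup, with a hypothetical var.image_name standing in for whatever expression the testbed actually uses:

    # Sketch: var.image_name is a hypothetical stand-in. Because its value
    # is unknown at plan time, the read is deferred ("<=" in the plan).
    variable "image_name" {
      type = string
    }

    data "openstack_images_image_v2" "image" {
      name        = var.image_name
      most_recent = true # confirmed by the plan output above
    }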
2025-05-31 19:14:53.601191 | orchestrator | 19:14:53.601 STDOUT terraform:  # data.openstack_images_image_v2.image_node will be read during apply
2025-05-31 19:14:53.601260 | orchestrator | 19:14:53.601 STDOUT terraform:  # (config refers to values not yet known)
2025-05-31 19:14:53.601348 | orchestrator | 19:14:53.601 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" {
2025-05-31 19:14:53.601416 | orchestrator | 19:14:53.601 STDOUT terraform:  + checksum = (known after apply)
2025-05-31 19:14:53.601485 | orchestrator | 19:14:53.601 STDOUT terraform:  + created_at = (known after apply)
2025-05-31 19:14:53.601556 | orchestrator | 19:14:53.601 STDOUT terraform:  + file = (known after apply)
2025-05-31 19:14:53.601667 | orchestrator | 19:14:53.601 STDOUT terraform:  + id = (known after apply)
2025-05-31 19:14:53.601724 | orchestrator | 19:14:53.601 STDOUT terraform:  + metadata = (known after apply)
2025-05-31 19:14:53.601794 | orchestrator | 19:14:53.601 STDOUT terraform:  + min_disk_gb = (known after apply)
2025-05-31 19:14:53.601863 | orchestrator | 19:14:53.601 STDOUT terraform:  + min_ram_mb = (known after apply)
2025-05-31 19:14:53.601909 | orchestrator | 19:14:53.601 STDOUT terraform:  + most_recent = true
2025-05-31 19:14:53.601982 | orchestrator | 19:14:53.601 STDOUT terraform:  + name = (known after apply)
2025-05-31 19:14:53.602086 | orchestrator | 19:14:53.601 STDOUT terraform:  + protected = (known after apply)
2025-05-31 19:14:53.602157 | orchestrator | 19:14:53.602 STDOUT terraform:  + region = (known after apply)
2025-05-31 19:14:53.602233 | orchestrator | 19:14:53.602 STDOUT terraform:  + schema = (known after apply)
2025-05-31 19:14:53.602314 | orchestrator | 19:14:53.602 STDOUT terraform:  + size_bytes = (known after apply)
2025-05-31 19:14:53.602376 | orchestrator | 19:14:53.602 STDOUT terraform:  + tags = (known after apply)
2025-05-31 19:14:53.602446 | orchestrator | 19:14:53.602 STDOUT terraform:  + updated_at = (known after apply)
2025-05-31 19:14:53.602473 | orchestrator | 19:14:53.602 STDOUT terraform:  }
2025-05-31 19:14:53.602550 | orchestrator | 19:14:53.602 STDOUT terraform:  # local_file.MANAGER_ADDRESS will be created
2025-05-31 19:14:53.602744 | orchestrator | 19:14:53.602 STDOUT terraform:  + resource "local_file" "MANAGER_ADDRESS" {
2025-05-31 19:14:53.602848 | orchestrator | 19:14:53.602 STDOUT terraform:  + content = (known after apply)
2025-05-31 19:14:53.602937 | orchestrator | 19:14:53.602 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-05-31 19:14:53.603027 | orchestrator | 19:14:53.602 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-05-31 19:14:53.603127 | orchestrator | 19:14:53.603 STDOUT terraform:  + content_md5 = (known after apply)
2025-05-31 19:14:53.603213 | orchestrator | 19:14:53.603 STDOUT terraform:  + content_sha1 = (known after apply)
2025-05-31 19:14:53.603302 | orchestrator | 19:14:53.603 STDOUT terraform:  + content_sha256 = (known after apply)
2025-05-31 19:14:53.603390 | orchestrator | 19:14:53.603 STDOUT terraform:  + content_sha512 = (known after apply)
2025-05-31 19:14:53.603454 | orchestrator | 19:14:53.603 STDOUT terraform:  + directory_permission = "0777"
2025-05-31 19:14:53.603518 | orchestrator | 19:14:53.603 STDOUT terraform:  + file_permission = "0644"
2025-05-31 19:14:53.603668 | orchestrator | 19:14:53.603 STDOUT terraform:  + filename = ".MANAGER_ADDRESS.ci"
2025-05-31 19:14:53.603795 | orchestrator | 19:14:53.603 STDOUT terraform:  + id = (known after apply)
2025-05-31 19:14:53.603838 | orchestrator | 19:14:53.603 STDOUT terraform:  }
2025-05-31 19:14:53.603914 | orchestrator | 19:14:53.603 STDOUT terraform:  # local_file.id_rsa_pub will be created
2025-05-31 19:14:53.603979 | orchestrator | 19:14:53.603 STDOUT terraform:  + resource "local_file" "id_rsa_pub" {
2025-05-31 19:14:53.604070 | orchestrator | 19:14:53.603 STDOUT terraform:  + content = (known after apply)
2025-05-31 19:14:53.604158 | orchestrator | 19:14:53.604 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-05-31 19:14:53.604245 | orchestrator | 19:14:53.604 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-05-31 19:14:53.604333 | orchestrator | 19:14:53.604 STDOUT terraform:  + content_md5 = (known after apply)
2025-05-31 19:14:53.604420 | orchestrator | 19:14:53.604 STDOUT terraform:  + content_sha1 = (known after apply)
2025-05-31 19:14:53.604508 | orchestrator | 19:14:53.604 STDOUT terraform:  + content_sha256 = (known after apply)
2025-05-31 19:14:53.604655 | orchestrator | 19:14:53.604 STDOUT terraform:  + content_sha512 = (known after apply)
2025-05-31 19:14:53.604728 | orchestrator | 19:14:53.604 STDOUT terraform:  + directory_permission = "0777"
2025-05-31 19:14:53.604792 | orchestrator | 19:14:53.604 STDOUT terraform:  + file_permission = "0644"
2025-05-31 19:14:53.604872 | orchestrator | 19:14:53.604 STDOUT terraform:  + filename = ".id_rsa.ci.pub"
2025-05-31 19:14:53.604964 | orchestrator | 19:14:53.604 STDOUT terraform:  + id = (known after apply)
2025-05-31 19:14:53.604999 | orchestrator | 19:14:53.604 STDOUT terraform:  }
2025-05-31 19:14:53.605060 | orchestrator | 19:14:53.604 STDOUT terraform:  # local_file.inventory will be created
2025-05-31 19:14:53.605120 | orchestrator | 19:14:53.605 STDOUT terraform:  + resource "local_file" "inventory" {
2025-05-31 19:14:53.605210 | orchestrator | 19:14:53.605 STDOUT terraform:  + content = (known after apply)
2025-05-31 19:14:53.605296 | orchestrator | 19:14:53.605 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-05-31 19:14:53.605381 | orchestrator | 19:14:53.605 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-05-31 19:14:53.605470 | orchestrator | 19:14:53.605 STDOUT terraform:  + content_md5 = (known after apply)
2025-05-31 19:14:53.605561 | orchestrator | 19:14:53.605 STDOUT terraform:  + content_sha1 = (known after apply)
2025-05-31 19:14:53.605670 | orchestrator | 19:14:53.605 STDOUT terraform:  + content_sha256 = (known after apply)
2025-05-31 19:14:53.605757 | orchestrator | 19:14:53.605 STDOUT terraform:  + content_sha512 = (known after apply)
2025-05-31 19:14:53.605818 | orchestrator | 19:14:53.605 STDOUT terraform:  + directory_permission = "0777"
2025-05-31 19:14:53.605876 | orchestrator | 19:14:53.605 STDOUT terraform:  + file_permission = "0644"
2025-05-31 19:14:53.605957 | orchestrator | 19:14:53.605 STDOUT terraform:  + filename = "inventory.ci"
2025-05-31 19:14:53.606067 | orchestrator | 19:14:53.605 STDOUT terraform:  + id = (known after apply)
2025-05-31 19:14:53.606099 | orchestrator | 19:14:53.606 STDOUT terraform:  }
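The plan writes the deployment artifacts (manager address, public key, Ansible inventory) as plain local_file resources with 0644 file permissions, while the private key that follows is a local_sensitive_file: its content is masked as "(sensitive value)" and it gets the tighter 0600/0700 modes. A sketch with hypothetical inputs:

    # Sketch: filenames and permissions match the plan; the variables are
    # hypothetical stand-ins for the testbed's generated values.
    variable "manager_address" { type = string }
    variable "private_key" {
      type      = string
      sensitive = true
    }

    resource "local_file" "MANAGER_ADDRESS" {
      filename = ".MANAGER_ADDRESS.ci"
      content  = var.manager_address
    }

    # The sensitive variant never prints its content in a plan, which is
    # why the SSH private key goes here rather than into local_file.
    resource "local_sensitive_file" "id_rsa" {
      filename        = ".id_rsa.ci"
      content         = var.private_key
      file_permission = "0600"
    }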
2025-05-31 19:14:53.606278 | orchestrator | 19:14:53.606 STDOUT terraform:  # local_sensitive_file.id_rsa will be created
2025-05-31 19:14:53.606352 | orchestrator | 19:14:53.606 STDOUT terraform:  + resource "local_sensitive_file" "id_rsa" {
2025-05-31 19:14:53.606430 | orchestrator | 19:14:53.606 STDOUT terraform:  + content = (sensitive value)
2025-05-31 19:14:53.606515 | orchestrator | 19:14:53.606 STDOUT terraform:  + content_base64sha256 = (known after apply)
2025-05-31 19:14:53.606756 | orchestrator | 19:14:53.606 STDOUT terraform:  + content_base64sha512 = (known after apply)
2025-05-31 19:14:53.606847 | orchestrator | 19:14:53.606 STDOUT terraform:  + content_md5 = (known after apply)
2025-05-31 19:14:53.606860 | orchestrator | 19:14:53.606 STDOUT terraform:  + content_sha1 = (known after apply)
2025-05-31 19:14:53.606871 | orchestrator | 19:14:53.606 STDOUT terraform:  + content_sha256 = (known after apply)
2025-05-31 19:14:53.606960 | orchestrator | 19:14:53.606 STDOUT terraform:  + content_sha512 = (known after apply)
2025-05-31 19:14:53.607013 | orchestrator | 19:14:53.606 STDOUT terraform:  + directory_permission = "0700"
2025-05-31 19:14:53.607072 | orchestrator | 19:14:53.607 STDOUT terraform:  + file_permission = "0600"
2025-05-31 19:14:53.607147 | orchestrator | 19:14:53.607 STDOUT terraform:  + filename = ".id_rsa.ci"
2025-05-31 19:14:53.607237 | orchestrator | 19:14:53.607 STDOUT terraform:  + id = (known after apply)
2025-05-31 19:14:53.607270 | orchestrator | 19:14:53.607 STDOUT terraform:  }
2025-05-31 19:14:53.607342 | orchestrator | 19:14:53.607 STDOUT terraform:  # null_resource.node_semaphore will be created
2025-05-31 19:14:53.607415 | orchestrator | 19:14:53.607 STDOUT terraform:  + resource "null_resource" "node_semaphore" {
2025-05-31 19:14:53.607465 | orchestrator | 19:14:53.607 STDOUT terraform:  + id = (known after apply)
2025-05-31 19:14:53.607498 | orchestrator | 19:14:53.607 STDOUT terraform:  }
2025-05-31 19:14:53.607677 | orchestrator | 19:14:53.607 STDOUT terraform:  # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2025-05-31 19:14:53.607794 | orchestrator | 19:14:53.607 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2025-05-31 19:14:53.607884 | orchestrator | 19:14:53.607 STDOUT terraform:  + attachment = (known after apply)
2025-05-31 19:14:53.607945 | orchestrator | 19:14:53.607 STDOUT terraform:  + availability_zone = "nova"
2025-05-31 19:14:53.608034 | orchestrator | 19:14:53.607 STDOUT terraform:  + id = (known after apply)
2025-05-31 19:14:53.608122 | orchestrator | 19:14:53.608 STDOUT terraform:  + image_id = (known after apply)
2025-05-31 19:14:53.608207 | orchestrator | 19:14:53.608 STDOUT terraform:  + metadata = (known after apply)
2025-05-31 19:14:53.608297 | orchestrator | 19:14:53.608 STDOUT terraform:  + name = "testbed-volume-manager-base"
2025-05-31 19:14:53.608382 | orchestrator | 19:14:53.608 STDOUT terraform:  + region = (known after apply)
2025-05-31 19:14:53.608418 | orchestrator | 19:14:53.608 STDOUT terraform:  + size = 80
2025-05-31 19:14:53.608473 | orchestrator | 19:14:53.608 STDOUT terraform:  + volume_retype_policy = "never"
2025-05-31 19:14:53.608524 | orchestrator | 19:14:53.608 STDOUT terraform:  + volume_type = "ssd"
2025-05-31 19:14:53.608553 | orchestrator | 19:14:53.608 STDOUT terraform:  }
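Storage comes next in the plan: one 80 GB image-backed boot volume per server (manager_base_volume plus six node_base_volumes), followed by nine 20 GB data volumes distributed round-robin across testbed-node-3/4/5 (volume n lands on node 3 + n mod 3, as the names show). A sketch of the boot-volume pattern shown next; count and the name interpolation are assumptions modeled on the plan's "testbed-volume-<n>-node-base" names:

    # Sketch: one image-backed boot volume per node, matching the sizes
    # and names in the plan below.
    resource "openstack_blockstorage_volume_v3" "node_base_volume" {
      count             = 6
      name              = "testbed-volume-${count.index}-node-base"
      availability_zone = "nova"
      size              = 80 # GB
      volume_type       = "ssd"
      image_id          = data.openstack_images_image_v2.image_node.id
    }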
2025-05-31 19:14:53.608696 | orchestrator | 19:14:53.608 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2025-05-31 19:14:53.608790 | orchestrator | 19:14:53.608 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-31 19:14:53.608863 | orchestrator | 19:14:53.608 STDOUT terraform:  + attachment = (known after apply)
2025-05-31 19:14:53.608913 | orchestrator | 19:14:53.608 STDOUT terraform:  + availability_zone = "nova"
2025-05-31 19:14:53.608988 | orchestrator | 19:14:53.608 STDOUT terraform:  + id = (known after apply)
2025-05-31 19:14:53.609063 | orchestrator | 19:14:53.608 STDOUT terraform:  + image_id = (known after apply)
2025-05-31 19:14:53.609139 | orchestrator | 19:14:53.609 STDOUT terraform:  + metadata = (known after apply)
2025-05-31 19:14:53.609229 | orchestrator | 19:14:53.609 STDOUT terraform:  + name = "testbed-volume-0-node-base"
2025-05-31 19:14:53.609302 | orchestrator | 19:14:53.609 STDOUT terraform:  + region = (known after apply)
2025-05-31 19:14:53.609345 | orchestrator | 19:14:53.609 STDOUT terraform:  + size = 80
2025-05-31 19:14:53.609394 | orchestrator | 19:14:53.609 STDOUT terraform:  + volume_retype_policy = "never"
2025-05-31 19:14:53.609443 | orchestrator | 19:14:53.609 STDOUT terraform:  + volume_type = "ssd"
2025-05-31 19:14:53.609474 | orchestrator | 19:14:53.609 STDOUT terraform:  }
2025-05-31 19:14:53.609617 | orchestrator | 19:14:53.609 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2025-05-31 19:14:53.609726 | orchestrator | 19:14:53.609 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-31 19:14:53.609819 | orchestrator | 19:14:53.609 STDOUT terraform:  + attachment = (known after apply)
2025-05-31 19:14:53.609865 | orchestrator | 19:14:53.609 STDOUT terraform:  + availability_zone = "nova"
2025-05-31 19:14:53.609946 | orchestrator | 19:14:53.609 STDOUT terraform:  + id = (known after apply)
2025-05-31 19:14:53.610091 | orchestrator | 19:14:53.609 STDOUT terraform:  + image_id = (known after apply)
2025-05-31 19:14:53.610119 | orchestrator | 19:14:53.610 STDOUT terraform:  + metadata = (known after apply)
2025-05-31 19:14:53.610217 | orchestrator | 19:14:53.610 STDOUT terraform:  + name = "testbed-volume-1-node-base"
2025-05-31 19:14:53.610289 | orchestrator | 19:14:53.610 STDOUT terraform:  + region = (known after apply)
2025-05-31 19:14:53.610334 | orchestrator | 19:14:53.610 STDOUT terraform:  + size = 80
2025-05-31 19:14:53.610414 | orchestrator | 19:14:53.610 STDOUT terraform:  + volume_retype_policy = "never"
2025-05-31 19:14:53.610465 | orchestrator | 19:14:53.610 STDOUT terraform:  + volume_type = "ssd"
2025-05-31 19:14:53.610478 | orchestrator | 19:14:53.610 STDOUT terraform:  }
2025-05-31 19:14:53.610596 | orchestrator | 19:14:53.610 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2025-05-31 19:14:53.610753 | orchestrator | 19:14:53.610 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-31 19:14:53.610828 | orchestrator | 19:14:53.610 STDOUT terraform:  + attachment = (known after apply)
2025-05-31 19:14:53.610880 | orchestrator | 19:14:53.610 STDOUT terraform:  + availability_zone = "nova"
2025-05-31 19:14:53.610977 | orchestrator | 19:14:53.610 STDOUT terraform:  + id = (known after apply)
2025-05-31 19:14:53.611076 | orchestrator | 19:14:53.610 STDOUT terraform:  + image_id = (known after apply)
2025-05-31 19:14:53.611140 | orchestrator | 19:14:53.611 STDOUT terraform:  + metadata = (known after apply)
2025-05-31 19:14:53.611218 | orchestrator | 19:14:53.611 STDOUT terraform:  + name = "testbed-volume-2-node-base"
2025-05-31 19:14:53.611284 | orchestrator | 19:14:53.611 STDOUT terraform:  + region = (known after apply)
2025-05-31 19:14:53.611319 | orchestrator | 19:14:53.611 STDOUT terraform:  + size = 80
2025-05-31 19:14:53.611354 | orchestrator | 19:14:53.611 STDOUT terraform:  + volume_retype_policy = "never"
2025-05-31 19:14:53.611390 | orchestrator | 19:14:53.611 STDOUT terraform:  + volume_type = "ssd"
2025-05-31 19:14:53.611402 | orchestrator | 19:14:53.611 STDOUT terraform:  }
2025-05-31 19:14:53.611503 | orchestrator | 19:14:53.611 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2025-05-31 19:14:53.611635 | orchestrator | 19:14:53.611 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-31 19:14:53.611650 | orchestrator | 19:14:53.611 STDOUT terraform:  + attachment = (known after apply)
2025-05-31 19:14:53.611700 | orchestrator | 19:14:53.611 STDOUT terraform:  + availability_zone = "nova"
2025-05-31 19:14:53.611761 | orchestrator | 19:14:53.611 STDOUT terraform:  + id = (known after apply)
2025-05-31 19:14:53.611823 | orchestrator | 19:14:53.611 STDOUT terraform:  + image_id = (known after apply)
2025-05-31 19:14:53.611884 | orchestrator | 19:14:53.611 STDOUT terraform:  + metadata = (known after apply)
2025-05-31 19:14:53.611984 | orchestrator | 19:14:53.611 STDOUT terraform:  + name = "testbed-volume-3-node-base"
2025-05-31 19:14:53.612053 | orchestrator | 19:14:53.611 STDOUT terraform:  + region = (known after apply)
2025-05-31 19:14:53.612099 | orchestrator | 19:14:53.612 STDOUT terraform:  + size = 80
2025-05-31 19:14:53.612143 | orchestrator | 19:14:53.612 STDOUT terraform:  + volume_retype_policy = "never"
2025-05-31 19:14:53.612187 | orchestrator | 19:14:53.612 STDOUT terraform:  + volume_type = "ssd"
2025-05-31 19:14:53.612200 | orchestrator | 19:14:53.612 STDOUT terraform:  }
2025-05-31 19:14:53.612289 | orchestrator | 19:14:53.612 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2025-05-31 19:14:53.612385 | orchestrator | 19:14:53.612 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-31 19:14:53.612448 | orchestrator | 19:14:53.612 STDOUT terraform:  + attachment = (known after apply)
2025-05-31 19:14:53.612483 | orchestrator | 19:14:53.612 STDOUT terraform:  + availability_zone = "nova"
2025-05-31 19:14:53.612548 | orchestrator | 19:14:53.612 STDOUT terraform:  + id = (known after apply)
2025-05-31 19:14:53.612636 | orchestrator | 19:14:53.612 STDOUT terraform:  + image_id = (known after apply)
2025-05-31 19:14:53.612711 | orchestrator | 19:14:53.612 STDOUT terraform:  + metadata = (known after apply)
2025-05-31 19:14:53.612781 | orchestrator | 19:14:53.612 STDOUT terraform:  + name = "testbed-volume-4-node-base"
2025-05-31 19:14:53.612856 | orchestrator | 19:14:53.612 STDOUT terraform:  + region = (known after apply)
2025-05-31 19:14:53.612892 | orchestrator | 19:14:53.612 STDOUT terraform:  + size = 80
2025-05-31 19:14:53.612941 | orchestrator | 19:14:53.612 STDOUT terraform:  + volume_retype_policy = "never"
2025-05-31 19:14:53.612954 | orchestrator | 19:14:53.612 STDOUT terraform:  + volume_type = "ssd"
2025-05-31 19:14:53.612965 | orchestrator | 19:14:53.612 STDOUT terraform:  }
2025-05-31 19:14:53.613065 | orchestrator | 19:14:53.612 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2025-05-31 19:14:53.613143 | orchestrator | 19:14:53.613 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-31 19:14:53.613205 | orchestrator | 19:14:53.613 STDOUT terraform:  + attachment = (known after apply)
2025-05-31 19:14:53.613246 | orchestrator | 19:14:53.613 STDOUT terraform:  + availability_zone = "nova"
2025-05-31 19:14:53.613318 | orchestrator | 19:14:53.613 STDOUT terraform:  + id = (known after apply)
2025-05-31 19:14:53.613345 | orchestrator | 19:14:53.613 STDOUT terraform:  + image_id = (known after apply)
2025-05-31 19:14:53.613427 | orchestrator | 19:14:53.613 STDOUT terraform:  + metadata = (known after apply)
2025-05-31 19:14:53.613501 | orchestrator | 19:14:53.613 STDOUT terraform:  + name = "testbed-volume-5-node-base"
2025-05-31 19:14:53.613562 | orchestrator | 19:14:53.613 STDOUT terraform:  + region = (known after apply)
2025-05-31 19:14:53.613593 | orchestrator | 19:14:53.613 STDOUT terraform:  + size = 80
2025-05-31 19:14:53.613669 | orchestrator | 19:14:53.613 STDOUT terraform:  + volume_retype_policy = "never"
2025-05-31 19:14:53.613720 | orchestrator | 19:14:53.613 STDOUT terraform:  + volume_type = "ssd"
2025-05-31 19:14:53.613733 | orchestrator | 19:14:53.613 STDOUT terraform:  }
2025-05-31 19:14:53.613807 | orchestrator | 19:14:53.613 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[0] will be created
2025-05-31 19:14:53.613884 | orchestrator | 19:14:53.613 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-05-31 19:14:53.613945 | orchestrator | 19:14:53.613 STDOUT terraform:  + attachment = (known after apply)
2025-05-31 19:14:53.613996 | orchestrator | 19:14:53.613 STDOUT terraform:  + availability_zone = "nova"
2025-05-31 19:14:53.614079 | orchestrator | 19:14:53.613 STDOUT terraform:  + id = (known after apply)
2025-05-31 19:14:53.614131 | orchestrator | 19:14:53.614 STDOUT terraform:  + metadata = (known after apply)
2025-05-31 19:14:53.614195 | orchestrator | 19:14:53.614 STDOUT terraform:  + name = "testbed-volume-0-node-3"
2025-05-31 19:14:53.614254 | orchestrator | 19:14:53.614 STDOUT terraform:  + region = (known after apply)
2025-05-31 19:14:53.614289 | orchestrator | 19:14:53.614 STDOUT terraform:  + size = 20
2025-05-31 19:14:53.614329 | orchestrator | 19:14:53.614 STDOUT terraform:  + volume_retype_policy = "never"
2025-05-31 19:14:53.614368 | orchestrator | 19:14:53.614 STDOUT terraform:  + volume_type = "ssd"
2025-05-31 19:14:53.614391 | orchestrator | 19:14:53.614 STDOUT terraform:  }
2025-05-31 19:14:53.614456 | orchestrator | 19:14:53.614 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created
2025-05-31 19:14:53.614532 | orchestrator | 19:14:53.614 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-05-31 19:14:53.614644 | orchestrator | 19:14:53.614 STDOUT terraform:  + attachment = (known after apply)
2025-05-31 19:14:53.614659 | orchestrator | 19:14:53.614 STDOUT terraform:  + availability_zone = "nova"
2025-05-31 19:14:53.614727 | orchestrator | 19:14:53.614 STDOUT terraform:  + id = (known after apply)
2025-05-31 19:14:53.614782 | orchestrator | 19:14:53.614 STDOUT terraform:  + metadata = (known after apply)
2025-05-31 19:14:53.614845 | orchestrator | 19:14:53.614 STDOUT terraform:  + name = "testbed-volume-1-node-4"
2025-05-31 19:14:53.614903 | orchestrator | 19:14:53.614 STDOUT terraform:  + region = (known after apply)
2025-05-31 19:14:53.614937 | orchestrator | 19:14:53.614 STDOUT terraform:  + size = 20
2025-05-31 19:14:53.614976 | orchestrator | 19:14:53.614 STDOUT terraform:  + volume_retype_policy = "never"
2025-05-31 19:14:53.615019 | orchestrator | 19:14:53.614 STDOUT terraform:  + volume_type = "ssd"
2025-05-31 19:14:53.615031 | orchestrator | 19:14:53.615 STDOUT terraform:  }
2025-05-31 19:14:53.615106 | orchestrator | 19:14:53.615 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created
2025-05-31 19:14:53.615177 | orchestrator | 19:14:53.615 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-05-31 19:14:53.615234 | orchestrator | 19:14:53.615 STDOUT terraform:  + attachment = (known after apply)
2025-05-31 19:14:53.615273 | orchestrator | 19:14:53.615 STDOUT terraform:  + availability_zone = "nova"
2025-05-31 19:14:53.615333 | orchestrator | 19:14:53.615 STDOUT terraform:  + id = (known after apply)
2025-05-31 19:14:53.615396 | orchestrator | 19:14:53.615 STDOUT terraform:  + metadata = (known after apply)
2025-05-31 19:14:53.615460 | orchestrator | 19:14:53.615 STDOUT terraform:  + name = "testbed-volume-2-node-5"
2025-05-31 19:14:53.615517 | orchestrator | 19:14:53.615 STDOUT terraform:  + region = (known after apply)
2025-05-31 19:14:53.615552 | orchestrator | 19:14:53.615 STDOUT terraform:  + size = 20
2025-05-31 19:14:53.615616 | orchestrator | 19:14:53.615 STDOUT terraform:  + volume_retype_policy = "never"
2025-05-31 19:14:53.615661 | orchestrator | 19:14:53.615 STDOUT terraform:  + volume_type = "ssd"
2025-05-31 19:14:53.615673 | orchestrator | 19:14:53.615 STDOUT terraform:  }
2025-05-31 19:14:53.615749 | orchestrator | 19:14:53.615 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[3] will be created
2025-05-31 19:14:53.615837 | orchestrator | 19:14:53.615 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-05-31 19:14:53.615896 | orchestrator | 19:14:53.615 STDOUT terraform:  + attachment = (known after apply)
2025-05-31 19:14:53.615933 | orchestrator | 19:14:53.615 STDOUT terraform:  + availability_zone = "nova"
2025-05-31 19:14:53.615985 | orchestrator | 19:14:53.615 STDOUT terraform:  + id = (known after apply)
2025-05-31 19:14:53.616041 | orchestrator | 19:14:53.615 STDOUT terraform:  + metadata = (known after apply)
2025-05-31 19:14:53.616096 | orchestrator | 19:14:53.616 STDOUT terraform:  + name = "testbed-volume-3-node-3"
2025-05-31 19:14:53.616147 | orchestrator | 19:14:53.616 STDOUT terraform:  + region = (known after apply)
2025-05-31 19:14:53.616177 | orchestrator | 19:14:53.616 STDOUT terraform:  + size = 20
2025-05-31 19:14:53.616220 | orchestrator | 19:14:53.616 STDOUT terraform:  + volume_retype_policy = "never"
2025-05-31 19:14:53.616247 | orchestrator | 19:14:53.616 STDOUT terraform:  + volume_type = "ssd"
2025-05-31 19:14:53.616259 | orchestrator | 19:14:53.616 STDOUT terraform:  }
2025-05-31 19:14:53.616326 | orchestrator | 19:14:53.616 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created
2025-05-31 19:14:53.616387 | orchestrator | 19:14:53.616 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-05-31 19:14:53.616462 | orchestrator | 19:14:53.616 STDOUT terraform:  + attachment = (known after apply)
2025-05-31 19:14:53.616498 | orchestrator | 19:14:53.616 STDOUT terraform:  + availability_zone = "nova"
2025-05-31 19:14:53.616550 | orchestrator | 19:14:53.616 STDOUT terraform:  + id = (known after apply)
2025-05-31 19:14:53.616623 | orchestrator | 19:14:53.616 STDOUT terraform:  + metadata = (known after apply)
2025-05-31 19:14:53.616684 | orchestrator | 19:14:53.616 STDOUT terraform:  + name = "testbed-volume-4-node-4"
2025-05-31 19:14:53.616737 | orchestrator | 19:14:53.616 STDOUT terraform:  + region = (known after apply)
2025-05-31 19:14:53.616767 | orchestrator | 19:14:53.616 STDOUT terraform:  + size = 20
2025-05-31 19:14:53.616803 | orchestrator | 19:14:53.616 STDOUT terraform:  + volume_retype_policy = "never"
2025-05-31 19:14:53.616838 | orchestrator | 19:14:53.616 STDOUT terraform:  + volume_type = "ssd"
2025-05-31 19:14:53.616857 | orchestrator | 19:14:53.616 STDOUT terraform:  }
2025-05-31 19:14:53.616916 | orchestrator | 19:14:53.616 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created
2025-05-31 19:14:53.616979 | orchestrator | 19:14:53.616 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-05-31 19:14:53.617029 | orchestrator | 19:14:53.616 STDOUT terraform:  + attachment = (known after apply)
2025-05-31 19:14:53.617064 | orchestrator | 19:14:53.617 STDOUT terraform:  + availability_zone = "nova"
2025-05-31 19:14:53.617116 | orchestrator | 19:14:53.617 STDOUT terraform:  + id = (known after apply)
2025-05-31 19:14:53.617165 | orchestrator | 19:14:53.617 STDOUT terraform:  + metadata = (known after apply)
2025-05-31 19:14:53.617222 | orchestrator | 19:14:53.617 STDOUT terraform:  + name = "testbed-volume-5-node-5"
2025-05-31 19:14:53.617273 | orchestrator | 19:14:53.617 STDOUT terraform:  + region = (known after apply)
2025-05-31 19:14:53.617304 | orchestrator | 19:14:53.617 STDOUT terraform:  + size = 20
2025-05-31 19:14:53.617339 | orchestrator | 19:14:53.617 STDOUT terraform:  + volume_retype_policy = "never"
2025-05-31 19:14:53.617372 | orchestrator | 19:14:53.617 STDOUT terraform:  + volume_type = "ssd"
2025-05-31 19:14:53.617384 | orchestrator | 19:14:53.617 STDOUT terraform:  }
2025-05-31 19:14:53.617452 | orchestrator | 19:14:53.617 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created
2025-05-31 19:14:53.617513 | orchestrator | 19:14:53.617 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-05-31 19:14:53.617563 | orchestrator | 19:14:53.617 STDOUT terraform:  + attachment = (known after apply)
2025-05-31 19:14:53.617631 | orchestrator | 19:14:53.617 STDOUT terraform:  + availability_zone = "nova"
2025-05-31 19:14:53.617664 | orchestrator | 19:14:53.617 STDOUT terraform:  + id = (known after apply)
2025-05-31 19:14:53.617714 | orchestrator | 19:14:53.617 STDOUT terraform:  + metadata = (known after apply)
2025-05-31 19:14:53.617768 | orchestrator | 19:14:53.617 STDOUT terraform:  + name = "testbed-volume-6-node-3"
2025-05-31 19:14:53.617820 | orchestrator | 19:14:53.617 STDOUT terraform:  + region = (known after apply)
2025-05-31 19:14:53.617849 | orchestrator | 19:14:53.617 STDOUT terraform:  + size = 20
2025-05-31 19:14:53.617890 | orchestrator | 19:14:53.617 STDOUT terraform:  + volume_retype_policy = "never"
2025-05-31 19:14:53.617919 | orchestrator | 19:14:53.617 STDOUT terraform:  + volume_type = "ssd"
2025-05-31 19:14:53.617929 | orchestrator | 19:14:53.617 STDOUT terraform:  }
2025-05-31 19:14:53.617996 | orchestrator | 19:14:53.617 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created
2025-05-31 19:14:53.618075 | orchestrator | 19:14:53.617 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-05-31 19:14:53.618126 | orchestrator | 19:14:53.618 STDOUT terraform:  + attachment = (known after apply)
2025-05-31 19:14:53.618161 | orchestrator | 19:14:53.618 STDOUT terraform:  + availability_zone = "nova"
2025-05-31 19:14:53.618212 | orchestrator | 19:14:53.618 STDOUT terraform:  + id = (known after apply)
2025-05-31 19:14:53.618264 | orchestrator | 19:14:53.618 STDOUT terraform:  + metadata = (known after apply)
2025-05-31 19:14:53.618320 | orchestrator | 19:14:53.618 STDOUT terraform:  + name = "testbed-volume-7-node-4"
2025-05-31 19:14:53.618371 | orchestrator | 19:14:53.618 STDOUT terraform:  + region = (known after apply)
2025-05-31 19:14:53.618401 | orchestrator | 19:14:53.618 STDOUT terraform:  + size = 20
2025-05-31 19:14:53.618435 | orchestrator | 19:14:53.618 STDOUT terraform:  + volume_retype_policy = "never"
2025-05-31 19:14:53.618471 | orchestrator | 19:14:53.618 STDOUT terraform:  + volume_type = "ssd"
2025-05-31 19:14:53.618481 | orchestrator | 19:14:53.618 STDOUT terraform:  }
2025-05-31 19:14:53.618547 | orchestrator | 19:14:53.618 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[8] will be created
2025-05-31 19:14:53.618624 | orchestrator | 19:14:53.618 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-05-31 19:14:53.618679 | orchestrator | 19:14:53.618 STDOUT terraform:  + attachment = (known after apply)
2025-05-31 19:14:53.618713 | orchestrator | 19:14:53.618 STDOUT terraform:  + availability_zone = "nova"
2025-05-31 19:14:53.618762 | orchestrator | 19:14:53.618 STDOUT terraform:  + id = (known after apply)
2025-05-31 19:14:53.618812 | orchestrator | 19:14:53.618 STDOUT terraform:  + metadata = (known after apply)
2025-05-31 19:14:53.618867 | orchestrator | 19:14:53.618 STDOUT terraform:  + name = "testbed-volume-8-node-5"
2025-05-31 19:14:53.618919 | orchestrator | 19:14:53.618 STDOUT terraform:  + region = (known after apply)
2025-05-31 19:14:53.618948 | orchestrator | 19:14:53.618 STDOUT terraform:  + size = 20
2025-05-31 19:14:53.618983 | orchestrator | 19:14:53.618 STDOUT terraform:  + volume_retype_policy = "never"
2025-05-31 19:14:53.619018 | orchestrator | 19:14:53.618 STDOUT terraform:  + volume_type = "ssd"
2025-05-31 19:14:53.619028 | orchestrator | 19:14:53.619 STDOUT terraform:  }
2025-05-31 19:14:53.619095 | orchestrator | 19:14:53.619 STDOUT terraform:  # openstack_compute_instance_v2.manager_server will be created
2025-05-31 19:14:53.619156 | orchestrator | 19:14:53.619 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" {
2025-05-31 19:14:53.619206 | orchestrator | 19:14:53.619 STDOUT terraform:  + access_ip_v4 = (known after apply)
2025-05-31 19:14:53.619256 | orchestrator | 19:14:53.619 STDOUT terraform:  + access_ip_v6 = (known after apply)
2025-05-31 19:14:53.619304 | orchestrator | 19:14:53.619 STDOUT terraform:  + all_metadata = (known after apply)
2025-05-31 19:14:53.619355 | orchestrator | 19:14:53.619 STDOUT terraform:  + all_tags = (known after apply)
2025-05-31 19:14:53.619389 | orchestrator | 19:14:53.619 STDOUT terraform:  + availability_zone = "nova"
2025-05-31 19:14:53.619419 | orchestrator | 19:14:53.619 STDOUT terraform:  + config_drive = true
2025-05-31 19:14:53.619469 | orchestrator | 19:14:53.619 STDOUT terraform:  + created = (known after apply)
2025-05-31 19:14:53.619520 | orchestrator | 19:14:53.619 STDOUT terraform:  + flavor_id = (known after apply)
2025-05-31 19:14:53.619562 | orchestrator | 19:14:53.619 STDOUT terraform:  + flavor_name = "OSISM-4V-16"
2025-05-31 19:14:53.619631 | orchestrator | 19:14:53.619 STDOUT terraform:  + force_delete = false
2025-05-31 19:14:53.619678 | orchestrator | 19:14:53.619 STDOUT terraform:  + hypervisor_hostname = (known after apply)
2025-05-31 19:14:53.619726 | orchestrator | 19:14:53.619 STDOUT terraform:  + id = (known after apply)
2025-05-31 19:14:53.619784 | orchestrator | 19:14:53.619 STDOUT terraform:  + image_id = (known after apply)
2025-05-31 19:14:53.619827 | orchestrator | 19:14:53.619 STDOUT terraform:  + image_name = (known after apply)
2025-05-31 19:14:53.619860 | orchestrator | 19:14:53.619 STDOUT terraform:  + key_pair = "testbed"
2025-05-31 19:14:53.619901 | orchestrator | 19:14:53.619 STDOUT terraform:  + name = "testbed-manager"
2025-05-31 19:14:53.619934 | orchestrator | 19:14:53.619 STDOUT terraform:  + power_state = "active"
2025-05-31 19:14:53.619979 | orchestrator | 19:14:53.619 STDOUT terraform:  + region = (known after apply)
2025-05-31 19:14:53.620024 | orchestrator | 19:14:53.619 STDOUT terraform:  + security_groups = (known after apply)
2025-05-31 19:14:53.620072 | orchestrator | 19:14:53.620 STDOUT terraform:  + stop_before_destroy = false
2025-05-31 19:14:53.620104 | orchestrator | 19:14:53.620 STDOUT terraform:  + updated = (known after apply)
2025-05-31 19:14:53.620165 | orchestrator | 19:14:53.620 STDOUT terraform:  + user_data = (known after apply)
2025-05-31 19:14:53.620175 | orchestrator | 19:14:53.620 STDOUT terraform:  + block_device {
2025-05-31 19:14:53.620202 | orchestrator | 19:14:53.620 STDOUT terraform:  + boot_index = 0
2025-05-31 19:14:53.620236 | orchestrator | 19:14:53.620 STDOUT terraform:  + delete_on_termination = false
2025-05-31 19:14:53.620274 | orchestrator | 19:14:53.620 STDOUT terraform:  + destination_type = "volume"
2025-05-31 19:14:53.620311 | orchestrator | 19:14:53.620 STDOUT terraform:  + multiattach = false
2025-05-31 19:14:53.620349 | orchestrator | 19:14:53.620 STDOUT terraform:  + source_type = "volume"
2025-05-31 19:14:53.620398 | orchestrator | 19:14:53.620 STDOUT terraform:  + uuid = (known after apply)
2025-05-31 19:14:53.620409 | orchestrator | 19:14:53.620 STDOUT terraform:  }
2025-05-31 19:14:53.620418 | orchestrator | 19:14:53.620 STDOUT terraform:  + network {
2025-05-31 19:14:53.620454 | orchestrator | 19:14:53.620 STDOUT terraform:  + access_network = false
2025-05-31 19:14:53.620494 | orchestrator | 19:14:53.620 STDOUT terraform:  + fixed_ip_v4 = (known after apply)
2025-05-31 19:14:53.620533 | orchestrator | 19:14:53.620 STDOUT terraform:  + fixed_ip_v6 = (known after apply)
2025-05-31 19:14:53.620573 | orchestrator | 19:14:53.620 STDOUT terraform:  + mac = (known after apply)
2025-05-31 19:14:53.620628 | orchestrator | 19:14:53.620 STDOUT terraform:  + name = (known after apply)
2025-05-31 19:14:53.620679 | orchestrator | 19:14:53.620 STDOUT terraform:  + port = (known after apply)
2025-05-31 19:14:53.620719 | orchestrator | 19:14:53.620 STDOUT terraform:  + uuid = (known after apply)
2025-05-31 19:14:53.620737 | orchestrator | 19:14:53.620 STDOUT terraform:  }
2025-05-31 19:14:53.620747 | orchestrator | 19:14:53.620 STDOUT terraform:  }
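The manager instance boots from that prepared volume rather than directly from an image: its block_device uses source_type and destination_type "volume" with boot_index 0, and delete_on_termination = false so the volume survives the server. A sketch consistent with the plan; the volume index is taken from the resources above, while the port reference is a hypothetical stand-in for however the testbed attaches its network:

    # Sketch of the manager as planned: 4 vCPU / 16 GB flavor, config
    # drive enabled, booting from the 80 GB base volume created above.
    resource "openstack_compute_instance_v2" "manager_server" {
      name              = "testbed-manager"
      flavor_name       = "OSISM-4V-16"
      key_pair          = "testbed"
      availability_zone = "nova"
      config_drive      = true
      power_state       = "active"

      block_device {
        uuid                  = openstack_blockstorage_volume_v3.manager_base_volume[0].id
        source_type           = "volume" # boot from the pre-imaged volume
        destination_type      = "volume"
        boot_index            = 0
        delete_on_termination = false # keep the volume if the server goes away
      }

      network {
        port = openstack_networking_port_v2.manager_port.id # hypothetical port resource
      }
    }

The node servers that follow repeat this pattern with flavor OSISM-8V-32 and a fixed user_data payload (rendered in the plan as a 40-character digest).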
2025-05-31 19:14:53.620806 | orchestrator | 19:14:53.620 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be created
2025-05-31 19:14:53.620860 | orchestrator | 19:14:53.620 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" {
2025-05-31 19:14:53.620906 | orchestrator | 19:14:53.620 STDOUT terraform:  + access_ip_v4 = (known after apply)
2025-05-31 19:14:53.620950 | orchestrator | 19:14:53.620 STDOUT terraform:  + access_ip_v6 = (known after apply)
2025-05-31 19:14:53.620995 | orchestrator | 19:14:53.620 STDOUT terraform:  + all_metadata = (known after apply)
2025-05-31 19:14:53.621043 | orchestrator | 19:14:53.620 STDOUT terraform:  + all_tags = (known after apply)
2025-05-31 19:14:53.621068 | orchestrator | 19:14:53.621 STDOUT terraform:  + availability_zone = "nova"
2025-05-31 19:14:53.621092 | orchestrator | 19:14:53.621 STDOUT terraform:  + config_drive = true
2025-05-31 19:14:53.621136 | orchestrator | 19:14:53.621 STDOUT terraform:  + created = (known after apply)
2025-05-31 19:14:53.621183 | orchestrator | 19:14:53.621 STDOUT terraform:  + flavor_id = (known after apply)
2025-05-31 19:14:53.621221 | orchestrator | 19:14:53.621 STDOUT terraform:  + flavor_name = "OSISM-8V-32"
2025-05-31 19:14:53.621254 | orchestrator | 19:14:53.621 STDOUT terraform:  + force_delete = false
2025-05-31 19:14:53.621298 | orchestrator | 19:14:53.621 STDOUT terraform:  + hypervisor_hostname = (known after apply)
2025-05-31 19:14:53.621345 | orchestrator | 19:14:53.621 STDOUT terraform:  + id = (known after apply)
2025-05-31 19:14:53.621392 | orchestrator | 19:14:53.621 STDOUT terraform:  + image_id = (known after apply)
2025-05-31 19:14:53.621436 | orchestrator | 19:14:53.621 STDOUT terraform:  + image_name = (known after apply)
2025-05-31 19:14:53.621469 | orchestrator | 19:14:53.621 STDOUT terraform:  + key_pair = "testbed"
2025-05-31 19:14:53.621508 | orchestrator | 19:14:53.621 STDOUT terraform:  + name = "testbed-node-0"
2025-05-31 19:14:53.621532 | orchestrator | 19:14:53.621 STDOUT terraform:  + power_state = "active"
2025-05-31 19:14:53.621604 | orchestrator | 19:14:53.621 STDOUT terraform:  + region = (known after apply)
2025-05-31 19:14:53.621653 | orchestrator | 19:14:53.621 STDOUT terraform:  + security_groups = (known after apply)
2025-05-31 19:14:53.621678 | orchestrator | 19:14:53.621 STDOUT terraform:  + stop_before_destroy = false
2025-05-31 19:14:53.621722 | orchestrator | 19:14:53.621 STDOUT terraform:  + updated = (known after apply)
2025-05-31 19:14:53.621783 | orchestrator | 19:14:53.621 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
2025-05-31 19:14:53.621794 | orchestrator | 19:14:53.621 STDOUT terraform:  + block_device {
2025-05-31 19:14:53.621829 | orchestrator | 19:14:53.621 STDOUT terraform:  + boot_index = 0
2025-05-31 19:14:53.621863 | orchestrator | 19:14:53.621 STDOUT terraform:  + delete_on_termination = false
2025-05-31 19:14:53.621898 | orchestrator | 19:14:53.621 STDOUT terraform:  + destination_type = "volume"
2025-05-31 19:14:53.621929 | orchestrator | 19:14:53.621 STDOUT terraform:  + multiattach = false
2025-05-31 19:14:53.621966 | orchestrator | 19:14:53.621 STDOUT terraform:  + source_type = "volume"
2025-05-31 19:14:53.622033 | orchestrator | 19:14:53.621 STDOUT terraform:  + uuid = (known after apply)
2025-05-31 19:14:53.622079 | orchestrator | 19:14:53.622 STDOUT terraform:  }
2025-05-31 19:14:53.622089 | orchestrator | 19:14:53.622 STDOUT terraform:  + network {
2025-05-31 19:14:53.622122 | orchestrator | 19:14:53.622 STDOUT terraform:  + access_network = false
2025-05-31 19:14:53.622161 | orchestrator | 19:14:53.622 STDOUT terraform:  + fixed_ip_v4 = (known after apply)
2025-05-31 19:14:53.622199 | orchestrator | 19:14:53.622 STDOUT terraform:  + fixed_ip_v6 = (known after apply)
2025-05-31 19:14:53.622240 | orchestrator | 19:14:53.622 STDOUT terraform:  + mac = (known after apply)
2025-05-31 19:14:53.622279 | orchestrator | 19:14:53.622 STDOUT terraform:  + name = (known after apply)
2025-05-31 19:14:53.622318 | orchestrator | 19:14:53.622 STDOUT terraform:  + port = (known after apply)
2025-05-31 19:14:53.622358 | orchestrator | 19:14:53.622 STDOUT terraform:  + uuid = (known after apply)
2025-05-31 19:14:53.622368 | orchestrator | 19:14:53.622 STDOUT terraform:  }
2025-05-31 19:14:53.622376 | orchestrator | 19:14:53.622 STDOUT terraform:  }
2025-05-31 19:14:53.622437 | orchestrator | 19:14:53.622 STDOUT terraform:  # openstack_compute_instance_v2.node_server[1] will be created
2025-05-31 19:14:53.622488 | orchestrator | 19:14:53.622 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" {
2025-05-31 19:14:53.622530 | orchestrator | 19:14:53.622 STDOUT terraform:  + access_ip_v4 = (known after apply)
2025-05-31 19:14:53.622573 | orchestrator | 19:14:53.622 STDOUT terraform:  + access_ip_v6 = (known after apply)
2025-05-31 19:14:53.622617 | orchestrator | 19:14:53.622 STDOUT terraform:  + all_metadata = (known after apply)
2025-05-31 19:14:53.622671 | orchestrator | 19:14:53.622 STDOUT terraform:  + all_tags = (known after apply)
2025-05-31 19:14:53.622696 | orchestrator | 19:14:53.622 STDOUT terraform:  + availability_zone = "nova"
2025-05-31 19:14:53.622706 | orchestrator | 19:14:53.622 STDOUT terraform:  + config_drive = true
2025-05-31 19:14:53.622758 | orchestrator | 19:14:53.622 STDOUT terraform:  + created = (known after apply)
2025-05-31 19:14:53.622801 | orchestrator | 19:14:53.622 STDOUT terraform:  + flavor_id = (known after apply)
2025-05-31 19:14:53.622835 | orchestrator | 19:14:53.622 STDOUT terraform:  + flavor_name = "OSISM-8V-32"
2025-05-31 19:14:53.622864 | orchestrator | 19:14:53.622 STDOUT terraform:  + force_delete = false
2025-05-31 19:14:53.622905 | orchestrator | 19:14:53.622 STDOUT terraform:  + hypervisor_hostname = (known after apply)
2025-05-31 19:14:53.622947 | orchestrator | 19:14:53.622 STDOUT terraform:  + id = (known after apply)
2025-05-31 19:14:53.622990 | orchestrator | 19:14:53.622 STDOUT terraform:  + image_id = (known after apply)
2025-05-31 19:14:53.623032 | orchestrator | 19:14:53.622 STDOUT terraform:  + image_name = (known after apply)
2025-05-31 19:14:53.623059 | orchestrator | 19:14:53.623 STDOUT terraform:  + key_pair = "testbed"
2025-05-31 19:14:53.623095 | orchestrator | 19:14:53.623 STDOUT terraform:  + name = "testbed-node-1"
2025-05-31 19:14:53.623125 | orchestrator | 19:14:53.623 STDOUT terraform:  + power_state = "active"
2025-05-31 19:14:53.623168 | orchestrator | 19:14:53.623 STDOUT terraform:  + region = (known after apply)
2025-05-31 19:14:53.623211 | orchestrator | 19:14:53.623 STDOUT terraform:  + security_groups = (known after apply)
2025-05-31 19:14:53.623239 | orchestrator | 19:14:53.623 STDOUT terraform:  + stop_before_destroy = false
2025-05-31 19:14:53.623283 | orchestrator | 19:14:53.623 STDOUT terraform:  + updated = (known after apply)
2025-05-31 19:14:53.623342 | orchestrator | 19:14:53.623 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
2025-05-31 19:14:53.623351 | orchestrator | 19:14:53.623 STDOUT terraform:  + block_device {
2025-05-31 19:14:53.623387 | orchestrator | 19:14:53.623 STDOUT terraform:  + boot_index = 0
2025-05-31 19:14:53.623422 | orchestrator | 19:14:53.623 STDOUT terraform:  + delete_on_termination = false
2025-05-31 19:14:53.623457 | orchestrator | 19:14:53.623 STDOUT terraform:  + destination_type = "volume"
2025-05-31 19:14:53.623492 | orchestrator | 19:14:53.623 STDOUT terraform:  + multiattach = false
2025-05-31 19:14:53.623537 | orchestrator | 19:14:53.623 STDOUT terraform:  + source_type = "volume"
2025-05-31 19:14:53.623576 | orchestrator | 19:14:53.623 STDOUT terraform:  + uuid = (known after apply)
2025-05-31 19:14:53.623599 | orchestrator | 19:14:53.623 STDOUT terraform:  }
2025-05-31 19:14:53.623634 | orchestrator | 19:14:53.623 STDOUT terraform:  + network {
2025-05-31 19:14:53.623643 | orchestrator | 19:14:53.623 STDOUT terraform:  + access_network = false
2025-05-31 19:14:53.623686 | orchestrator | 19:14:53.623 STDOUT terraform:  + fixed_ip_v4 = (known after apply)
2025-05-31 19:14:53.623725 | orchestrator | 19:14:53.623 STDOUT terraform:  + fixed_ip_v6 = (known after apply)
2025-05-31 19:14:53.623764 | orchestrator | 19:14:53.623 STDOUT terraform:  + mac = (known after apply)
2025-05-31 19:14:53.623803 | orchestrator | 19:14:53.623 STDOUT terraform:  + name = (known after apply)
2025-05-31 19:14:53.623842 | orchestrator | 19:14:53.623 STDOUT terraform:  + port = (known after apply)
2025-05-31 19:14:53.623881 | orchestrator | 19:14:53.623 STDOUT terraform:  + uuid = (known after apply)
2025-05-31 19:14:53.623890 | orchestrator | 19:14:53.623 STDOUT terraform:  }
2025-05-31 19:14:53.623898 | orchestrator | 19:14:53.623 STDOUT terraform:  }
2025-05-31 19:14:53.623959 | orchestrator | 19:14:53.623 STDOUT terraform:  # openstack_compute_instance_v2.node_server[2] will be created
2025-05-31 19:14:53.624009 | orchestrator | 19:14:53.623 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" {
2025-05-31 19:14:53.624052 | orchestrator | 19:14:53.623 STDOUT terraform:  + access_ip_v4 = (known after apply)
2025-05-31 19:14:53.624096 | orchestrator | 19:14:53.624 STDOUT terraform:  + access_ip_v6 = (known after apply)
2025-05-31 19:14:53.624137 | orchestrator | 19:14:53.624 STDOUT terraform:  + all_metadata = (known after apply)
2025-05-31 19:14:53.624180 | orchestrator | 19:14:53.624 STDOUT terraform:  + all_tags = (known after apply)
2025-05-31 19:14:53.624209 | orchestrator | 19:14:53.624 STDOUT terraform:  + availability_zone = "nova"
2025-05-31 19:14:53.624231 | orchestrator | 19:14:53.624 STDOUT terraform:  + config_drive = true
2025-05-31 19:14:53.624274 | orchestrator | 19:14:53.624 STDOUT terraform:  + created = (known after apply)
2025-05-31 19:14:53.624317 | orchestrator | 19:14:53.624 STDOUT terraform:  + flavor_id = (known after apply)
2025-05-31 19:14:53.624355 | orchestrator | 19:14:53.624 STDOUT terraform:  + flavor_name = "OSISM-8V-32"
2025-05-31 19:14:53.624384 | orchestrator | 19:14:53.624 STDOUT terraform:  + force_delete = false
2025-05-31 19:14:53.624426 | orchestrator | 19:14:53.624 STDOUT terraform:  + hypervisor_hostname = (known after apply)
2025-05-31 19:14:53.624470 | orchestrator | 19:14:53.624 STDOUT terraform:  + id = (known after apply)
2025-05-31 19:14:53.624511 | orchestrator | 19:14:53.624 STDOUT terraform:  + image_id = (known after apply)
2025-05-31 19:14:53.624553 | orchestrator | 19:14:53.624 STDOUT terraform:  + image_name = (known after apply)
2025-05-31 19:14:53.624610 | orchestrator | 19:14:53.624 STDOUT terraform:  + key_pair = "testbed"
2025-05-31 19:14:53.624649 | orchestrator | 19:14:53.624 STDOUT terraform:  + name = "testbed-node-2"
2025-05-31 19:14:53.624680 | orchestrator | 19:14:53.624 STDOUT terraform:  + power_state = "active"
2025-05-31 19:14:53.624722 | orchestrator | 19:14:53.624 STDOUT terraform:  + region = (known after apply)
2025-05-31 19:14:53.624765 | orchestrator | 19:14:53.624 STDOUT terraform:  + security_groups = (known after apply)
2025-05-31 19:14:53.624787 | orchestrator | 19:14:53.624 STDOUT terraform:  + stop_before_destroy = false
2025-05-31 19:14:53.624831 | orchestrator | 19:14:53.624 STDOUT terraform:  + updated = (known after apply)
2025-05-31 19:14:53.624890 | orchestrator | 19:14:53.624 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"
2025-05-31 19:14:53.624899 | orchestrator | 19:14:53.624 STDOUT terraform:  + block_device {
2025-05-31 19:14:53.624933 | orchestrator | 19:14:53.624 STDOUT terraform:  + boot_index = 0
2025-05-31 19:14:53.624965 | orchestrator | 19:14:53.624 STDOUT terraform:  + delete_on_termination = false
2025-05-31 19:14:53.625001 | orchestrator | 19:14:53.624 STDOUT terraform:  + destination_type = "volume"
2025-05-31 19:14:53.625038 | orchestrator | 19:14:53.624 STDOUT terraform:  + multiattach = false
2025-05-31 19:14:53.625074 | orchestrator | 19:14:53.625 STDOUT terraform:  + source_type = "volume"
2025-05-31 19:14:53.625125 | orchestrator | 19:14:53.625 STDOUT terraform:  + uuid = (known after apply)
2025-05-31 19:14:53.625135 | orchestrator | 19:14:53.625 STDOUT terraform:  }
2025-05-31 19:14:53.625157 | orchestrator | 19:14:53.625 STDOUT terraform:  + network {
2025-05-31 19:14:53.625180 | orchestrator | 19:14:53.625 STDOUT terraform:  + access_network = false
2025-05-31 19:14:53.625214 | orchestrator | 19:14:53.625 STDOUT terraform:  + fixed_ip_v4 = (known after apply)
2025-05-31 19:14:53.625247 | orchestrator | 19:14:53.625 STDOUT terraform:  + fixed_ip_v6 = (known after apply)
2025-05-31 19:14:53.625281 | orchestrator | 19:14:53.625 STDOUT terraform:  + mac = (known after apply)
2025-05-31 19:14:53.625317 | orchestrator | 19:14:53.625 STDOUT terraform:  + name = (known after apply)
2025-05-31 19:14:53.625351 | orchestrator | 19:14:53.625 STDOUT terraform:  + port = (known after apply)
2025-05-31 19:14:53.625386 | orchestrator | 19:14:53.625 STDOUT terraform:  + uuid = (known after apply)
2025-05-31 19:14:53.625394 | orchestrator | 19:14:53.625 STDOUT terraform:  }
2025-05-31 19:14:53.625412 | orchestrator | 19:14:53.625 STDOUT terraform:  }
2025-05-31 19:14:53.625456 | orchestrator | 19:14:53.625 STDOUT terraform:  # openstack_compute_instance_v2.node_server[3] will be created
2025-05-31 19:14:53.625504 | orchestrator | 19:14:53.625 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" {
2025-05-31 19:14:53.625541 | orchestrator | 19:14:53.625 STDOUT terraform:  + access_ip_v4 = (known after apply)
2025-05-31 19:14:53.625592 | orchestrator | 19:14:53.625 STDOUT terraform:  + access_ip_v6 = (known after apply)
2025-05-31 19:14:53.625645 | orchestrator | 19:14:53.625 STDOUT terraform:  + all_metadata = (known after apply)
2025-05-31 19:14:53.625669 | orchestrator | 19:14:53.625 STDOUT terraform:  + all_tags = (known after apply)
2025-05-31 19:14:53.625698 | orchestrator | 19:14:53.625 STDOUT terraform:  + availability_zone = "nova"
2025-05-31 19:14:53.625720 | orchestrator | 19:14:53.625 STDOUT terraform:  + config_drive = true
2025-05-31 19:14:53.625760 | orchestrator | 19:14:53.625 STDOUT terraform:  + created = (known after apply)
2025-05-31 19:14:53.625799 | orchestrator | 19:14:53.625 STDOUT terraform:  + flavor_id = (known after apply)
2025-05-31 19:14:53.625843 | orchestrator | 19:14:53.625 STDOUT terraform:  + flavor_name = "OSISM-8V-32"
2025-05-31 19:14:53.625851 | orchestrator | 19:14:53.625 STDOUT terraform:  + force_delete = false
2025-05-31 19:14:53.625896 | orchestrator | 19:14:53.625 STDOUT terraform:  + hypervisor_hostname = (known after apply)
2025-05-31 19:14:53.625934 | orchestrator | 19:14:53.625 STDOUT terraform:  + id = (known after apply)
2025-05-31 19:14:53.625972 | orchestrator | 19:14:53.625 STDOUT terraform:  + image_id = (known after apply)
2025-05-31 19:14:53.626011 | orchestrator | 19:14:53.625 STDOUT terraform:  + image_name = (known after apply)
2025-05-31 19:14:53.626055 |
orchestrator | 19:14:53.626 STDOUT terraform:  + key_pair = "testbed" 2025-05-31 19:14:53.626085 | orchestrator | 19:14:53.626 STDOUT terraform:  + name = "testbed-node-3" 2025-05-31 19:14:53.626112 | orchestrator | 19:14:53.626 STDOUT terraform:  + power_state = "active" 2025-05-31 19:14:53.626152 | orchestrator | 19:14:53.626 STDOUT terraform:  + region = (known after apply) 2025-05-31 19:14:53.626192 | orchestrator | 19:14:53.626 STDOUT terraform:  + security_groups = (known after apply) 2025-05-31 19:14:53.626229 | orchestrator | 19:14:53.626 STDOUT terraform:  + stop_before_destroy = false 2025-05-31 19:14:53.626261 | orchestrator | 19:14:53.626 STDOUT terraform:  + updated = (known after apply) 2025-05-31 19:14:53.626314 | orchestrator | 19:14:53.626 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-05-31 19:14:53.626322 | orchestrator | 19:14:53.626 STDOUT terraform:  + block_device { 2025-05-31 19:14:53.626354 | orchestrator | 19:14:53.626 STDOUT terraform:  + boot_index = 0 2025-05-31 19:14:53.626385 | orchestrator | 19:14:53.626 STDOUT terraform:  + delete_on_termination = false 2025-05-31 19:14:53.626418 | orchestrator | 19:14:53.626 STDOUT terraform:  + destination_type = "volume" 2025-05-31 19:14:53.626450 | orchestrator | 19:14:53.626 STDOUT terraform:  + multiattach = false 2025-05-31 19:14:53.626485 | orchestrator | 19:14:53.626 STDOUT terraform:  + source_type = "volume" 2025-05-31 19:14:53.626527 | orchestrator | 19:14:53.626 STDOUT terraform:  + uuid = (known after apply) 2025-05-31 19:14:53.626535 | orchestrator | 19:14:53.626 STDOUT terraform:  } 2025-05-31 19:14:53.626555 | orchestrator | 19:14:53.626 STDOUT terraform:  + network { 2025-05-31 19:14:53.626613 | orchestrator | 19:14:53.626 STDOUT terraform:  + access_network = false 2025-05-31 19:14:53.626635 | orchestrator | 19:14:53.626 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-05-31 19:14:53.626671 | orchestrator | 19:14:53.626 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-05-31 19:14:53.626705 | orchestrator | 19:14:53.626 STDOUT terraform:  + mac = (known after apply) 2025-05-31 19:14:53.626739 | orchestrator | 19:14:53.626 STDOUT terraform:  + name = (known after apply) 2025-05-31 19:14:53.626774 | orchestrator | 19:14:53.626 STDOUT terraform:  + port = (known after apply) 2025-05-31 19:14:53.626808 | orchestrator | 19:14:53.626 STDOUT terraform:  + uuid = (known after apply) 2025-05-31 19:14:53.626816 | orchestrator | 19:14:53.626 STDOUT terraform:  } 2025-05-31 19:14:53.626823 | orchestrator | 19:14:53.626 STDOUT terraform:  } 2025-05-31 19:14:53.626876 | orchestrator | 19:14:53.626 STDOUT terraform:  # openstack_compute_instance_v2.node_server[4] will be created 2025-05-31 19:14:53.626920 | orchestrator | 19:14:53.626 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-05-31 19:14:53.626958 | orchestrator | 19:14:53.626 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-31 19:14:53.626994 | orchestrator | 19:14:53.626 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-05-31 19:14:53.627029 | orchestrator | 19:14:53.626 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-31 19:14:53.627066 | orchestrator | 19:14:53.627 STDOUT terraform:  + all_tags = (known after apply) 2025-05-31 19:14:53.627091 | orchestrator | 19:14:53.627 STDOUT terraform:  + availability_zone = "nova" 2025-05-31 19:14:53.627110 | orchestrator | 19:14:53.627 STDOUT terraform:  + config_drive = true 2025-05-31 
19:14:53.627146 | orchestrator | 19:14:53.627 STDOUT terraform:  + created = (known after apply) 2025-05-31 19:14:53.627181 | orchestrator | 19:14:53.627 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-31 19:14:53.627211 | orchestrator | 19:14:53.627 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-05-31 19:14:53.627236 | orchestrator | 19:14:53.627 STDOUT terraform:  + force_delete = false 2025-05-31 19:14:53.627272 | orchestrator | 19:14:53.627 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-05-31 19:14:53.627309 | orchestrator | 19:14:53.627 STDOUT terraform:  + id = (known after apply) 2025-05-31 19:14:53.627344 | orchestrator | 19:14:53.627 STDOUT terraform:  + image_id = (known after apply) 2025-05-31 19:14:53.627379 | orchestrator | 19:14:53.627 STDOUT terraform:  + image_name = (known after apply) 2025-05-31 19:14:53.627404 | orchestrator | 19:14:53.627 STDOUT terraform:  + key_pair = "testbed" 2025-05-31 19:14:53.627435 | orchestrator | 19:14:53.627 STDOUT terraform:  + name = "testbed-node-4" 2025-05-31 19:14:53.627461 | orchestrator | 19:14:53.627 STDOUT terraform:  + power_state = "active" 2025-05-31 19:14:53.627496 | orchestrator | 19:14:53.627 STDOUT terraform:  + region = (known after apply) 2025-05-31 19:14:53.627533 | orchestrator | 19:14:53.627 STDOUT terraform:  + security_groups = (known after apply) 2025-05-31 19:14:53.627553 | orchestrator | 19:14:53.627 STDOUT terraform:  + stop_before_destroy = false 2025-05-31 19:14:53.627600 | orchestrator | 19:14:53.627 STDOUT terraform:  + updated = (known after apply) 2025-05-31 19:14:53.627646 | orchestrator | 19:14:53.627 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-05-31 19:14:53.627654 | orchestrator | 19:14:53.627 STDOUT terraform:  + block_device { 2025-05-31 19:14:53.627683 | orchestrator | 19:14:53.627 STDOUT terraform:  + boot_index = 0 2025-05-31 19:14:53.627711 | orchestrator | 19:14:53.627 STDOUT terraform:  + delete_on_termination = false 2025-05-31 19:14:53.627741 | orchestrator | 19:14:53.627 STDOUT terraform:  + destination_type = "volume" 2025-05-31 19:14:53.627774 | orchestrator | 19:14:53.627 STDOUT terraform:  + multiattach = false 2025-05-31 19:14:53.627801 | orchestrator | 19:14:53.627 STDOUT terraform:  + source_type = "volume" 2025-05-31 19:14:53.627841 | orchestrator | 19:14:53.627 STDOUT terraform:  + uuid = (known after apply) 2025-05-31 19:14:53.627849 | orchestrator | 19:14:53.627 STDOUT terraform:  } 2025-05-31 19:14:53.627855 | orchestrator | 19:14:53.627 STDOUT terraform:  + network { 2025-05-31 19:14:53.627882 | orchestrator | 19:14:53.627 STDOUT terraform:  + access_network = false 2025-05-31 19:14:53.627913 | orchestrator | 19:14:53.627 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-05-31 19:14:53.627943 | orchestrator | 19:14:53.627 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-05-31 19:14:53.627975 | orchestrator | 19:14:53.627 STDOUT terraform:  + mac = (known after apply) 2025-05-31 19:14:53.628006 | orchestrator | 19:14:53.627 STDOUT terraform:  + name = (known after apply) 2025-05-31 19:14:53.628039 | orchestrator | 19:14:53.627 STDOUT terraform:  + port = (known after apply) 2025-05-31 19:14:53.628072 | orchestrator | 19:14:53.628 STDOUT terraform:  + uuid = (known after apply) 2025-05-31 19:14:53.628084 | orchestrator | 19:14:53.628 STDOUT terraform:  } 2025-05-31 19:14:53.628089 | orchestrator | 19:14:53.628 STDOUT terraform:  } 2025-05-31 19:14:53.628134 | orchestrator | 19:14:53.628 
STDOUT terraform:  # openstack_compute_instance_v2.node_server[5] will be created 2025-05-31 19:14:53.628175 | orchestrator | 19:14:53.628 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-05-31 19:14:53.628210 | orchestrator | 19:14:53.628 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-31 19:14:53.628244 | orchestrator | 19:14:53.628 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-05-31 19:14:53.628280 | orchestrator | 19:14:53.628 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-31 19:14:53.628317 | orchestrator | 19:14:53.628 STDOUT terraform:  + all_tags = (known after apply) 2025-05-31 19:14:53.628342 | orchestrator | 19:14:53.628 STDOUT terraform:  + availability_zone = "nova" 2025-05-31 19:14:53.628361 | orchestrator | 19:14:53.628 STDOUT terraform:  + config_drive = true 2025-05-31 19:14:53.628395 | orchestrator | 19:14:53.628 STDOUT terraform:  + created = (known after apply) 2025-05-31 19:14:53.628430 | orchestrator | 19:14:53.628 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-31 19:14:53.628460 | orchestrator | 19:14:53.628 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-05-31 19:14:53.628479 | orchestrator | 19:14:53.628 STDOUT terraform:  + force_delete = false 2025-05-31 19:14:53.628515 | orchestrator | 19:14:53.628 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-05-31 19:14:53.628551 | orchestrator | 19:14:53.628 STDOUT terraform:  + id = (known after apply) 2025-05-31 19:14:53.628601 | orchestrator | 19:14:53.628 STDOUT terraform:  + image_id = (known after apply) 2025-05-31 19:14:53.628644 | orchestrator | 19:14:53.628 STDOUT terraform:  + image_name = (known after apply) 2025-05-31 19:14:53.628665 | orchestrator | 19:14:53.628 STDOUT terraform:  + key_pair = "testbed" 2025-05-31 19:14:53.628696 | orchestrator | 19:14:53.628 STDOUT terraform:  + name = "testbed-node-5" 2025-05-31 19:14:53.628723 | orchestrator | 19:14:53.628 STDOUT terraform:  + power_state = "active" 2025-05-31 19:14:53.628759 | orchestrator | 19:14:53.628 STDOUT terraform:  + region = (known after apply) 2025-05-31 19:14:53.628794 | orchestrator | 19:14:53.628 STDOUT terraform:  + security_groups = (known after apply) 2025-05-31 19:14:53.628813 | orchestrator | 19:14:53.628 STDOUT terraform:  + stop_before_destroy = false 2025-05-31 19:14:53.628848 | orchestrator | 19:14:53.628 STDOUT terraform:  + updated = (known after apply) 2025-05-31 19:14:53.628899 | orchestrator | 19:14:53.628 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-05-31 19:14:53.628906 | orchestrator | 19:14:53.628 STDOUT terraform:  + block_device { 2025-05-31 19:14:53.628936 | orchestrator | 19:14:53.628 STDOUT terraform:  + boot_index = 0 2025-05-31 19:14:53.628964 | orchestrator | 19:14:53.628 STDOUT terraform:  + delete_on_termination = false 2025-05-31 19:14:53.628993 | orchestrator | 19:14:53.628 STDOUT terraform:  + destination_type = "volume" 2025-05-31 19:14:53.629017 | orchestrator | 19:14:53.628 STDOUT terraform:  + multiattach = false 2025-05-31 19:14:53.629048 | orchestrator | 19:14:53.629 STDOUT terraform:  + source_type = "volume" 2025-05-31 19:14:53.629086 | orchestrator | 19:14:53.629 STDOUT terraform:  + uuid = (known after apply) 2025-05-31 19:14:53.629094 | orchestrator | 19:14:53.629 STDOUT terraform:  } 2025-05-31 19:14:53.629100 | orchestrator | 19:14:53.629 STDOUT terraform:  + network { 2025-05-31 19:14:53.629129 | orchestrator | 19:14:53.629 STDOUT terraform:  + 
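Six servers differing only in their index are what a count-based resource yields in a plan. A minimal HCL sketch that would produce a plan of this shape (the variable, user-data file, and volume resource names are hypothetical, not the actual testbed sources):

    variable "number_of_nodes" {            # hypothetical name
      default = 6
    }

    resource "openstack_compute_instance_v2" "node_server" {
      count             = var.number_of_nodes
      name              = "testbed-node-${count.index}"
      availability_zone = "nova"
      flavor_name       = "OSISM-8V-32"
      key_pair          = "testbed"
      config_drive      = true
      power_state       = "active"
      user_data         = file("user_data.yml")   # hypothetical file; appears hashed in the plan

      block_device {
        # boot from an existing volume and keep it when the instance is deleted
        uuid                  = openstack_blockstorage_volume_v3.node_volume[count.index].id   # hypothetical volume resource
        source_type           = "volume"
        destination_type      = "volume"
        boot_index            = 0
        delete_on_termination = false
      }

      network {
        port = openstack_networking_port_v2.node_port_management[count.index].id
      }
    }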
  # openstack_compute_keypair_v2.key will be created
  + resource "openstack_compute_keypair_v2" "key" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + name        = "testbed"
      + private_key = (sensitive value)
      + public_key  = (known after apply)
      + region      = (known after apply)
      + user_id     = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[1] through [8] will be created
  # (identical to node_volume_attachment[0] above)
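Both block types above follow from two small resources. A sketch, under the assumption that nine data volumes are spread across the six nodes (the volume resources themselves and the volume-to-node mapping are not visible in this part of the log):

    resource "openstack_compute_keypair_v2" "key" {
      name = "testbed"
      # no public_key is supplied, so the provider generates the pair;
      # that is why private_key shows as (sensitive value) in the plan
    }

    resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      count       = 9                                                                # matches the nine planned attachments
      instance_id = openstack_compute_instance_v2.node_server[count.index % 6].id    # hypothetical mapping of volumes to nodes
      volume_id   = openstack_blockstorage_volume_v3.data_volume[count.index].id     # hypothetical volume resource
    }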
  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }
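This pair is the standard pattern for exposing a single port (here, the manager's management port) via a floating IP from the "public" pool. A minimal sketch:

    resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      pool = "public"
    }

    resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      floating_ip = openstack_networking_floatingip_v2.manager_floating_ip.address
      port_id     = openstack_networking_port_v2.manager_port_management.id
    }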
  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)
      + segments                  (known after apply)
    }
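The management network itself needs almost no configuration; everything else is computed by Neutron. A minimal sketch (the management subnet, referenced by the ports below, is not shown in this part of the log):

    resource "openstack_networking_network_v2" "net_management" {
      name                    = "net-testbed-management"
      availability_zone_hints = ["nova"]
    }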
  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] through [5] will be created
  # (identical to node_port_management[0] above except for fixed_ip ip_address = "192.168.16.11" ... "192.168.16.15")
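The node ports pin one management address per node (192.168.16.10 through .15) and whitelist the shared prefixes and virtual addresses via allowed_address_pairs so they pass Neutron port security. A sketch of the count-based port definition (the subnet resource name is hypothetical):

    resource "openstack_networking_port_v2" "node_port_management" {
      count      = 6
      network_id = openstack_networking_network_v2.net_management.id

      fixed_ip {
        subnet_id  = openstack_networking_subnet_v2.subnet_management.id   # hypothetical subnet resource name
        ip_address = "192.168.16.${10 + count.index}"
      }

      # let traffic for the routed prefix and the shared/virtual IPs through port security
      allowed_address_pairs { ip_address = "192.168.112.0/20" }
      allowed_address_pairs { ip_address = "192.168.16.254/20" }
      allowed_address_pairs { ip_address = "192.168.16.8/20" }
      allowed_address_pairs { ip_address = "192.168.16.9/20" }
    }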
  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)
      + external_fixed_ip         (known after apply)
    }
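The router uplinks the management subnet to the external network whose ID appears in the plan. A minimal sketch (again with the hypothetical subnet resource name):

    resource "openstack_networking_router_v2" "router" {
      name                    = "testbed"
      external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"   # the external/public network's ID
      availability_zone_hints = ["nova"]
    }

    resource "openstack_networking_router_interface_v2" "router_interface" {
      router_id = openstack_networking_router_v2.router.id
      subnet_id = openstack_networking_subnet_v2.subnet_management.id    # hypothetical subnet resource name
    }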
STDOUT terraform:  + all_tags = (known after apply) 2025-05-31 19:14:53.642306 | orchestrator | 19:14:53.640 STDOUT terraform:  + availability_zone_hints = [ 2025-05-31 19:14:53.642309 | orchestrator | 19:14:53.640 STDOUT terraform:  + "nova", 2025-05-31 19:14:53.642315 | orchestrator | 19:14:53.641 STDOUT terraform:  ] 2025-05-31 19:14:53.642318 | orchestrator | 19:14:53.641 STDOUT terraform:  + distributed = (known after apply) 2025-05-31 19:14:53.642322 | orchestrator | 19:14:53.641 STDOUT terraform:  + enable_snat = (known after apply) 2025-05-31 19:14:53.642326 | orchestrator | 19:14:53.641 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2025-05-31 19:14:53.642330 | orchestrator | 19:14:53.641 STDOUT terraform:  + id = (known after apply) 2025-05-31 19:14:53.642333 | orchestrator | 19:14:53.641 STDOUT terraform:  + name = "testbed" 2025-05-31 19:14:53.642345 | orchestrator | 19:14:53.641 STDOUT terraform:  + region = (known after apply) 2025-05-31 19:14:53.642349 | orchestrator | 19:14:53.641 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-31 19:14:53.642352 | orchestrator | 19:14:53.641 STDOUT terraform:  + external_fixed_ip (known after apply) 2025-05-31 19:14:53.642356 | orchestrator | 19:14:53.641 STDOUT terraform:  } 2025-05-31 19:14:53.642360 | orchestrator | 19:14:53.641 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2025-05-31 19:14:53.642364 | orchestrator | 19:14:53.641 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2025-05-31 19:14:53.642368 | orchestrator | 19:14:53.641 STDOUT terraform:  + description = "ssh" 2025-05-31 19:14:53.642371 | orchestrator | 19:14:53.641 STDOUT terraform:  + direction = "ingress" 2025-05-31 19:14:53.642375 | orchestrator | 19:14:53.641 STDOUT terraform:  + ethertype = "IPv4" 2025-05-31 19:14:53.642379 | orchestrator | 19:14:53.641 STDOUT terraform:  + id = (known after apply) 2025-05-31 19:14:53.642383 | orchestrator | 19:14:53.641 STDOUT terraform:  + port_range_max = 22 2025-05-31 19:14:53.642386 | orchestrator | 19:14:53.641 STDOUT terraform:  + port_range_min = 22 2025-05-31 19:14:53.642390 | orchestrator | 19:14:53.641 STDOUT terraform:  + protocol = "tcp" 2025-05-31 19:14:53.642394 | orchestrator | 19:14:53.641 STDOUT terraform:  + region = (known after apply) 2025-05-31 19:14:53.642397 | orchestrator | 19:14:53.641 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-31 19:14:53.642405 | orchestrator | 19:14:53.641 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-31 19:14:53.642409 | orchestrator | 19:14:53.641 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-31 19:14:53.642412 | orchestrator | 19:14:53.641 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-31 19:14:53.642416 | orchestrator | 19:14:53.641 STDOUT terraform:  } 2025-05-31 19:14:53.642420 | orchestrator | 19:14:53.641 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-05-31 19:14:53.642424 | orchestrator | 19:14:53.641 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-05-31 19:14:53.642427 | orchestrator | 19:14:53.641 STDOUT terraform:  + description = "wireguard" 2025-05-31 19:14:53.642431 | orchestrator | 19:14:53.641 STDOUT terraform:  + direction = "ingress" 2025-05-31 19:14:53.642435 | orchestrator | 19:14:53.641 STDOUT terraform:  
+ ethertype = "IPv4" 2025-05-31 19:14:53.642438 | orchestrator | 19:14:53.641 STDOUT terraform:  + id = (known after apply) 2025-05-31 19:14:53.642442 | orchestrator | 19:14:53.641 STDOUT terraform:  + port_range_max = 51820 2025-05-31 19:14:53.642446 | orchestrator | 19:14:53.641 STDOUT terraform:  + port_range_min = 51820 2025-05-31 19:14:53.642450 | orchestrator | 19:14:53.641 STDOUT terraform:  + protocol = "udp" 2025-05-31 19:14:53.642453 | orchestrator | 19:14:53.641 STDOUT terraform:  + region = (known after apply) 2025-05-31 19:14:53.642457 | orchestrator | 19:14:53.641 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-31 19:14:53.642461 | orchestrator | 19:14:53.641 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-31 19:14:53.642465 | orchestrator | 19:14:53.641 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-31 19:14:53.642468 | orchestrator | 19:14:53.641 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-31 19:14:53.642475 | orchestrator | 19:14:53.642 STDOUT terraform:  } 2025-05-31 19:14:53.642479 | orchestrator | 19:14:53.642 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-05-31 19:14:53.642483 | orchestrator | 19:14:53.642 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-05-31 19:14:53.642487 | orchestrator | 19:14:53.642 STDOUT terraform:  + direction = "ingress" 2025-05-31 19:14:53.642495 | orchestrator | 19:14:53.642 STDOUT terraform:  + ethertype = "IPv4" 2025-05-31 19:14:53.642501 | orchestrator | 19:14:53.642 STDOUT terraform:  + id = (known after apply) 2025-05-31 19:14:53.642505 | orchestrator | 19:14:53.642 STDOUT terraform:  + protocol = "tcp" 2025-05-31 19:14:53.642508 | orchestrator | 19:14:53.642 STDOUT terraform:  + region = (known after apply) 2025-05-31 19:14:53.642512 | orchestrator | 19:14:53.642 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-31 19:14:53.642516 | orchestrator | 19:14:53.642 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-05-31 19:14:53.642520 | orchestrator | 19:14:53.642 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-31 19:14:53.642524 | orchestrator | 19:14:53.642 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-31 19:14:53.642530 | orchestrator | 19:14:53.642 STDOUT terraform:  } 2025-05-31 19:14:53.642534 | orchestrator | 19:14:53.642 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-05-31 19:14:53.642538 | orchestrator | 19:14:53.642 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-05-31 19:14:53.642542 | orchestrator | 19:14:53.642 STDOUT terraform:  + direction = "ingress" 2025-05-31 19:14:53.642547 | orchestrator | 19:14:53.642 STDOUT terraform:  + ethertype = "IPv4" 2025-05-31 19:14:53.642551 | orchestrator | 19:14:53.642 STDOUT terraform:  + id = (known after apply) 2025-05-31 19:14:53.642555 | orchestrator | 19:14:53.642 STDOUT terraform:  + protocol = "udp" 2025-05-31 19:14:53.642606 | orchestrator | 19:14:53.642 STDOUT terraform:  + region = (known after apply) 2025-05-31 19:14:53.642663 | orchestrator | 19:14:53.642 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-31 19:14:53.642689 | orchestrator | 19:14:53.642 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-05-31 19:14:53.642719 | orchestrator | 19:14:53.642 STDOUT 
terraform:  + security_group_id = (known after apply) 2025-05-31 19:14:53.642750 | orchestrator | 19:14:53.642 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-31 19:14:53.642756 | orchestrator | 19:14:53.642 STDOUT terraform:  } 2025-05-31 19:14:53.642821 | orchestrator | 19:14:53.642 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-05-31 19:14:53.642867 | orchestrator | 19:14:53.642 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-05-31 19:14:53.642892 | orchestrator | 19:14:53.642 STDOUT terraform:  + direction = "ingress" 2025-05-31 19:14:53.642915 | orchestrator | 19:14:53.642 STDOUT terraform:  + ethertype = "IPv4" 2025-05-31 19:14:53.642947 | orchestrator | 19:14:53.642 STDOUT terraform:  + id = (known after apply) 2025-05-31 19:14:53.642970 | orchestrator | 19:14:53.642 STDOUT terraform:  + protocol = "icmp" 2025-05-31 19:14:53.643001 | orchestrator | 19:14:53.642 STDOUT terraform:  + region = (known after apply) 2025-05-31 19:14:53.643031 | orchestrator | 19:14:53.642 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-31 19:14:53.643057 | orchestrator | 19:14:53.643 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-31 19:14:53.643086 | orchestrator | 19:14:53.643 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-31 19:14:53.643117 | orchestrator | 19:14:53.643 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-31 19:14:53.643123 | orchestrator | 19:14:53.643 STDOUT terraform:  } 2025-05-31 19:14:53.643178 | orchestrator | 19:14:53.643 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2025-05-31 19:14:53.643236 | orchestrator | 19:14:53.643 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" { 2025-05-31 19:14:53.643254 | orchestrator | 19:14:53.643 STDOUT terraform:  + direction = "ingress" 2025-05-31 19:14:53.643276 | orchestrator | 19:14:53.643 STDOUT terraform:  + ethertype = "IPv4" 2025-05-31 19:14:53.643307 | orchestrator | 19:14:53.643 STDOUT terraform:  + id = (known after apply) 2025-05-31 19:14:53.643332 | orchestrator | 19:14:53.643 STDOUT terraform:  + protocol = "tcp" 2025-05-31 19:14:53.643361 | orchestrator | 19:14:53.643 STDOUT terraform:  + region = (known after apply) 2025-05-31 19:14:53.643391 | orchestrator | 19:14:53.643 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-31 19:14:53.643417 | orchestrator | 19:14:53.643 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-31 19:14:53.643447 | orchestrator | 19:14:53.643 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-31 19:14:53.643553 | orchestrator | 19:14:53.643 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-31 19:14:53.643559 | orchestrator | 19:14:53.643 STDOUT terraform:  } 2025-05-31 19:14:53.643563 | orchestrator | 19:14:53.643 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created 2025-05-31 19:14:53.643605 | orchestrator | 19:14:53.643 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" { 2025-05-31 19:14:53.643613 | orchestrator | 19:14:53.643 STDOUT terraform:  + direction = "ingress" 2025-05-31 19:14:53.643673 | orchestrator | 19:14:53.643 STDOUT terraform:  + ethertype = "IPv4" 2025-05-31 19:14:53.643678 | orchestrator | 19:14:53.643 STDOUT terraform:  + id = (known 
after apply) 2025-05-31 19:14:53.643684 | orchestrator | 19:14:53.643 STDOUT terraform:  + protocol = "udp" 2025-05-31 19:14:53.643714 | orchestrator | 19:14:53.643 STDOUT terraform:  + region = (known after apply) 2025-05-31 19:14:53.643744 | orchestrator | 19:14:53.643 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-31 19:14:53.643769 | orchestrator | 19:14:53.643 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-31 19:14:53.643801 | orchestrator | 19:14:53.643 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-31 19:14:53.643831 | orchestrator | 19:14:53.643 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-31 19:14:53.643838 | orchestrator | 19:14:53.643 STDOUT terraform:  } 2025-05-31 19:14:53.643891 | orchestrator | 19:14:53.643 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created 2025-05-31 19:14:53.643942 | orchestrator | 19:14:53.643 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" { 2025-05-31 19:14:53.643967 | orchestrator | 19:14:53.643 STDOUT terraform:  + direction = "ingress" 2025-05-31 19:14:53.643989 | orchestrator | 19:14:53.643 STDOUT terraform:  + ethertype = "IPv4" 2025-05-31 19:14:53.644020 | orchestrator | 19:14:53.643 STDOUT terraform:  + id = (known after apply) 2025-05-31 19:14:53.644038 | orchestrator | 19:14:53.644 STDOUT terraform:  + protocol = "icmp" 2025-05-31 19:14:53.644070 | orchestrator | 19:14:53.644 STDOUT terraform:  + region = (known after apply) 2025-05-31 19:14:53.644101 | orchestrator | 19:14:53.644 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-31 19:14:53.644126 | orchestrator | 19:14:53.644 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-31 19:14:53.644159 | orchestrator | 19:14:53.644 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-31 19:14:53.644190 | orchestrator | 19:14:53.644 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-31 19:14:53.644196 | orchestrator | 19:14:53.644 STDOUT terraform:  } 2025-05-31 19:14:53.644249 | orchestrator | 19:14:53.644 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created 2025-05-31 19:14:53.644300 | orchestrator | 19:14:53.644 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" { 2025-05-31 19:14:53.644317 | orchestrator | 19:14:53.644 STDOUT terraform:  + description = "vrrp" 2025-05-31 19:14:53.644343 | orchestrator | 19:14:53.644 STDOUT terraform:  + direction = "ingress" 2025-05-31 19:14:53.644360 | orchestrator | 19:14:53.644 STDOUT terraform:  + ethertype = "IPv4" 2025-05-31 19:14:53.644393 | orchestrator | 19:14:53.644 STDOUT terraform:  + id = (known after apply) 2025-05-31 19:14:53.644413 | orchestrator | 19:14:53.644 STDOUT terraform:  + protocol = "112" 2025-05-31 19:14:53.644441 | orchestrator | 19:14:53.644 STDOUT terraform:  + region = (known after apply) 2025-05-31 19:14:53.644471 | orchestrator | 19:14:53.644 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-31 19:14:53.644496 | orchestrator | 19:14:53.644 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-31 19:14:53.644527 | orchestrator | 19:14:53.644 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-31 19:14:53.644558 | orchestrator | 19:14:53.644 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-31 19:14:53.644564 | orchestrator | 19:14:53.644 STDOUT terraform:  } 2025-05-31 
19:14:53.644667 | orchestrator | 19:14:53.644 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created 2025-05-31 19:14:53.644694 | orchestrator | 19:14:53.644 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" { 2025-05-31 19:14:53.644724 | orchestrator | 19:14:53.644 STDOUT terraform:  + all_tags = (known after apply) 2025-05-31 19:14:53.644758 | orchestrator | 19:14:53.644 STDOUT terraform:  + description = "management security group" 2025-05-31 19:14:53.644787 | orchestrator | 19:14:53.644 STDOUT terraform:  + id = (known after apply) 2025-05-31 19:14:53.644817 | orchestrator | 19:14:53.644 STDOUT terraform:  + name = "testbed-management" 2025-05-31 19:14:53.644846 | orchestrator | 19:14:53.644 STDOUT terraform:  + region = (known after apply) 2025-05-31 19:14:53.644874 | orchestrator | 19:14:53.644 STDOUT terraform:  + stateful = (known after apply) 2025-05-31 19:14:53.644904 | orchestrator | 19:14:53.644 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-31 19:14:53.644910 | orchestrator | 19:14:53.644 STDOUT terraform:  } 2025-05-31 19:14:53.644959 | orchestrator | 19:14:53.644 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created 2025-05-31 19:14:53.645005 | orchestrator | 19:14:53.644 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" { 2025-05-31 19:14:53.645044 | orchestrator | 19:14:53.645 STDOUT terraform:  + all_tags = (known after apply) 2025-05-31 19:14:53.645074 | orchestrator | 19:14:53.645 STDOUT terraform:  + description = "node security group" 2025-05-31 19:14:53.645103 | orchestrator | 19:14:53.645 STDOUT terraform:  + id = (known after apply) 2025-05-31 19:14:53.645130 | orchestrator | 19:14:53.645 STDOUT terraform:  + name = "testbed-node" 2025-05-31 19:14:53.645159 | orchestrator | 19:14:53.645 STDOUT terraform:  + region = (known after apply) 2025-05-31 19:14:53.645188 | orchestrator | 19:14:53.645 STDOUT terraform:  + stateful = (known after apply) 2025-05-31 19:14:53.645216 | orchestrator | 19:14:53.645 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-31 19:14:53.645222 | orchestrator | 19:14:53.645 STDOUT terraform:  } 2025-05-31 19:14:53.645270 | orchestrator | 19:14:53.645 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created 2025-05-31 19:14:53.645315 | orchestrator | 19:14:53.645 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" { 2025-05-31 19:14:53.645346 | orchestrator | 19:14:53.645 STDOUT terraform:  + all_tags = (known after apply) 2025-05-31 19:14:53.645378 | orchestrator | 19:14:53.645 STDOUT terraform:  + cidr = "192.168.16.0/20" 2025-05-31 19:14:53.645394 | orchestrator | 19:14:53.645 STDOUT terraform:  + dns_nameservers = [ 2025-05-31 19:14:53.645416 | orchestrator | 19:14:53.645 STDOUT terraform:  + "8.8.8.8", 2025-05-31 19:14:53.645422 | orchestrator | 19:14:53.645 STDOUT terraform:  + "9.9.9.9", 2025-05-31 19:14:53.645438 | orchestrator | 19:14:53.645 STDOUT terraform:  ] 2025-05-31 19:14:53.645454 | orchestrator | 19:14:53.645 STDOUT terraform:  + enable_dhcp = true 2025-05-31 19:14:53.645489 | orchestrator | 19:14:53.645 STDOUT terraform:  + gateway_ip = (known after apply) 2025-05-31 19:14:53.645519 | orchestrator | 19:14:53.645 STDOUT terraform:  + id = (known after apply) 2025-05-31 19:14:53.645529 | orchestrator | 19:14:53.645 STDOUT terraform:  + ip_version = 4 2025-05-31 19:14:53.645563 | 
orchestrator | 19:14:53.645 STDOUT terraform:  + ipv6_address_mode = (known after apply) 2025-05-31 19:14:53.645611 | orchestrator | 19:14:53.645 STDOUT terraform:  + ipv6_ra_mode = (known after apply) 2025-05-31 19:14:53.645649 | orchestrator | 19:14:53.645 STDOUT terraform:  + name = "subnet-testbed-management" 2025-05-31 19:14:53.645680 | orchestrator | 19:14:53.645 STDOUT terraform:  + network_id = (known after apply) 2025-05-31 19:14:53.645697 | orchestrator | 19:14:53.645 STDOUT terraform:  + no_gateway = false 2025-05-31 19:14:53.645728 | orchestrator | 19:14:53.645 STDOUT terraform:  + region = (known after apply) 2025-05-31 19:14:53.645758 | orchestrator | 19:14:53.645 STDOUT terraform:  + service_types = (known after apply) 2025-05-31 19:14:53.645789 | orchestrator | 19:14:53.645 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-31 19:14:53.645806 | orchestrator | 19:14:53.645 STDOUT terraform:  + allocation_pool { 2025-05-31 19:14:53.645830 | orchestrator | 19:14:53.645 STDOUT terraform:  + end = "192.168.31.250" 2025-05-31 19:14:53.645854 | orchestrator | 19:14:53.645 STDOUT terraform:  + start = "192.168.31.200" 2025-05-31 19:14:53.645860 | orchestrator | 19:14:53.645 STDOUT terraform:  } 2025-05-31 19:14:53.645876 | orchestrator | 19:14:53.645 STDOUT terraform:  } 2025-05-31 19:14:53.645896 | orchestrator | 19:14:53.645 STDOUT terraform:  # terraform_data.image will be created 2025-05-31 19:14:53.645947 | orchestrator | 19:14:53.645 STDOUT terraform:  + resource "terraform_data" "image" { 2025-05-31 19:14:53.645954 | orchestrator | 19:14:53.645 STDOUT terraform:  + id = (known after apply) 2025-05-31 19:14:53.645958 | orchestrator | 19:14:53.645 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-05-31 19:14:53.645982 | orchestrator | 19:14:53.645 STDOUT terraform:  + output = (known after apply) 2025-05-31 19:14:53.645988 | orchestrator | 19:14:53.645 STDOUT terraform:  } 2025-05-31 19:14:53.646022 | orchestrator | 19:14:53.645 STDOUT terraform:  # terraform_data.image_node will be created 2025-05-31 19:14:53.646063 | orchestrator | 19:14:53.646 STDOUT terraform:  + resource "terraform_data" "image_node" { 2025-05-31 19:14:53.646088 | orchestrator | 19:14:53.646 STDOUT terraform:  + id = (known after apply) 2025-05-31 19:14:53.646110 | orchestrator | 19:14:53.646 STDOUT terraform:  + input = "Ubuntu 24.04" 2025-05-31 19:14:53.646135 | orchestrator | 19:14:53.646 STDOUT terraform:  + output = (known after apply) 2025-05-31 19:14:53.646141 | orchestrator | 19:14:53.646 STDOUT terraform:  } 2025-05-31 19:14:53.646174 | orchestrator | 19:14:53.646 STDOUT terraform: Plan: 64 to add, 0 to change, 0 to destroy. 2025-05-31 19:14:53.646181 | orchestrator | 19:14:53.646 STDOUT terraform: Changes to Outputs: 2025-05-31 19:14:53.646210 | orchestrator | 19:14:53.646 STDOUT terraform:  + manager_address = (sensitive value) 2025-05-31 19:14:53.646237 | orchestrator | 19:14:53.646 STDOUT terraform:  + private_key = (sensitive value) 2025-05-31 19:14:53.885073 | orchestrator | 19:14:53.884 STDOUT terraform: terraform_data.image_node: Creating... 2025-05-31 19:14:53.890759 | orchestrator | 19:14:53.886 STDOUT terraform: terraform_data.image: Creating... 
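
The plan above closes with 64 resources to add and nothing to change or destroy. The HCL behind the node management ports is not shown in the log; a minimal sketch, assuming hypothetical variable names (var.management_network_id, var.management_subnet_id) and inferring the .10 address offset from indices [4] -> 192.168.16.14 and [5] -> 192.168.16.15 in the plan, would read roughly:

  resource "openstack_networking_port_v2" "node_port_management" {
    count      = 6                          # ports [0]..[5] appear in the plan
    network_id = var.management_network_id  # hypothetical variable name
    fixed_ip {
      ip_address = "192.168.16.${10 + count.index}"  # .14 for [4], .15 for [5] as planned
      subnet_id  = var.management_subnet_id          # hypothetical variable name
    }
    # every port whitelists the management ranges and shared VIP addresses from the plan
    allowed_address_pairs {
      ip_address = "192.168.112.0/20"
    }
    allowed_address_pairs {
      ip_address = "192.168.16.254/20"
    }
    allowed_address_pairs {
      ip_address = "192.168.16.8/20"
    }
    allowed_address_pairs {
      ip_address = "192.168.16.9/20"
    }
  }
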
2025-05-31 19:14:53.890817 | orchestrator | 19:14:53.886 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=6f99ee97-ed04-bb63-7f04-36c2e1411dad] 2025-05-31 19:14:53.890830 | orchestrator | 19:14:53.889 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=5035df0e-be31-4b41-ed97-630822b034df] 2025-05-31 19:14:53.903653 | orchestrator | 19:14:53.903 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading... 2025-05-31 19:14:53.905996 | orchestrator | 19:14:53.905 STDOUT terraform: data.openstack_images_image_v2.image: Reading... 2025-05-31 19:14:53.913822 | orchestrator | 19:14:53.913 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2025-05-31 19:14:53.914998 | orchestrator | 19:14:53.914 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating... 2025-05-31 19:14:53.915036 | orchestrator | 19:14:53.914 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2025-05-31 19:14:53.917737 | orchestrator | 19:14:53.917 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2025-05-31 19:14:53.917771 | orchestrator | 19:14:53.917 STDOUT terraform: openstack_compute_keypair_v2.key: Creating... 2025-05-31 19:14:53.917781 | orchestrator | 19:14:53.917 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2025-05-31 19:14:53.917845 | orchestrator | 19:14:53.917 STDOUT terraform: openstack_networking_network_v2.net_management: Creating... 2025-05-31 19:14:53.918498 | orchestrator | 19:14:53.918 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2025-05-31 19:14:54.346762 | orchestrator | 19:14:54.346 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 0s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990] 2025-05-31 19:14:54.351770 | orchestrator | 19:14:54.351 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 0s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990] 2025-05-31 19:14:54.362383 | orchestrator | 19:14:54.362 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2025-05-31 19:14:54.363960 | orchestrator | 19:14:54.363 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2025-05-31 19:14:54.417364 | orchestrator | 19:14:54.417 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed] 2025-05-31 19:14:54.425534 | orchestrator | 19:14:54.425 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2025-05-31 19:15:00.003655 | orchestrator | 19:15:00.001 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 6s [id=4e4149cd-e622-4432-b1e8-1efe00fc96ee] 2025-05-31 19:15:00.017172 | orchestrator | 19:15:00.016 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 2025-05-31 19:15:03.914534 | orchestrator | 19:15:03.914 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Still creating... [10s elapsed] 2025-05-31 19:15:03.915502 | orchestrator | 19:15:03.915 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Still creating... [10s elapsed] 2025-05-31 19:15:03.915657 | orchestrator | 19:15:03.915 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Still creating... [10s elapsed] 2025-05-31 19:15:03.916731 | orchestrator | 19:15:03.916 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Still creating... 
[10s elapsed] 2025-05-31 19:15:03.917850 | orchestrator | 19:15:03.917 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Still creating... [10s elapsed] 2025-05-31 19:15:03.920108 | orchestrator | 19:15:03.919 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Still creating... [10s elapsed] 2025-05-31 19:15:04.363152 | orchestrator | 19:15:04.362 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Still creating... [10s elapsed] 2025-05-31 19:15:04.365191 | orchestrator | 19:15:04.364 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Still creating... [10s elapsed] 2025-05-31 19:15:04.426734 | orchestrator | 19:15:04.426 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Still creating... [10s elapsed] 2025-05-31 19:15:04.486775 | orchestrator | 19:15:04.486 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 10s [id=fb66f732-34d2-45e3-b1b8-d9ba2a3ac758] 2025-05-31 19:15:04.493535 | orchestrator | 19:15:04.493 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 2025-05-31 19:15:04.496218 | orchestrator | 19:15:04.495 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 10s [id=d4f6392d-f8e1-4809-8c10-779f08f2c642] 2025-05-31 19:15:04.502633 | orchestrator | 19:15:04.502 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 2025-05-31 19:15:04.513888 | orchestrator | 19:15:04.513 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 11s [id=5a7b16a5-b25a-49dc-b8e1-bfe6cbb00610] 2025-05-31 19:15:04.519792 | orchestrator | 19:15:04.519 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 11s [id=191d8892-ecee-415a-8f71-2d93b7558573] 2025-05-31 19:15:04.522534 | orchestrator | 19:15:04.522 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 2025-05-31 19:15:04.527785 | orchestrator | 19:15:04.527 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 11s [id=1a9ee9a4-914c-40fd-b835-c38474fb60e8] 2025-05-31 19:15:04.529996 | orchestrator | 19:15:04.529 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 11s [id=6d52f885-97ca-45c7-bd6a-7862e27ed465] 2025-05-31 19:15:04.535832 | orchestrator | 19:15:04.535 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 2025-05-31 19:15:04.537220 | orchestrator | 19:15:04.537 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating... 2025-05-31 19:15:04.539117 | orchestrator | 19:15:04.538 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 2025-05-31 19:15:04.566492 | orchestrator | 19:15:04.566 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 11s [id=a9241271-625e-4229-94b1-3d99bba363ae] 2025-05-31 19:15:04.583464 | orchestrator | 19:15:04.583 STDOUT terraform: local_file.id_rsa_pub: Creating... 
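
The nine data volumes (node_volume[0..8]) and six base volumes (node_base_volume[0..5]) being created above map onto two counted volume resources. A sketch, with the name scheme, the size variables, and the image-backed base volume all as assumptions not confirmed by the log:

  resource "openstack_blockstorage_volume_v3" "node_volume" {
    count = 9                                # node_volume[0..8] in the apply log
    name  = "testbed-volume-${count.index}"  # hypothetical name scheme
    size  = var.volume_size_storage          # hypothetical variable
  }

  resource "openstack_blockstorage_volume_v3" "node_base_volume" {
    count    = 6                                             # node_base_volume[0..5] in the apply log
    name     = "testbed-base-volume-${count.index}"          # hypothetical name scheme
    size     = var.volume_size_base                          # hypothetical variable
    image_id = data.openstack_images_image_v2.image_node.id  # assumption: base volumes are
                                                             # cloned from the Ubuntu 24.04 image
  }
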
2025-05-31 19:15:04.588125 | orchestrator | 19:15:04.587 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 11s [id=9b14a296-0b0f-456e-ac69-f453c0a27a39] 2025-05-31 19:15:04.590748 | orchestrator | 19:15:04.590 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=0999d756c18b516115e8202836e69cfdc9cdf71b] 2025-05-31 19:15:04.595968 | orchestrator | 19:15:04.595 STDOUT terraform: local_sensitive_file.id_rsa: Creating... 2025-05-31 19:15:04.599426 | orchestrator | 19:15:04.599 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating... 2025-05-31 19:15:04.604487 | orchestrator | 19:15:04.604 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=2aa10bae47e2e27da515d700754350374b6caa14] 2025-05-31 19:15:04.628214 | orchestrator | 19:15:04.627 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 11s [id=727d26bd-0ead-422c-920c-32fac6429b39] 2025-05-31 19:15:10.020173 | orchestrator | 19:15:10.019 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Still creating... [10s elapsed] 2025-05-31 19:15:10.336759 | orchestrator | 19:15:10.336 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 10s [id=8bbc83dc-8efd-4101-bb1b-0d6e4523fdf4] 2025-05-31 19:15:10.444951 | orchestrator | 19:15:10.444 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 5s [id=fb70611f-bdb2-4ef1-9eea-876c3e844e8b] 2025-05-31 19:15:10.455731 | orchestrator | 19:15:10.455 STDOUT terraform: openstack_networking_router_v2.router: Creating... 2025-05-31 19:15:14.495190 | orchestrator | 19:15:14.494 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Still creating... [10s elapsed] 2025-05-31 19:15:14.503423 | orchestrator | 19:15:14.503 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Still creating... [10s elapsed] 2025-05-31 19:15:14.527086 | orchestrator | 19:15:14.526 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Still creating... [10s elapsed] 2025-05-31 19:15:14.536334 | orchestrator | 19:15:14.536 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Still creating... [10s elapsed] 2025-05-31 19:15:14.538680 | orchestrator | 19:15:14.538 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Still creating... [10s elapsed] 2025-05-31 19:15:14.539782 | orchestrator | 19:15:14.539 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Still creating... 
[10s elapsed] 2025-05-31 19:15:14.926885 | orchestrator | 19:15:14.926 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 10s [id=6c9cfe99-95af-4c67-bff1-48d0dfa5ccfe] 2025-05-31 19:15:14.935692 | orchestrator | 19:15:14.935 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 10s [id=525ad027-7e06-4e85-bfbd-c3ec419229c5] 2025-05-31 19:15:14.939210 | orchestrator | 19:15:14.938 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 10s [id=00ca6f0b-c95a-490d-9c88-84cc0dbef80d] 2025-05-31 19:15:14.950743 | orchestrator | 19:15:14.950 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 10s [id=82edc559-ec05-4620-85e0-00512a69f475] 2025-05-31 19:15:14.962234 | orchestrator | 19:15:14.961 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 11s [id=fbeeb0f2-d22c-4c2b-ae21-9c150f637ac0] 2025-05-31 19:15:14.964997 | orchestrator | 19:15:14.964 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 10s [id=16871094-562d-4048-9a10-7a67b3b2dad2] 2025-05-31 19:15:17.898301 | orchestrator | 19:15:17.897 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 8s [id=a83ff138-32d7-4d45-accf-fcde47bcbe02] 2025-05-31 19:15:17.904688 | orchestrator | 19:15:17.904 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating... 2025-05-31 19:15:17.905928 | orchestrator | 19:15:17.905 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating... 2025-05-31 19:15:17.906257 | orchestrator | 19:15:17.906 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating... 2025-05-31 19:15:18.138423 | orchestrator | 19:15:18.136 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=6a6cc5c2-f05f-4b88-9d7e-28ce67ab891d] 2025-05-31 19:15:18.153153 | orchestrator | 19:15:18.152 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=5fd894b1-622a-404d-8df6-d892a3130158] 2025-05-31 19:15:18.154903 | orchestrator | 19:15:18.154 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 2025-05-31 19:15:18.165844 | orchestrator | 19:15:18.165 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating... 2025-05-31 19:15:18.166290 | orchestrator | 19:15:18.166 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 2025-05-31 19:15:18.168774 | orchestrator | 19:15:18.168 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating... 2025-05-31 19:15:18.169621 | orchestrator | 19:15:18.169 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 2025-05-31 19:15:18.170485 | orchestrator | 19:15:18.170 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating... 2025-05-31 19:15:18.171552 | orchestrator | 19:15:18.171 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 2025-05-31 19:15:18.176322 | orchestrator | 19:15:18.176 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating... 2025-05-31 19:15:18.178277 | orchestrator | 19:15:18.178 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating... 
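
The management subnet, router, and router interface that just finished creating carry their arguments verbatim in the plan output; reassembled as HCL they look like:

  resource "openstack_networking_subnet_v2" "subnet_management" {
    name            = "subnet-testbed-management"
    network_id      = openstack_networking_network_v2.net_management.id
    cidr            = "192.168.16.0/20"
    ip_version      = 4
    enable_dhcp     = true
    dns_nameservers = ["8.8.8.8", "9.9.9.9"]
    allocation_pool {
      start = "192.168.31.200"
      end   = "192.168.31.250"
    }
  }

  resource "openstack_networking_router_v2" "router" {
    name                    = "testbed"
    external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"  # from the plan
    availability_zone_hints = ["nova"]
  }

  resource "openstack_networking_router_interface_v2" "router_interface" {
    router_id = openstack_networking_router_v2.router.id
    subnet_id = openstack_networking_subnet_v2.subnet_management.id
  }
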
2025-05-31 19:15:18.325491 | orchestrator | 19:15:18.325 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 0s [id=33f3e6ee-85d1-475d-b17c-e16bcd77cba9] 2025-05-31 19:15:18.344704 | orchestrator | 19:15:18.344 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating... 2025-05-31 19:15:18.576330 | orchestrator | 19:15:18.575 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=d6044a86-f63d-4669-9566-4463ffe7f096] 2025-05-31 19:15:18.585030 | orchestrator | 19:15:18.584 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 2025-05-31 19:15:18.805886 | orchestrator | 19:15:18.805 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=9c6c7b5b-ada2-42e5-afa3-3512de06d0dc] 2025-05-31 19:15:18.817792 | orchestrator | 19:15:18.817 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 2025-05-31 19:15:18.985231 | orchestrator | 19:15:18.984 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=69b2cfa7-b95d-4a3d-976f-38793e766b90] 2025-05-31 19:15:18.989668 | orchestrator | 19:15:18.989 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 2025-05-31 19:15:19.063973 | orchestrator | 19:15:19.063 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 0s [id=f273509b-4dd5-4bb3-9b5b-4cafbaa03c35] 2025-05-31 19:15:19.069744 | orchestrator | 19:15:19.069 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 2025-05-31 19:15:19.242578 | orchestrator | 19:15:19.241 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=53624f14-2ad3-4f16-bf40-0e31cfce0e93] 2025-05-31 19:15:19.254393 | orchestrator | 19:15:19.254 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating... 2025-05-31 19:15:19.495726 | orchestrator | 19:15:19.495 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=f71fa030-b88c-47f4-a082-d0a079d974fb] 2025-05-31 19:15:19.503275 | orchestrator | 19:15:19.503 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 
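
Each security-group rule being created above follows the same pattern; from the planned attributes, the ssh rule and the portless VRRP rule reconstruct as:

  resource "openstack_networking_secgroup_v2" "security_group_management" {
    name        = "testbed-management"
    description = "management security group"
  }

  resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
    description       = "ssh"
    direction         = "ingress"
    ethertype         = "IPv4"
    protocol          = "tcp"
    port_range_min    = 22
    port_range_max    = 22
    remote_ip_prefix  = "0.0.0.0/0"
    security_group_id = openstack_networking_secgroup_v2.security_group_management.id
  }

  resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
    description       = "vrrp"
    direction         = "ingress"
    ethertype         = "IPv4"
    protocol          = "112"  # VRRP is IP protocol 112, hence no port range
    remote_ip_prefix  = "0.0.0.0/0"
    security_group_id = openstack_networking_secgroup_v2.security_group_node.id  # assumption:
                        # the plan does not show which group carries the VRRP rule
  }
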
2025-05-31 19:15:19.659162 | orchestrator | 19:15:19.658 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=54017a3b-e9bc-46e4-b896-9bd9a1b4a64d] 2025-05-31 19:15:19.846861 | orchestrator | 19:15:19.846 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=3a4dfe94-2d85-455d-acf1-26b183f01880] 2025-05-31 19:15:23.840689 | orchestrator | 19:15:23.840 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 6s [id=f9ba7197-867d-4253-9019-e1706212b0e6] 2025-05-31 19:15:23.859019 | orchestrator | 19:15:23.858 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 6s [id=5077ea57-5eac-493d-9758-9ee65afdd45c] 2025-05-31 19:15:23.862453 | orchestrator | 19:15:23.862 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 6s [id=5e48f730-4dc7-4539-a441-a4e9bf612ccb] 2025-05-31 19:15:23.998983 | orchestrator | 19:15:23.998 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 6s [id=c4494f32-79c3-450d-8989-6e387509b694] 2025-05-31 19:15:24.047490 | orchestrator | 19:15:24.047 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 6s [id=0e87a94f-a575-429f-bba7-ff0686505779] 2025-05-31 19:15:24.107078 | orchestrator | 19:15:24.106 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 6s [id=1fdad407-5a05-46ac-b891-924f89da8f7e] 2025-05-31 19:15:24.987984 | orchestrator | 19:15:24.987 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 6s [id=ffd8c546-b409-458b-bdf3-86620c19fcd5] 2025-05-31 19:15:25.714564 | orchestrator | 19:15:25.714 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 8s [id=ec816621-54d6-4f92-8ce8-32418e2887ef] 2025-05-31 19:15:25.726103 | orchestrator | 19:15:25.725 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating... 2025-05-31 19:15:25.744998 | orchestrator | 19:15:25.744 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating... 2025-05-31 19:15:25.751418 | orchestrator | 19:15:25.751 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating... 2025-05-31 19:15:25.760156 | orchestrator | 19:15:25.760 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating... 2025-05-31 19:15:25.761086 | orchestrator | 19:15:25.761 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating... 2025-05-31 19:15:25.763313 | orchestrator | 19:15:25.763 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating... 2025-05-31 19:15:25.769805 | orchestrator | 19:15:25.769 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating... 2025-05-31 19:15:32.801525 | orchestrator | 19:15:32.801 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 7s [id=866c1f89-26d5-4315-8143-09f1068f450f] 2025-05-31 19:15:32.813890 | orchestrator | 19:15:32.813 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 2025-05-31 19:15:32.816017 | orchestrator | 19:15:32.815 STDOUT terraform: local_file.inventory: Creating... 2025-05-31 19:15:32.819049 | orchestrator | 19:15:32.818 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating... 
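
The six node servers now creating each plug into one of the pre-created management ports. A sketch of the instance resource, with the flavor variable and name scheme as assumptions (whether the nodes boot from the image directly or from the node_base_volume block devices is not visible in this log):

  resource "openstack_compute_instance_v2" "node_server" {
    count       = 6
    name        = "testbed-node-${count.index}"                 # hypothetical name scheme
    image_id    = data.openstack_images_image_v2.image_node.id  # "Ubuntu 24.04" per the log
    flavor_name = var.flavor_node                               # hypothetical variable
    key_pair    = openstack_compute_keypair_v2.key.name         # "testbed" in the apply log
    network {
      port = openstack_networking_port_v2.node_port_management[count.index].id
    }
  }
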
2025-05-31 19:15:32.820374 | orchestrator | 19:15:32.820 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=2cb4db1226e92cff5c006a6a7dfa14119b525880] 2025-05-31 19:15:32.826607 | orchestrator | 19:15:32.826 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=4c429ad14d10bd954fe8e97241961f9b7e5248e7] 2025-05-31 19:15:33.762468 | orchestrator | 19:15:33.762 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=866c1f89-26d5-4315-8143-09f1068f450f] 2025-05-31 19:15:35.750825 | orchestrator | 19:15:35.750 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed] 2025-05-31 19:15:35.752593 | orchestrator | 19:15:35.752 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed] 2025-05-31 19:15:35.761879 | orchestrator | 19:15:35.761 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed] 2025-05-31 19:15:35.767199 | orchestrator | 19:15:35.766 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed] 2025-05-31 19:15:35.768240 | orchestrator | 19:15:35.768 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed] 2025-05-31 19:15:35.772526 | orchestrator | 19:15:35.772 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed] 2025-05-31 19:15:45.751787 | orchestrator | 19:15:45.751 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed] 2025-05-31 19:15:45.752838 | orchestrator | 19:15:45.752 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2025-05-31 19:15:45.763176 | orchestrator | 19:15:45.762 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed] 2025-05-31 19:15:45.767388 | orchestrator | 19:15:45.767 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed] 2025-05-31 19:15:45.768488 | orchestrator | 19:15:45.768 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed] 2025-05-31 19:15:45.773937 | orchestrator | 19:15:45.773 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed] 2025-05-31 19:15:55.752774 | orchestrator | 19:15:55.752 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed] 2025-05-31 19:15:55.753766 | orchestrator | 19:15:55.753 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed] 2025-05-31 19:15:55.763467 | orchestrator | 19:15:55.763 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed] 2025-05-31 19:15:55.767876 | orchestrator | 19:15:55.767 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed] 2025-05-31 19:15:55.768947 | orchestrator | 19:15:55.768 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed] 2025-05-31 19:15:55.774235 | orchestrator | 19:15:55.773 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed] 2025-05-31 19:16:05.753139 | orchestrator | 19:16:05.752 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... 
[40s elapsed] 2025-05-31 19:16:05.754352 | orchestrator | 19:16:05.754 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [40s elapsed] 2025-05-31 19:16:05.763744 | orchestrator | 19:16:05.763 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [40s elapsed] 2025-05-31 19:16:05.768171 | orchestrator | 19:16:05.767 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [40s elapsed] 2025-05-31 19:16:05.769272 | orchestrator | 19:16:05.768 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [40s elapsed] 2025-05-31 19:16:05.774553 | orchestrator | 19:16:05.774 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [40s elapsed] 2025-05-31 19:16:06.299180 | orchestrator | 19:16:06.298 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 40s [id=8e58093d-9da0-49cd-aead-28ce24b4a112] 2025-05-31 19:16:07.002863 | orchestrator | 19:16:07.002 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 41s [id=5b468488-0725-4363-9f94-cbd4202078f4] 2025-05-31 19:16:15.754286 | orchestrator | 19:16:15.753 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [50s elapsed] 2025-05-31 19:16:15.755456 | orchestrator | 19:16:15.755 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [50s elapsed] 2025-05-31 19:16:15.765077 | orchestrator | 19:16:15.764 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [50s elapsed] 2025-05-31 19:16:15.769533 | orchestrator | 19:16:15.769 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [50s elapsed] 2025-05-31 19:16:16.288064 | orchestrator | 19:16:16.287 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 50s [id=36e5e747-d30d-476e-921a-68c582c61e2b] 2025-05-31 19:16:16.451237 | orchestrator | 19:16:16.450 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 50s [id=3a76626b-605c-4e6b-b54e-cf153c8f76b7] 2025-05-31 19:16:16.473935 | orchestrator | 19:16:16.473 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 50s [id=5ca45be6-e0c3-42d3-af33-c693195541f0] 2025-05-31 19:16:17.065219 | orchestrator | 19:16:17.064 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 51s [id=8c0b4621-5c7f-4ead-8495-97080d2aa47b] 2025-05-31 19:16:17.087146 | orchestrator | 19:16:17.086 STDOUT terraform: null_resource.node_semaphore: Creating... 2025-05-31 19:16:17.095574 | orchestrator | 19:16:17.095 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=1528966851342482700] 2025-05-31 19:16:17.112522 | orchestrator | 19:16:17.110 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 2025-05-31 19:16:17.112611 | orchestrator | 19:16:17.111 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 2025-05-31 19:16:17.112617 | orchestrator | 19:16:17.112 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2025-05-31 19:16:17.113948 | orchestrator | 19:16:17.113 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 
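
The attachment ids recorded just below (instance-uuid/volume-uuid pairs) show three instances each receiving three volumes: node_volume[k] attaches to node_server[3 + k % 3]. A sketch of that wiring, reading the semaphore as an ordering barrier (an assumption; its purpose is not stated in the log):

  resource "null_resource" "node_semaphore" {
    # assumption: exists only so the attachments start after every server is up
    depends_on = [openstack_compute_instance_v2.node_server]
  }

  resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
    count       = 9
    # node_volume[k] goes to node_server[3 + k % 3], per the attachment ids in the log
    instance_id = openstack_compute_instance_v2.node_server[3 + count.index % 3].id
    volume_id   = openstack_blockstorage_volume_v3.node_volume[count.index].id
    depends_on  = [null_resource.node_semaphore]
  }
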
2025-05-31 19:16:17.118115 | orchestrator | 19:16:17.116 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 2025-05-31 19:16:17.118168 | orchestrator | 19:16:17.117 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 2025-05-31 19:16:17.120970 | orchestrator | 19:16:17.120 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 2025-05-31 19:16:17.129348 | orchestrator | 19:16:17.128 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 2025-05-31 19:16:17.134914 | orchestrator | 19:16:17.134 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 2025-05-31 19:16:17.142976 | orchestrator | 19:16:17.142 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating... 2025-05-31 19:16:22.434928 | orchestrator | 19:16:22.434 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 5s [id=36e5e747-d30d-476e-921a-68c582c61e2b/5a7b16a5-b25a-49dc-b8e1-bfe6cbb00610] 2025-05-31 19:16:22.469533 | orchestrator | 19:16:22.469 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 5s [id=5b468488-0725-4363-9f94-cbd4202078f4/9b14a296-0b0f-456e-ac69-f453c0a27a39] 2025-05-31 19:16:22.471609 | orchestrator | 19:16:22.471 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 5s [id=8c0b4621-5c7f-4ead-8495-97080d2aa47b/d4f6392d-f8e1-4809-8c10-779f08f2c642] 2025-05-31 19:16:22.498168 | orchestrator | 19:16:22.497 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 5s [id=36e5e747-d30d-476e-921a-68c582c61e2b/fb66f732-34d2-45e3-b1b8-d9ba2a3ac758] 2025-05-31 19:16:22.508157 | orchestrator | 19:16:22.507 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 6s [id=8c0b4621-5c7f-4ead-8495-97080d2aa47b/727d26bd-0ead-422c-920c-32fac6429b39] 2025-05-31 19:16:22.526052 | orchestrator | 19:16:22.525 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 6s [id=5b468488-0725-4363-9f94-cbd4202078f4/1a9ee9a4-914c-40fd-b835-c38474fb60e8] 2025-05-31 19:16:22.537233 | orchestrator | 19:16:22.536 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 6s [id=36e5e747-d30d-476e-921a-68c582c61e2b/191d8892-ecee-415a-8f71-2d93b7558573] 2025-05-31 19:16:22.539241 | orchestrator | 19:16:22.538 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 6s [id=5b468488-0725-4363-9f94-cbd4202078f4/a9241271-625e-4229-94b1-3d99bba363ae] 2025-05-31 19:16:22.557115 | orchestrator | 19:16:22.556 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 6s [id=8c0b4621-5c7f-4ead-8495-97080d2aa47b/6d52f885-97ca-45c7-bd6a-7862e27ed465] 2025-05-31 19:16:27.144451 | orchestrator | 19:16:27.144 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2025-05-31 19:16:37.148407 | orchestrator | 19:16:37.146 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... 
[20s elapsed] 2025-05-31 19:16:37.475817 | orchestrator | 19:16:37.475 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=70c7802f-13b7-4e7b-96e1-aa99ec339366] 2025-05-31 19:16:37.675990 | orchestrator | 19:16:37.675 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed. 2025-05-31 19:16:37.676085 | orchestrator | 19:16:37.675 STDOUT terraform: Outputs: 2025-05-31 19:16:37.676098 | orchestrator | 19:16:37.675 STDOUT terraform: manager_address = 2025-05-31 19:16:37.676106 | orchestrator | 19:16:37.676 STDOUT terraform: private_key = 2025-05-31 19:16:37.781630 | orchestrator | ok: Runtime: 0:01:53.102425 2025-05-31 19:16:37.809176 | 2025-05-31 19:16:37.809341 | TASK [Create infrastructure (stable)] 2025-05-31 19:16:38.346311 | orchestrator | skipping: Conditional result was False 2025-05-31 19:16:38.366114 | 2025-05-31 19:16:38.366308 | TASK [Fetch manager address] 2025-05-31 19:16:38.832940 | orchestrator | ok 2025-05-31 19:16:38.843195 | 2025-05-31 19:16:38.843342 | TASK [Set manager_host address] 2025-05-31 19:16:38.926966 | orchestrator | ok 2025-05-31 19:16:38.937189 | 2025-05-31 19:16:38.937333 | LOOP [Update ansible collections] 2025-05-31 19:16:39.880030 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-05-31 19:16:39.880436 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-05-31 19:16:39.880493 | orchestrator | Starting galaxy collection install process 2025-05-31 19:16:39.880529 | orchestrator | Process install dependency map 2025-05-31 19:16:39.880560 | orchestrator | Starting collection install process 2025-05-31 19:16:39.880590 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed04/.ansible/collections/ansible_collections/osism/commons' 2025-05-31 19:16:39.880629 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed04/.ansible/collections/ansible_collections/osism/commons 2025-05-31 19:16:39.880679 | orchestrator | osism.commons:999.0.0 was installed successfully 2025-05-31 19:16:39.880765 | orchestrator | ok: Item: commons Runtime: 0:00:00.620849 2025-05-31 19:16:40.763855 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-05-31 19:16:40.764070 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-05-31 19:16:40.764134 | orchestrator | Starting galaxy collection install process 2025-05-31 19:16:40.764182 | orchestrator | Process install dependency map 2025-05-31 19:16:40.764227 | orchestrator | Starting collection install process 2025-05-31 19:16:40.764269 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed04/.ansible/collections/ansible_collections/osism/services' 2025-05-31 19:16:40.764310 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed04/.ansible/collections/ansible_collections/osism/services 2025-05-31 19:16:40.764348 | orchestrator | osism.services:999.0.0 was installed successfully 2025-05-31 19:16:40.764408 | orchestrator | ok: Item: services Runtime: 0:00:00.617283 2025-05-31 19:16:40.782717 | 2025-05-31 19:16:40.782989 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-05-31 19:16:51.385140 | orchestrator | ok 2025-05-31 19:16:51.394282 | 2025-05-31 19:16:51.394399 | TASK [Wait a little longer for the manager so that everything is ready] 2025-05-31 19:17:51.439702 | 
orchestrator | ok 2025-05-31 19:17:51.450353 | 2025-05-31 19:17:51.450488 | TASK [Fetch manager ssh hostkey] 2025-05-31 19:17:53.031464 | orchestrator | Output suppressed because no_log was given 2025-05-31 19:17:53.047217 | 2025-05-31 19:17:53.047430 | TASK [Get ssh keypair from terraform environment] 2025-05-31 19:17:53.595214 | orchestrator | ok: Runtime: 0:00:00.006970 2025-05-31 19:17:53.610500 | 2025-05-31 19:17:53.610673 | TASK [Point out that the following task takes some time and does not give any output] 2025-05-31 19:17:53.659726 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-05-31 19:17:53.671226 | 2025-05-31 19:17:53.671376 | TASK [Run manager part 0] 2025-05-31 19:17:54.762474 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-05-31 19:17:54.949685 | orchestrator | 2025-05-31 19:17:54.949988 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-05-31 19:17:54.950036 | orchestrator | 2025-05-31 19:17:54.950061 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-05-31 19:17:56.605646 | orchestrator | ok: [testbed-manager] 2025-05-31 19:17:56.605708 | orchestrator | 2025-05-31 19:17:56.605729 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-05-31 19:17:56.605738 | orchestrator | 2025-05-31 19:17:56.605769 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-31 19:17:58.402325 | orchestrator | ok: [testbed-manager] 2025-05-31 19:17:58.402374 | orchestrator | 2025-05-31 19:17:58.402381 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-05-31 19:17:59.070292 | orchestrator | ok: [testbed-manager] 2025-05-31 19:17:59.070438 | orchestrator | 2025-05-31 19:17:59.070449 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-05-31 19:17:59.131470 | orchestrator | skipping: [testbed-manager] 2025-05-31 19:17:59.131545 | orchestrator | 2025-05-31 19:17:59.131558 | orchestrator | TASK [Update package cache] **************************************************** 2025-05-31 19:17:59.184500 | orchestrator | skipping: [testbed-manager] 2025-05-31 19:17:59.184585 | orchestrator | 2025-05-31 19:17:59.184599 | orchestrator | TASK [Install required packages] *********************************************** 2025-05-31 19:17:59.225639 | orchestrator | skipping: [testbed-manager] 2025-05-31 19:17:59.225708 | orchestrator | 2025-05-31 19:17:59.225715 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-05-31 19:17:59.264273 | orchestrator | skipping: [testbed-manager] 2025-05-31 19:17:59.264325 | orchestrator | 2025-05-31 19:17:59.264332 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-05-31 19:17:59.299165 | orchestrator | skipping: [testbed-manager] 2025-05-31 19:17:59.299220 | orchestrator | 2025-05-31 19:17:59.299227 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ****************************** 2025-05-31 19:17:59.338226 | orchestrator | skipping: [testbed-manager] 2025-05-31 19:17:59.338282 | orchestrator | 2025-05-31 19:17:59.338291 | orchestrator | TASK [Fail if Debian version is lower than 
12] ********************************* 2025-05-31 19:17:59.379272 | orchestrator | skipping: [testbed-manager] 2025-05-31 19:17:59.379323 | orchestrator | 2025-05-31 19:17:59.379330 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-05-31 19:18:00.190907 | orchestrator | changed: [testbed-manager] 2025-05-31 19:18:00.191521 | orchestrator | 2025-05-31 19:18:00.191545 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2025-05-31 19:20:59.697840 | orchestrator | changed: [testbed-manager] 2025-05-31 19:20:59.697969 | orchestrator | 2025-05-31 19:20:59.697988 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-05-31 19:22:12.230179 | orchestrator | changed: [testbed-manager] 2025-05-31 19:22:12.230282 | orchestrator | 2025-05-31 19:22:12.230302 | orchestrator | TASK [Install required packages] *********************************************** 2025-05-31 19:22:31.924538 | orchestrator | changed: [testbed-manager] 2025-05-31 19:22:31.924647 | orchestrator | 2025-05-31 19:22:31.924668 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-05-31 19:22:40.543024 | orchestrator | changed: [testbed-manager] 2025-05-31 19:22:40.543096 | orchestrator | 2025-05-31 19:22:40.543112 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-05-31 19:22:40.592135 | orchestrator | ok: [testbed-manager] 2025-05-31 19:22:40.592217 | orchestrator | 2025-05-31 19:22:40.592233 | orchestrator | TASK [Get current user] ******************************************************** 2025-05-31 19:22:41.366940 | orchestrator | ok: [testbed-manager] 2025-05-31 19:22:41.367000 | orchestrator | 2025-05-31 19:22:41.367011 | orchestrator | TASK [Create venv directory] *************************************************** 2025-05-31 19:22:42.072573 | orchestrator | changed: [testbed-manager] 2025-05-31 19:22:42.072629 | orchestrator | 2025-05-31 19:22:42.072639 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-05-31 19:22:48.417771 | orchestrator | changed: [testbed-manager] 2025-05-31 19:22:48.417869 | orchestrator | 2025-05-31 19:22:48.417907 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2025-05-31 19:22:54.453335 | orchestrator | changed: [testbed-manager] 2025-05-31 19:22:54.453396 | orchestrator | 2025-05-31 19:22:54.453411 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-05-31 19:22:57.085921 | orchestrator | changed: [testbed-manager] 2025-05-31 19:22:57.085973 | orchestrator | 2025-05-31 19:22:57.085981 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-05-31 19:22:58.860390 | orchestrator | changed: [testbed-manager] 2025-05-31 19:22:58.860478 | orchestrator | 2025-05-31 19:22:58.860495 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-05-31 19:22:59.947233 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-05-31 19:22:59.947279 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-05-31 19:22:59.947285 | orchestrator | 2025-05-31 19:22:59.947292 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 
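For orientation, the venv bootstrap above reduces to a handful of shell commands. A minimal sketch, assuming python3-venv is available and reusing the /opt/venv and /opt/src paths from the log; the package pins mirror the task names, everything else (pip upgrade, exact galaxy syntax) is illustrative:

python3 -m venv /opt/venv                                  # "Create venv directory" plus venv creation
/opt/venv/bin/pip install --upgrade pip
/opt/venv/bin/pip install netaddr ansible-core 'requests>=2.32.2' 'docker>=7.1.0'
mkdir -p /opt/src/osism/ansible-collection-commons /opt/src/osism/ansible-collection-services
/opt/venv/bin/ansible-galaxy collection install ansible.netcommon ansible.posix 'community.docker:>=3.10.2'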
2025-05-31 19:22:59.991291 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-05-31 19:22:59.991382 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-05-31 19:22:59.991398 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-05-31 19:22:59.991412 | orchestrator | deprecation_warnings=False in ansible.cfg. 2025-05-31 19:23:03.654683 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-05-31 19:23:03.654719 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-05-31 19:23:03.654724 | orchestrator | 2025-05-31 19:23:03.654729 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-05-31 19:23:04.255129 | orchestrator | changed: [testbed-manager] 2025-05-31 19:23:04.255185 | orchestrator | 2025-05-31 19:23:04.255192 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-05-31 19:24:24.933188 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-05-31 19:24:24.933237 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-05-31 19:24:24.933246 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-05-31 19:24:24.933252 | orchestrator | 2025-05-31 19:24:24.933259 | orchestrator | TASK [Install local collections] *********************************************** 2025-05-31 19:24:27.260619 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2025-05-31 19:24:27.260656 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-05-31 19:24:27.260661 | orchestrator | 2025-05-31 19:24:27.260666 | orchestrator | PLAY [Create operator user] **************************************************** 2025-05-31 19:24:27.260671 | orchestrator | 2025-05-31 19:24:27.260675 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-31 19:24:28.648695 | orchestrator | ok: [testbed-manager] 2025-05-31 19:24:28.648735 | orchestrator | 2025-05-31 19:24:28.648742 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-05-31 19:24:28.701440 | orchestrator | ok: [testbed-manager] 2025-05-31 19:24:28.701487 | orchestrator | 2025-05-31 19:24:28.701497 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-05-31 19:24:28.769789 | orchestrator | ok: [testbed-manager] 2025-05-31 19:24:28.769898 | orchestrator | 2025-05-31 19:24:28.769916 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-05-31 19:24:29.620443 | orchestrator | changed: [testbed-manager] 2025-05-31 19:24:29.620527 | orchestrator | 2025-05-31 19:24:29.620542 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-05-31 19:24:30.392096 | orchestrator | changed: [testbed-manager] 2025-05-31 19:24:30.392183 | orchestrator | 2025-05-31 19:24:30.392200 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-05-31 19:24:31.795974 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-05-31 19:24:31.796017 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-05-31 19:24:31.796025 | orchestrator | 2025-05-31 19:24:31.796039 | orchestrator | TASK 
[osism.commons.operator : Copy user sudoers file] ************************* 2025-05-31 19:24:33.241270 | orchestrator | changed: [testbed-manager] 2025-05-31 19:24:33.241356 | orchestrator | 2025-05-31 19:24:33.241366 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-05-31 19:24:35.158156 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-05-31 19:24:35.158249 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-05-31 19:24:35.158263 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-05-31 19:24:35.158275 | orchestrator | 2025-05-31 19:24:35.158287 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-05-31 19:24:35.763139 | orchestrator | changed: [testbed-manager] 2025-05-31 19:24:35.763232 | orchestrator | 2025-05-31 19:24:35.763248 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-05-31 19:24:35.825615 | orchestrator | skipping: [testbed-manager] 2025-05-31 19:24:35.825686 | orchestrator | 2025-05-31 19:24:35.825700 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-05-31 19:24:36.697978 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-31 19:24:36.698103 | orchestrator | changed: [testbed-manager] 2025-05-31 19:24:36.698122 | orchestrator | 2025-05-31 19:24:36.698136 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-05-31 19:24:36.733681 | orchestrator | skipping: [testbed-manager] 2025-05-31 19:24:36.733898 | orchestrator | 2025-05-31 19:24:36.733920 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-05-31 19:24:36.764877 | orchestrator | skipping: [testbed-manager] 2025-05-31 19:24:36.764947 | orchestrator | 2025-05-31 19:24:36.764962 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-05-31 19:24:36.794288 | orchestrator | skipping: [testbed-manager] 2025-05-31 19:24:36.794360 | orchestrator | 2025-05-31 19:24:36.794374 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-05-31 19:24:36.842761 | orchestrator | skipping: [testbed-manager] 2025-05-31 19:24:36.842857 | orchestrator | 2025-05-31 19:24:36.842872 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-05-31 19:24:37.804217 | orchestrator | ok: [testbed-manager] 2025-05-31 19:24:37.804307 | orchestrator | 2025-05-31 19:24:37.804322 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-05-31 19:24:37.804336 | orchestrator | 2025-05-31 19:24:37.804349 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-31 19:24:39.264473 | orchestrator | ok: [testbed-manager] 2025-05-31 19:24:39.264989 | orchestrator | 2025-05-31 19:24:39.265015 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-05-31 19:24:40.226923 | orchestrator | changed: [testbed-manager] 2025-05-31 19:24:40.227019 | orchestrator | 2025-05-31 19:24:40.227035 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-31 19:24:40.227049 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 
skipped=12 rescued=0 ignored=0 2025-05-31 19:24:40.227061 | orchestrator | 2025-05-31 19:24:40.454694 | orchestrator | ok: Runtime: 0:06:46.327293 2025-05-31 19:24:40.474029 | 2025-05-31 19:24:40.474179 | TASK [Point out that logging in on the manager is now possible] 2025-05-31 19:24:40.512239 | orchestrator | ok: It is now possible to log in to the manager with 'make login'. 2025-05-31 19:24:40.522504 | 2025-05-31 19:24:40.522633 | TASK [Point out that the following task takes some time and does not give any output] 2025-05-31 19:24:40.559901 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output shown here. It takes a few minutes for this task to complete. 2025-05-31 19:24:40.569342 | 2025-05-31 19:24:40.569479 | TASK [Run manager part 1 + 2] 2025-05-31 19:24:41.401497 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-05-31 19:24:41.454891 | orchestrator | 2025-05-31 19:24:41.454942 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-05-31 19:24:41.454949 | orchestrator | 2025-05-31 19:24:41.454962 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-31 19:24:44.333625 | orchestrator | ok: [testbed-manager] 2025-05-31 19:24:44.333684 | orchestrator | 2025-05-31 19:24:44.333713 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-05-31 19:24:44.379562 | orchestrator | skipping: [testbed-manager] 2025-05-31 19:24:44.379623 | orchestrator | 2025-05-31 19:24:44.379634 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-05-31 19:24:44.429698 | orchestrator | ok: [testbed-manager] 2025-05-31 19:24:44.429753 | orchestrator | 2025-05-31 19:24:44.429762 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-05-31 19:24:44.485779 | orchestrator | ok: [testbed-manager] 2025-05-31 19:24:44.485891 | orchestrator | 2025-05-31 19:24:44.485902 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-05-31 19:24:44.561945 | orchestrator | ok: [testbed-manager] 2025-05-31 19:24:44.562152 | orchestrator | 2025-05-31 19:24:44.562171 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-05-31 19:24:44.620392 | orchestrator | ok: [testbed-manager] 2025-05-31 19:24:44.620440 | orchestrator | 2025-05-31 19:24:44.620449 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-05-31 19:24:44.676021 | orchestrator | included: /home/zuul-testbed04/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-05-31 19:24:44.676076 | orchestrator | 2025-05-31 19:24:44.676083 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-05-31 19:24:45.418740 | orchestrator | ok: [testbed-manager] 2025-05-31 19:24:45.418826 | orchestrator | 2025-05-31 19:24:45.418838 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-05-31 19:24:45.468070 | orchestrator | skipping: [testbed-manager] 2025-05-31 19:24:45.468123 | orchestrator | 2025-05-31 19:24:45.468249 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-05-31
19:24:46.873234 | orchestrator | changed: [testbed-manager] 2025-05-31 19:24:46.873295 | orchestrator | 2025-05-31 19:24:46.873305 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-05-31 19:24:47.445395 | orchestrator | ok: [testbed-manager] 2025-05-31 19:24:47.445450 | orchestrator | 2025-05-31 19:24:47.445458 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-05-31 19:24:48.576035 | orchestrator | changed: [testbed-manager] 2025-05-31 19:24:48.576087 | orchestrator | 2025-05-31 19:24:48.576097 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-05-31 19:25:01.539470 | orchestrator | changed: [testbed-manager] 2025-05-31 19:25:01.539550 | orchestrator | 2025-05-31 19:25:01.539566 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-05-31 19:25:02.230954 | orchestrator | ok: [testbed-manager] 2025-05-31 19:25:02.231133 | orchestrator | 2025-05-31 19:25:02.231153 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-05-31 19:25:02.287315 | orchestrator | skipping: [testbed-manager] 2025-05-31 19:25:02.287392 | orchestrator | 2025-05-31 19:25:02.287408 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-05-31 19:25:03.218091 | orchestrator | changed: [testbed-manager] 2025-05-31 19:25:03.218182 | orchestrator | 2025-05-31 19:25:03.218199 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-05-31 19:25:04.173612 | orchestrator | changed: [testbed-manager] 2025-05-31 19:25:04.173654 | orchestrator | 2025-05-31 19:25:04.173662 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-05-31 19:25:04.747469 | orchestrator | changed: [testbed-manager] 2025-05-31 19:25:04.747556 | orchestrator | 2025-05-31 19:25:04.747572 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-05-31 19:25:04.784833 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-05-31 19:25:04.784946 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-05-31 19:25:04.784962 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-05-31 19:25:04.784974 | orchestrator | deprecation_warnings=False in ansible.cfg. 
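The repository role's effect on Ubuntu 24.04 is easy to miss in the stream above: the legacy /etc/apt/sources.list is removed and a deb822-style ubuntu.sources file takes its place, followed by a package cache update. A rough sketch of that end state; the mirror URL, suites, and keyring path are placeholders, since the role's actual defaults are not visible in the log:

rm -f /etc/apt/sources.list                     # "Remove sources.list file"
cat > /etc/apt/sources.list.d/ubuntu.sources <<'EOF'
Types: deb
URIs: http://archive.ubuntu.com/ubuntu/
Suites: noble noble-updates noble-backports
Components: main restricted universe multiverse
Signed-By: /usr/share/keyrings/ubuntu-archive-keyring.gpg
EOF
apt-get update                                  # "Update package cache"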
2025-05-31 19:25:07.085123 | orchestrator | changed: [testbed-manager] 2025-05-31 19:25:07.085230 | orchestrator | 2025-05-31 19:25:07.085249 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-05-31 19:25:15.926899 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-05-31 19:25:15.926982 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-05-31 19:25:15.926997 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-05-31 19:25:15.927007 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-05-31 19:25:15.927024 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-05-31 19:25:15.927033 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-05-31 19:25:15.927042 | orchestrator | 2025-05-31 19:25:15.927051 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-05-31 19:25:16.967559 | orchestrator | changed: [testbed-manager] 2025-05-31 19:25:16.967598 | orchestrator | 2025-05-31 19:25:16.967605 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-05-31 19:25:17.014158 | orchestrator | skipping: [testbed-manager] 2025-05-31 19:25:17.014203 | orchestrator | 2025-05-31 19:25:17.014212 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-05-31 19:25:20.068137 | orchestrator | changed: [testbed-manager] 2025-05-31 19:25:20.068233 | orchestrator | 2025-05-31 19:25:20.068251 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-05-31 19:25:20.111352 | orchestrator | skipping: [testbed-manager] 2025-05-31 19:25:20.111413 | orchestrator | 2025-05-31 19:25:20.111422 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-05-31 19:26:52.033212 | orchestrator | changed: [testbed-manager] 2025-05-31 19:26:52.033306 | orchestrator | 2025-05-31 19:26:52.033326 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-05-31 19:26:53.123737 | orchestrator | ok: [testbed-manager] 2025-05-31 19:26:53.123875 | orchestrator | 2025-05-31 19:26:53.123902 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-31 19:26:53.123923 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-05-31 19:26:53.123942 | orchestrator | 2025-05-31 19:26:53.697477 | orchestrator | ok: Runtime: 0:02:12.345930 2025-05-31 19:26:53.716363 | 2025-05-31 19:26:53.716533 | TASK [Reboot manager] 2025-05-31 19:26:55.253529 | orchestrator | ok: Runtime: 0:00:00.923329 2025-05-31 19:26:55.262419 | 2025-05-31 19:26:55.262543 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-05-31 19:27:09.500076 | orchestrator | ok 2025-05-31 19:27:09.510078 | 2025-05-31 19:27:09.510208 | TASK [Wait a little longer for the manager so that everything is ready] 2025-05-31 19:28:09.561873 | orchestrator | ok 2025-05-31 19:28:09.574023 | 2025-05-31 19:28:09.574201 | TASK [Deploy manager + bootstrap nodes] 2025-05-31 19:28:11.995150 | orchestrator | 2025-05-31 19:28:11.995273 | orchestrator | # DEPLOY MANAGER 2025-05-31 19:28:11.995282 | orchestrator | 2025-05-31 19:28:11.995288 | orchestrator | + set -e 2025-05-31 19:28:11.995293 | orchestrator | + echo 2025-05-31 19:28:11.995299 | orchestrator | + echo '# DEPLOY 
MANAGER' 2025-05-31 19:28:11.995305 | orchestrator | + echo 2025-05-31 19:28:11.995330 | orchestrator | + cat /opt/manager-vars.sh 2025-05-31 19:28:11.998075 | orchestrator | export NUMBER_OF_NODES=6 2025-05-31 19:28:11.998088 | orchestrator | 2025-05-31 19:28:11.998093 | orchestrator | export CEPH_VERSION=reef 2025-05-31 19:28:11.998098 | orchestrator | export CONFIGURATION_VERSION=main 2025-05-31 19:28:11.998103 | orchestrator | export MANAGER_VERSION=latest 2025-05-31 19:28:11.998113 | orchestrator | export OPENSTACK_VERSION=2024.2 2025-05-31 19:28:11.998117 | orchestrator | 2025-05-31 19:28:11.998124 | orchestrator | export ARA=false 2025-05-31 19:28:11.998128 | orchestrator | export DEPLOY_MODE=manager 2025-05-31 19:28:11.998136 | orchestrator | export TEMPEST=false 2025-05-31 19:28:11.998140 | orchestrator | export IS_ZUUL=true 2025-05-31 19:28:11.998143 | orchestrator | 2025-05-31 19:28:11.998150 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.24 2025-05-31 19:28:11.998155 | orchestrator | export EXTERNAL_API=false 2025-05-31 19:28:11.998159 | orchestrator | 2025-05-31 19:28:11.998163 | orchestrator | export IMAGE_USER=ubuntu 2025-05-31 19:28:11.998168 | orchestrator | export IMAGE_NODE_USER=ubuntu 2025-05-31 19:28:11.998172 | orchestrator | 2025-05-31 19:28:11.998176 | orchestrator | export CEPH_STACK=ceph-ansible 2025-05-31 19:28:11.998350 | orchestrator | 2025-05-31 19:28:11.998357 | orchestrator | + echo 2025-05-31 19:28:11.998362 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-05-31 19:28:11.999324 | orchestrator | ++ export INTERACTIVE=false 2025-05-31 19:28:11.999333 | orchestrator | ++ INTERACTIVE=false 2025-05-31 19:28:11.999337 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-05-31 19:28:11.999341 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-05-31 19:28:11.999383 | orchestrator | + source /opt/manager-vars.sh 2025-05-31 19:28:11.999389 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-05-31 19:28:11.999393 | orchestrator | ++ NUMBER_OF_NODES=6 2025-05-31 19:28:11.999484 | orchestrator | ++ export CEPH_VERSION=reef 2025-05-31 19:28:11.999490 | orchestrator | ++ CEPH_VERSION=reef 2025-05-31 19:28:11.999494 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-05-31 19:28:11.999499 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-05-31 19:28:11.999503 | orchestrator | ++ export MANAGER_VERSION=latest 2025-05-31 19:28:11.999507 | orchestrator | ++ MANAGER_VERSION=latest 2025-05-31 19:28:11.999510 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-05-31 19:28:11.999520 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-05-31 19:28:11.999526 | orchestrator | ++ export ARA=false 2025-05-31 19:28:11.999530 | orchestrator | ++ ARA=false 2025-05-31 19:28:11.999533 | orchestrator | ++ export DEPLOY_MODE=manager 2025-05-31 19:28:11.999537 | orchestrator | ++ DEPLOY_MODE=manager 2025-05-31 19:28:11.999553 | orchestrator | ++ export TEMPEST=false 2025-05-31 19:28:11.999557 | orchestrator | ++ TEMPEST=false 2025-05-31 19:28:11.999561 | orchestrator | ++ export IS_ZUUL=true 2025-05-31 19:28:11.999565 | orchestrator | ++ IS_ZUUL=true 2025-05-31 19:28:11.999568 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.24 2025-05-31 19:28:11.999572 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.24 2025-05-31 19:28:11.999576 | orchestrator | ++ export EXTERNAL_API=false 2025-05-31 19:28:11.999586 | orchestrator | ++ EXTERNAL_API=false 2025-05-31 19:28:11.999591 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-05-31 
19:28:11.999594 | orchestrator | ++ IMAGE_USER=ubuntu 2025-05-31 19:28:11.999598 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-05-31 19:28:11.999602 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-05-31 19:28:11.999606 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-05-31 19:28:11.999611 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-05-31 19:28:11.999616 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2025-05-31 19:28:12.050499 | orchestrator | + docker version 2025-05-31 19:28:12.324160 | orchestrator | Client: Docker Engine - Community 2025-05-31 19:28:12.324236 | orchestrator | Version: 27.5.1 2025-05-31 19:28:12.324246 | orchestrator | API version: 1.47 2025-05-31 19:28:12.324251 | orchestrator | Go version: go1.22.11 2025-05-31 19:28:12.324256 | orchestrator | Git commit: 9f9e405 2025-05-31 19:28:12.324260 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2025-05-31 19:28:12.324266 | orchestrator | OS/Arch: linux/amd64 2025-05-31 19:28:12.324270 | orchestrator | Context: default 2025-05-31 19:28:12.324274 | orchestrator | 2025-05-31 19:28:12.324279 | orchestrator | Server: Docker Engine - Community 2025-05-31 19:28:12.324283 | orchestrator | Engine: 2025-05-31 19:28:12.324288 | orchestrator | Version: 27.5.1 2025-05-31 19:28:12.324292 | orchestrator | API version: 1.47 (minimum version 1.24) 2025-05-31 19:28:12.324318 | orchestrator | Go version: go1.22.11 2025-05-31 19:28:12.324322 | orchestrator | Git commit: 4c9b3b0 2025-05-31 19:28:12.324327 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2025-05-31 19:28:12.324331 | orchestrator | OS/Arch: linux/amd64 2025-05-31 19:28:12.324336 | orchestrator | Experimental: false 2025-05-31 19:28:12.324342 | orchestrator | containerd: 2025-05-31 19:28:12.324349 | orchestrator | Version: 1.7.27 2025-05-31 19:28:12.324356 | orchestrator | GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da 2025-05-31 19:28:12.324363 | orchestrator | runc: 2025-05-31 19:28:12.324369 | orchestrator | Version: 1.2.5 2025-05-31 19:28:12.324376 | orchestrator | GitCommit: v1.2.5-0-g59923ef 2025-05-31 19:28:12.324383 | orchestrator | docker-init: 2025-05-31 19:28:12.324389 | orchestrator | Version: 0.19.0 2025-05-31 19:28:12.324397 | orchestrator | GitCommit: de40ad0 2025-05-31 19:28:12.327407 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2025-05-31 19:28:12.334438 | orchestrator | + set -e 2025-05-31 19:28:12.334453 | orchestrator | + source /opt/manager-vars.sh 2025-05-31 19:28:12.334458 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-05-31 19:28:12.334462 | orchestrator | ++ NUMBER_OF_NODES=6 2025-05-31 19:28:12.334466 | orchestrator | ++ export CEPH_VERSION=reef 2025-05-31 19:28:12.334470 | orchestrator | ++ CEPH_VERSION=reef 2025-05-31 19:28:12.334474 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-05-31 19:28:12.334478 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-05-31 19:28:12.334482 | orchestrator | ++ export MANAGER_VERSION=latest 2025-05-31 19:28:12.334486 | orchestrator | ++ MANAGER_VERSION=latest 2025-05-31 19:28:12.334490 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-05-31 19:28:12.334494 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-05-31 19:28:12.334498 | orchestrator | ++ export ARA=false 2025-05-31 19:28:12.334502 | orchestrator | ++ ARA=false 2025-05-31 19:28:12.334520 | orchestrator | ++ export DEPLOY_MODE=manager 2025-05-31 19:28:12.334528 | orchestrator | ++ DEPLOY_MODE=manager 2025-05-31 19:28:12.334532 | orchestrator | ++ 
export TEMPEST=false 2025-05-31 19:28:12.334535 | orchestrator | ++ TEMPEST=false 2025-05-31 19:28:12.334539 | orchestrator | ++ export IS_ZUUL=true 2025-05-31 19:28:12.334543 | orchestrator | ++ IS_ZUUL=true 2025-05-31 19:28:12.334547 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.24 2025-05-31 19:28:12.334551 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.24 2025-05-31 19:28:12.334555 | orchestrator | ++ export EXTERNAL_API=false 2025-05-31 19:28:12.334559 | orchestrator | ++ EXTERNAL_API=false 2025-05-31 19:28:12.334563 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-05-31 19:28:12.334567 | orchestrator | ++ IMAGE_USER=ubuntu 2025-05-31 19:28:12.334571 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-05-31 19:28:12.334574 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-05-31 19:28:12.334578 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-05-31 19:28:12.334582 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-05-31 19:28:12.334586 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-05-31 19:28:12.334591 | orchestrator | ++ export INTERACTIVE=false 2025-05-31 19:28:12.334595 | orchestrator | ++ INTERACTIVE=false 2025-05-31 19:28:12.334599 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-05-31 19:28:12.334605 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-05-31 19:28:12.334965 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-05-31 19:28:12.335000 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-05-31 19:28:12.335005 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef 2025-05-31 19:28:12.342392 | orchestrator | + set -e 2025-05-31 19:28:12.342415 | orchestrator | + VERSION=reef 2025-05-31 19:28:12.343382 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml 2025-05-31 19:28:12.349212 | orchestrator | + [[ -n ceph_version: reef ]] 2025-05-31 19:28:12.349226 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml 2025-05-31 19:28:12.354268 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2 2025-05-31 19:28:12.359355 | orchestrator | + set -e 2025-05-31 19:28:12.359366 | orchestrator | + VERSION=2024.2 2025-05-31 19:28:12.359906 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml 2025-05-31 19:28:12.364043 | orchestrator | + [[ -n openstack_version: 2024.2 ]] 2025-05-31 19:28:12.364061 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml 2025-05-31 19:28:12.369037 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2025-05-31 19:28:12.370029 | orchestrator | ++ semver latest 7.0.0 2025-05-31 19:28:12.428845 | orchestrator | + [[ -1 -ge 0 ]] 2025-05-31 19:28:12.428872 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-05-31 19:28:12.428876 | orchestrator | + echo 'enable_osism_kubernetes: true' 2025-05-31 19:28:12.428881 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2025-05-31 19:28:12.468276 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-05-31 19:28:12.470682 | orchestrator | + source /opt/venv/bin/activate 2025-05-31 19:28:12.471824 | orchestrator | ++ deactivate nondestructive 2025-05-31 19:28:12.471868 | orchestrator | ++ '[' -n '' ']' 2025-05-31 19:28:12.472051 | orchestrator | ++ '[' -n '' ']' 2025-05-31 19:28:12.472059 | orchestrator | ++ hash -r 2025-05-31 19:28:12.472063 | orchestrator | ++ 
'[' -n '' ']' 2025-05-31 19:28:12.472067 | orchestrator | ++ unset VIRTUAL_ENV 2025-05-31 19:28:12.472071 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-05-31 19:28:12.472096 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2025-05-31 19:28:12.472102 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-05-31 19:28:12.472155 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-05-31 19:28:12.472161 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-05-31 19:28:12.472165 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-05-31 19:28:12.472297 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-05-31 19:28:12.472305 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-05-31 19:28:12.472356 | orchestrator | ++ export PATH 2025-05-31 19:28:12.472362 | orchestrator | ++ '[' -n '' ']' 2025-05-31 19:28:12.472366 | orchestrator | ++ '[' -z '' ']' 2025-05-31 19:28:12.472392 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-05-31 19:28:12.472396 | orchestrator | ++ PS1='(venv) ' 2025-05-31 19:28:12.472477 | orchestrator | ++ export PS1 2025-05-31 19:28:12.472483 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-05-31 19:28:12.472487 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-05-31 19:28:12.472491 | orchestrator | ++ hash -r 2025-05-31 19:28:12.472576 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2025-05-31 19:28:13.651516 | orchestrator | 2025-05-31 19:28:13.651595 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2025-05-31 19:28:13.651602 | orchestrator | 2025-05-31 19:28:13.651607 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-05-31 19:28:14.208488 | orchestrator | ok: [testbed-manager] 2025-05-31 19:28:14.208586 | orchestrator | 2025-05-31 19:28:14.208597 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-05-31 19:28:15.164348 | orchestrator | changed: [testbed-manager] 2025-05-31 19:28:15.164433 | orchestrator | 2025-05-31 19:28:15.164441 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2025-05-31 19:28:15.164447 | orchestrator | 2025-05-31 19:28:15.164451 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-31 19:28:17.519625 | orchestrator | ok: [testbed-manager] 2025-05-31 19:28:17.519746 | orchestrator | 2025-05-31 19:28:17.519760 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2025-05-31 19:28:17.570706 | orchestrator | ok: [testbed-manager] 2025-05-31 19:28:17.570801 | orchestrator | 2025-05-31 19:28:17.570813 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2025-05-31 19:28:18.014797 | orchestrator | changed: [testbed-manager] 2025-05-31 19:28:18.014877 | orchestrator | 2025-05-31 19:28:18.014885 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2025-05-31 19:28:18.056130 | orchestrator | skipping: [testbed-manager] 2025-05-31 19:28:18.056193 | orchestrator | 2025-05-31 19:28:18.056199 | orchestrator | TASK [Install HWE kernel package on Ubuntu] 
************************************ 2025-05-31 19:28:18.390366 | orchestrator | changed: [testbed-manager] 2025-05-31 19:28:18.390438 | orchestrator | 2025-05-31 19:28:18.390444 | orchestrator | TASK [Use insecure glance configuration] *************************************** 2025-05-31 19:28:18.441188 | orchestrator | skipping: [testbed-manager] 2025-05-31 19:28:18.441218 | orchestrator | 2025-05-31 19:28:18.441224 | orchestrator | TASK [Check if /etc/OTC_region exists] ***************************************** 2025-05-31 19:28:18.757772 | orchestrator | ok: [testbed-manager] 2025-05-31 19:28:18.757883 | orchestrator | 2025-05-31 19:28:18.757901 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2025-05-31 19:28:18.881435 | orchestrator | skipping: [testbed-manager] 2025-05-31 19:28:18.881540 | orchestrator | 2025-05-31 19:28:18.881554 | orchestrator | PLAY [Apply role traefik] ****************************************************** 2025-05-31 19:28:18.881567 | orchestrator | 2025-05-31 19:28:18.881581 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-31 19:28:20.658825 | orchestrator | ok: [testbed-manager] 2025-05-31 19:28:20.659046 | orchestrator | 2025-05-31 19:28:20.659085 | orchestrator | TASK [Apply traefik role] ****************************************************** 2025-05-31 19:28:20.758891 | orchestrator | included: osism.services.traefik for testbed-manager 2025-05-31 19:28:20.759029 | orchestrator | 2025-05-31 19:28:20.759045 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2025-05-31 19:28:20.813251 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2025-05-31 19:28:20.813334 | orchestrator | 2025-05-31 19:28:20.813348 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2025-05-31 19:28:21.884486 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2025-05-31 19:28:21.884594 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2025-05-31 19:28:21.884610 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2025-05-31 19:28:21.884621 | orchestrator | 2025-05-31 19:28:21.884633 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2025-05-31 19:28:23.637422 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2025-05-31 19:28:23.637534 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2025-05-31 19:28:23.637551 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2025-05-31 19:28:23.637564 | orchestrator | 2025-05-31 19:28:23.637576 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2025-05-31 19:28:24.249305 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-31 19:28:24.249421 | orchestrator | changed: [testbed-manager] 2025-05-31 19:28:24.249436 | orchestrator | 2025-05-31 19:28:24.249449 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2025-05-31 19:28:24.898197 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-31 19:28:24.898304 | orchestrator | changed: [testbed-manager] 2025-05-31 19:28:24.898319 | orchestrator | 2025-05-31 19:28:24.898332 | orchestrator | TASK [osism.services.traefik : Copy dynamic
configuration] ********************* 2025-05-31 19:28:24.956400 | orchestrator | skipping: [testbed-manager] 2025-05-31 19:28:24.956501 | orchestrator | 2025-05-31 19:28:24.956516 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2025-05-31 19:28:25.323552 | orchestrator | ok: [testbed-manager] 2025-05-31 19:28:25.323654 | orchestrator | 2025-05-31 19:28:25.323669 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2025-05-31 19:28:25.391667 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2025-05-31 19:28:25.391777 | orchestrator | 2025-05-31 19:28:25.391794 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2025-05-31 19:28:26.460898 | orchestrator | changed: [testbed-manager] 2025-05-31 19:28:26.461090 | orchestrator | 2025-05-31 19:28:26.461109 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2025-05-31 19:28:27.245387 | orchestrator | changed: [testbed-manager] 2025-05-31 19:28:27.245498 | orchestrator | 2025-05-31 19:28:27.245514 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2025-05-31 19:28:38.284628 | orchestrator | changed: [testbed-manager] 2025-05-31 19:28:38.284760 | orchestrator | 2025-05-31 19:28:38.284779 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2025-05-31 19:28:38.330423 | orchestrator | skipping: [testbed-manager] 2025-05-31 19:28:38.330516 | orchestrator | 2025-05-31 19:28:38.330531 | orchestrator | PLAY [Deploy manager service] ************************************************** 2025-05-31 19:28:38.330545 | orchestrator | 2025-05-31 19:28:38.330556 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-31 19:28:40.119080 | orchestrator | ok: [testbed-manager] 2025-05-31 19:28:40.119184 | orchestrator | 2025-05-31 19:28:40.119229 | orchestrator | TASK [Apply manager role] ****************************************************** 2025-05-31 19:28:40.228314 | orchestrator | included: osism.services.manager for testbed-manager 2025-05-31 19:28:40.228419 | orchestrator | 2025-05-31 19:28:40.228434 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2025-05-31 19:28:40.286180 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2025-05-31 19:28:40.286282 | orchestrator | 2025-05-31 19:28:40.286293 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2025-05-31 19:28:42.580725 | orchestrator | ok: [testbed-manager] 2025-05-31 19:28:42.580833 | orchestrator | 2025-05-31 19:28:42.580850 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2025-05-31 19:28:42.635936 | orchestrator | ok: [testbed-manager] 2025-05-31 19:28:42.636063 | orchestrator | 2025-05-31 19:28:42.636082 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2025-05-31 19:28:42.762200 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2025-05-31 19:28:42.762287 | orchestrator | 2025-05-31 19:28:42.762300 | 
orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2025-05-31 19:28:45.491636 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2025-05-31 19:28:45.491754 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2025-05-31 19:28:45.491769 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2025-05-31 19:28:45.491782 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2025-05-31 19:28:45.491793 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2025-05-31 19:28:45.491804 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2025-05-31 19:28:45.491815 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2025-05-31 19:28:45.491826 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2025-05-31 19:28:45.491837 | orchestrator | 2025-05-31 19:28:45.491850 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2025-05-31 19:28:46.120795 | orchestrator | changed: [testbed-manager] 2025-05-31 19:28:46.120900 | orchestrator | 2025-05-31 19:28:46.120915 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2025-05-31 19:28:46.745639 | orchestrator | changed: [testbed-manager] 2025-05-31 19:28:46.745740 | orchestrator | 2025-05-31 19:28:46.745755 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2025-05-31 19:28:46.816011 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2025-05-31 19:28:46.816086 | orchestrator | 2025-05-31 19:28:46.816099 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2025-05-31 19:28:47.981019 | orchestrator | changed: [testbed-manager] => (item=ara) 2025-05-31 19:28:47.981112 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2025-05-31 19:28:47.981124 | orchestrator | 2025-05-31 19:28:47.981134 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2025-05-31 19:28:48.581677 | orchestrator | changed: [testbed-manager] 2025-05-31 19:28:48.582555 | orchestrator | 2025-05-31 19:28:48.582586 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2025-05-31 19:28:48.640843 | orchestrator | skipping: [testbed-manager] 2025-05-31 19:28:48.640931 | orchestrator | 2025-05-31 19:28:48.640944 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2025-05-31 19:28:48.715592 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2025-05-31 19:28:48.715696 | orchestrator | 2025-05-31 19:28:48.715712 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2025-05-31 19:28:50.046480 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-31 19:28:50.046590 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-31 19:28:50.046607 | orchestrator | changed: [testbed-manager] 2025-05-31 19:28:50.046621 | orchestrator | 2025-05-31 19:28:50.046634 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2025-05-31 19:28:50.669062 | orchestrator | changed: [testbed-manager] 2025-05-31 19:28:50.669163 
| orchestrator | 2025-05-31 19:28:50.669179 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2025-05-31 19:28:50.724145 | orchestrator | skipping: [testbed-manager] 2025-05-31 19:28:50.724226 | orchestrator | 2025-05-31 19:28:50.724239 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2025-05-31 19:28:50.831389 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2025-05-31 19:28:50.831476 | orchestrator | 2025-05-31 19:28:50.831493 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2025-05-31 19:28:51.326703 | orchestrator | changed: [testbed-manager] 2025-05-31 19:28:51.326811 | orchestrator | 2025-05-31 19:28:51.326829 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2025-05-31 19:28:51.718447 | orchestrator | changed: [testbed-manager] 2025-05-31 19:28:51.718545 | orchestrator | 2025-05-31 19:28:51.718558 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2025-05-31 19:28:52.905069 | orchestrator | changed: [testbed-manager] => (item=conductor) 2025-05-31 19:28:52.905189 | orchestrator | changed: [testbed-manager] => (item=openstack) 2025-05-31 19:28:52.905215 | orchestrator | 2025-05-31 19:28:52.905237 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2025-05-31 19:28:53.501857 | orchestrator | changed: [testbed-manager] 2025-05-31 19:28:53.501964 | orchestrator | 2025-05-31 19:28:53.502156 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2025-05-31 19:28:53.898165 | orchestrator | ok: [testbed-manager] 2025-05-31 19:28:53.898270 | orchestrator | 2025-05-31 19:28:53.898284 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2025-05-31 19:28:54.242326 | orchestrator | changed: [testbed-manager] 2025-05-31 19:28:54.242408 | orchestrator | 2025-05-31 19:28:54.242415 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2025-05-31 19:28:54.284685 | orchestrator | skipping: [testbed-manager] 2025-05-31 19:28:54.284770 | orchestrator | 2025-05-31 19:28:54.284784 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2025-05-31 19:28:54.353784 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2025-05-31 19:28:54.353855 | orchestrator | 2025-05-31 19:28:54.353868 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2025-05-31 19:28:54.396266 | orchestrator | ok: [testbed-manager] 2025-05-31 19:28:54.396339 | orchestrator | 2025-05-31 19:28:54.396351 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2025-05-31 19:28:56.253473 | orchestrator | changed: [testbed-manager] => (item=osism) 2025-05-31 19:28:56.253594 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2025-05-31 19:28:56.253610 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2025-05-31 19:28:56.253622 | orchestrator | 2025-05-31 19:28:56.253635 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] 
********************* 2025-05-31 19:28:56.920743 | orchestrator | changed: [testbed-manager] 2025-05-31 19:28:56.920844 | orchestrator | 2025-05-31 19:28:56.920856 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2025-05-31 19:28:57.601430 | orchestrator | changed: [testbed-manager] 2025-05-31 19:28:57.601535 | orchestrator | 2025-05-31 19:28:57.601551 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2025-05-31 19:28:58.279286 | orchestrator | changed: [testbed-manager] 2025-05-31 19:28:58.279393 | orchestrator | 2025-05-31 19:28:58.279409 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2025-05-31 19:28:58.355569 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2025-05-31 19:28:58.355674 | orchestrator | 2025-05-31 19:28:58.355689 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2025-05-31 19:28:58.397384 | orchestrator | ok: [testbed-manager] 2025-05-31 19:28:58.397472 | orchestrator | 2025-05-31 19:28:58.397486 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2025-05-31 19:28:59.093866 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2025-05-31 19:28:59.093979 | orchestrator | 2025-05-31 19:28:59.094096 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2025-05-31 19:28:59.191403 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2025-05-31 19:28:59.191501 | orchestrator | 2025-05-31 19:28:59.191515 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2025-05-31 19:28:59.875650 | orchestrator | changed: [testbed-manager] 2025-05-31 19:28:59.875761 | orchestrator | 2025-05-31 19:28:59.875777 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2025-05-31 19:29:00.456693 | orchestrator | ok: [testbed-manager] 2025-05-31 19:29:00.456795 | orchestrator | 2025-05-31 19:29:00.456810 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2025-05-31 19:29:00.513737 | orchestrator | skipping: [testbed-manager] 2025-05-31 19:29:00.513831 | orchestrator | 2025-05-31 19:29:00.513845 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2025-05-31 19:29:00.573501 | orchestrator | ok: [testbed-manager] 2025-05-31 19:29:00.573561 | orchestrator | 2025-05-31 19:29:00.573573 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2025-05-31 19:29:01.380336 | orchestrator | changed: [testbed-manager] 2025-05-31 19:29:01.380431 | orchestrator | 2025-05-31 19:29:01.380448 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2025-05-31 19:30:01.108621 | orchestrator | changed: [testbed-manager] 2025-05-31 19:30:01.108735 | orchestrator | 2025-05-31 19:30:01.108752 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2025-05-31 19:30:02.138498 | orchestrator | ok: [testbed-manager] 2025-05-31 19:30:02.138556 | orchestrator | 2025-05-31 19:30:02.138570 | orchestrator | TASK [osism.services.manager : 
Do a manual start of the manager service] ******* 2025-05-31 19:30:02.194933 | orchestrator | skipping: [testbed-manager] 2025-05-31 19:30:02.194975 | orchestrator | 2025-05-31 19:30:02.194987 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2025-05-31 19:30:04.951644 | orchestrator | changed: [testbed-manager] 2025-05-31 19:30:04.951750 | orchestrator | 2025-05-31 19:30:04.951768 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2025-05-31 19:30:05.005450 | orchestrator | ok: [testbed-manager] 2025-05-31 19:30:05.005525 | orchestrator | 2025-05-31 19:30:05.005538 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-05-31 19:30:05.005551 | orchestrator | 2025-05-31 19:30:05.005562 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2025-05-31 19:30:05.062936 | orchestrator | skipping: [testbed-manager] 2025-05-31 19:30:05.062995 | orchestrator | 2025-05-31 19:30:05.063009 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2025-05-31 19:31:05.113654 | orchestrator | Pausing for 60 seconds 2025-05-31 19:31:05.113782 | orchestrator | changed: [testbed-manager] 2025-05-31 19:31:05.113798 | orchestrator | 2025-05-31 19:31:05.113812 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2025-05-31 19:31:09.687940 | orchestrator | changed: [testbed-manager] 2025-05-31 19:31:09.688052 | orchestrator | 2025-05-31 19:31:09.688068 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for a healthy manager service] *** 2025-05-31 19:31:51.254105 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for a healthy manager service (50 retries left). 2025-05-31 19:31:51.254253 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for a healthy manager service (49 retries left).
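The two FAILED - RETRYING lines are the expected shape of this handler: it simply polls the Docker health status of the manager container until it reports healthy. A shell sketch of the same pattern, consistent with the wait_for_container_healthy trace further down; the function body is a reconstruction, not the role's actual implementation, and the sleep interval is assumed:

wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    # Poll the health status Docker derives from the image's HEALTHCHECK
    until [[ "$(/usr/bin/docker inspect -f '{{.State.Health.Status}}' "$name")" == healthy ]]; do
        if (( attempt_num++ >= max_attempts )); then
            echo "container $name did not become healthy" >&2
            return 1
        fi
        sleep 5
    done
}

wait_for_container_healthy 60 osism-ansible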
2025-05-31 19:31:51.254271 | orchestrator | changed: [testbed-manager]
2025-05-31 19:31:51.254284 | orchestrator |
2025-05-31 19:31:51.254296 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2025-05-31 19:31:59.704573 | orchestrator | changed: [testbed-manager]
2025-05-31 19:31:59.704742 | orchestrator |
2025-05-31 19:31:59.704770 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2025-05-31 19:31:59.783196 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2025-05-31 19:31:59.783341 | orchestrator |
2025-05-31 19:31:59.783356 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2025-05-31 19:31:59.783369 | orchestrator |
2025-05-31 19:31:59.783380 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2025-05-31 19:31:59.837894 | orchestrator | skipping: [testbed-manager]
2025-05-31 19:31:59.837980 | orchestrator |
2025-05-31 19:31:59.837992 | orchestrator | PLAY RECAP *********************************************************************
2025-05-31 19:31:59.838006 | orchestrator | testbed-manager : ok=64 changed=35 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0
2025-05-31 19:31:59.838072 | orchestrator |
2025-05-31 19:31:59.927702 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-05-31 19:31:59.927795 | orchestrator | + deactivate
2025-05-31 19:31:59.927809 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2025-05-31 19:31:59.927823 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-05-31 19:31:59.927834 | orchestrator | + export PATH
2025-05-31 19:31:59.927845 | orchestrator | + unset _OLD_VIRTUAL_PATH
2025-05-31 19:31:59.927856 | orchestrator | + '[' -n '' ']'
2025-05-31 19:31:59.927867 | orchestrator | + hash -r
2025-05-31 19:31:59.927878 | orchestrator | + '[' -n '' ']'
2025-05-31 19:31:59.927888 | orchestrator | + unset VIRTUAL_ENV
2025-05-31 19:31:59.927899 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2025-05-31 19:31:59.927932 | orchestrator | + '[' '!' '' = nondestructive ']'
2025-05-31 19:31:59.927943 | orchestrator | + unset -f deactivate
2025-05-31 19:31:59.927955 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub
2025-05-31 19:31:59.934977 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2025-05-31 19:31:59.935027 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2025-05-31 19:31:59.935039 | orchestrator | + local max_attempts=60
2025-05-31 19:31:59.935050 | orchestrator | + local name=ceph-ansible
2025-05-31 19:31:59.935061 | orchestrator | + local attempt_num=1
2025-05-31 19:31:59.935776 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-05-31 19:31:59.969874 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-05-31 19:31:59.969943 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2025-05-31 19:31:59.969955 | orchestrator | + local max_attempts=60
2025-05-31 19:31:59.969966 | orchestrator | + local name=kolla-ansible
2025-05-31 19:31:59.969977 | orchestrator | + local attempt_num=1
2025-05-31 19:31:59.970524 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2025-05-31 19:32:00.005818 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-05-31 19:32:00.005880 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2025-05-31 19:32:00.005893 | orchestrator | + local max_attempts=60
2025-05-31 19:32:00.005904 | orchestrator | + local name=osism-ansible
2025-05-31 19:32:00.005915 | orchestrator | + local attempt_num=1
2025-05-31 19:32:00.006800 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2025-05-31 19:32:00.041604 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-05-31 19:32:00.041663 | orchestrator | + [[ true == \t\r\u\e ]]
2025-05-31 19:32:00.041676 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2025-05-31 19:32:00.782800 | orchestrator | + docker compose --project-directory /opt/manager ps
2025-05-31 19:32:00.964330 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2025-05-31 19:32:00.964429 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy)
2025-05-31 19:32:00.964444 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy)
2025-05-31 19:32:00.964456 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp
2025-05-31 19:32:00.964470 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp
2025-05-31 19:32:00.964524 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy)
2025-05-31 19:32:00.964555 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy)
2025-05-31 19:32:00.964571 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 51 seconds (healthy)
2025-05-31 19:32:00.964588 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener About a minute ago Up About a minute (healthy)
2025-05-31 19:32:00.964605 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.7.2 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp
2025-05-31 19:32:00.964622 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack About a minute ago Up About a minute (healthy)
2025-05-31 19:32:00.964639 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.4-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp
2025-05-31 19:32:00.964656 | orchestrator | manager-watchdog-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" watchdog About a minute ago Up About a minute (healthy)
2025-05-31 19:32:00.964673 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy)
2025-05-31 19:32:00.964693 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy)
2025-05-31 19:32:00.964713 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy)
2025-05-31 19:32:00.974887 | orchestrator | ++ semver latest 7.0.0
2025-05-31 19:32:01.037059 | orchestrator | + [[ -1 -ge 0 ]]
2025-05-31 19:32:01.037146 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-05-31 19:32:01.037181 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg
2025-05-31 19:32:01.041270 | orchestrator | + osism apply resolvconf -l testbed-manager
2025-05-31 19:32:02.754641 | orchestrator | Registering Redlock._acquired_script
2025-05-31 19:32:02.754760 | orchestrator | Registering Redlock._extend_script
2025-05-31 19:32:02.754784 | orchestrator | Registering Redlock._release_script
2025-05-31 19:32:02.941971 | orchestrator | 2025-05-31 19:32:02 | INFO  | Task 9d68f5a9-0ddc-460c-a64d-78cf8b339b54 (resolvconf) was prepared for execution.
2025-05-31 19:32:02.942125 | orchestrator | 2025-05-31 19:32:02 | INFO  | It takes a moment until task 9d68f5a9-0ddc-460c-a64d-78cf8b339b54 (resolvconf) has been started and output is visible here.
2025-05-31 19:32:06.801298 | orchestrator |
2025-05-31 19:32:06.801414 | orchestrator | PLAY [Apply role resolvconf] ***************************************************
2025-05-31 19:32:06.801431 | orchestrator |
2025-05-31 19:32:06.801443 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-05-31 19:32:06.801455 | orchestrator | Saturday 31 May 2025 19:32:06 +0000 (0:00:00.145) 0:00:00.145 **********
2025-05-31 19:32:10.492422 | orchestrator | ok: [testbed-manager]
2025-05-31 19:32:10.493091 | orchestrator |
2025-05-31 19:32:10.494361 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2025-05-31 19:32:10.495148 | orchestrator | Saturday 31 May 2025 19:32:10 +0000 (0:00:03.695) 0:00:03.841 **********
2025-05-31 19:32:10.568340 | orchestrator | skipping: [testbed-manager]
2025-05-31 19:32:10.568891 | orchestrator |
2025-05-31 19:32:10.569647 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2025-05-31 19:32:10.570745 | orchestrator | Saturday 31 May 2025 19:32:10 +0000 (0:00:00.076) 0:00:03.918 **********
2025-05-31 19:32:10.652860 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager
2025-05-31 19:32:10.652931 | orchestrator |
2025-05-31 19:32:10.653863 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2025-05-31 19:32:10.654832 | orchestrator | Saturday 31 May 2025 19:32:10 +0000 (0:00:00.084) 0:00:04.002 **********
2025-05-31 19:32:10.727442 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager
2025-05-31 19:32:10.727520 | orchestrator |
2025-05-31 19:32:10.731264 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2025-05-31 19:32:10.731301 | orchestrator | Saturday 31 May 2025 19:32:10 +0000 (0:00:00.073) 0:00:04.075 **********
2025-05-31 19:32:11.795959 | orchestrator | ok: [testbed-manager]
2025-05-31 19:32:11.796322 | orchestrator |
2025-05-31 19:32:11.797130 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2025-05-31 19:32:11.798284 | orchestrator | Saturday 31 May 2025 19:32:11 +0000 (0:00:01.068) 0:00:05.144 **********
2025-05-31 19:32:11.853943 | orchestrator | skipping: [testbed-manager]
2025-05-31 19:32:11.854078 | orchestrator |
2025-05-31 19:32:11.854844 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2025-05-31 19:32:11.855483 | orchestrator | Saturday 31 May 2025 19:32:11 +0000 (0:00:00.059) 0:00:05.203 **********
2025-05-31 19:32:12.313683 | orchestrator | ok: [testbed-manager]
2025-05-31 19:32:12.314096 | orchestrator |
2025-05-31 19:32:12.315224 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2025-05-31 19:32:12.316444 | orchestrator | Saturday 31 May 2025 19:32:12 +0000 (0:00:00.459) 0:00:05.663 **********
2025-05-31 19:32:12.384041 | orchestrator | skipping: [testbed-manager]
2025-05-31 19:32:12.384711 | orchestrator |
2025-05-31 19:32:12.385868 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2025-05-31 19:32:12.386835 | orchestrator | Saturday 31 May 2025 19:32:12 +0000 (0:00:00.070) 0:00:05.733 **********
2025-05-31 19:32:12.898438 | orchestrator | changed: [testbed-manager]
2025-05-31 19:32:12.899182 | orchestrator |
2025-05-31 19:32:12.899826 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2025-05-31 19:32:12.900645 | orchestrator | Saturday 31 May 2025 19:32:12 +0000 (0:00:00.514) 0:00:06.247 **********
2025-05-31 19:32:13.885030 | orchestrator | changed: [testbed-manager]
2025-05-31 19:32:13.885721 | orchestrator |
2025-05-31 19:32:13.886428 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2025-05-31 19:32:13.887072 | orchestrator | Saturday 31 May 2025 19:32:13 +0000 (0:00:00.985) 0:00:07.233 **********
2025-05-31 19:32:14.809108 | orchestrator | ok: [testbed-manager]
2025-05-31 19:32:14.809913 | orchestrator |
2025-05-31 19:32:14.810562 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2025-05-31 19:32:14.811642 | orchestrator | Saturday 31 May 2025 19:32:14 +0000 (0:00:00.923) 0:00:08.156 **********
2025-05-31 19:32:14.886085 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager
2025-05-31 19:32:14.887949 | orchestrator |
2025-05-31 19:32:14.887976 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2025-05-31 19:32:14.889507 | orchestrator | Saturday 31 May 2025 19:32:14 +0000 (0:00:00.079) 0:00:08.236 **********
2025-05-31 19:32:15.981982 | orchestrator | changed: [testbed-manager]
2025-05-31 19:32:15.982873 | orchestrator |
2025-05-31 19:32:15.983701 | orchestrator | PLAY RECAP *********************************************************************
2025-05-31 19:32:15.985040 | orchestrator | 2025-05-31 19:32:15 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-31 19:32:15.985063 | orchestrator | 2025-05-31 19:32:15 | INFO  | Please wait and do not abort execution.
2025-05-31 19:32:15.985935 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-05-31 19:32:15.987026 | orchestrator |
2025-05-31 19:32:15.987900 | orchestrator |
2025-05-31 19:32:15.988573 | orchestrator | TASKS RECAP ********************************************************************
2025-05-31 19:32:15.989337 | orchestrator | Saturday 31 May 2025 19:32:15 +0000 (0:00:01.093) 0:00:09.329 **********
2025-05-31 19:32:15.989746 | orchestrator | ===============================================================================
2025-05-31 19:32:15.990522 | orchestrator | Gathering Facts --------------------------------------------------------- 3.70s
2025-05-31 19:32:15.990884 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.09s
2025-05-31 19:32:15.991731 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.07s
2025-05-31 19:32:15.992229 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 0.99s
2025-05-31 19:32:15.992467 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.92s
2025-05-31 19:32:15.992976 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.51s
2025-05-31 19:32:15.993594 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.46s
2025-05-31 19:32:15.993983 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.08s
2025-05-31 19:32:15.994500 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.08s
2025-05-31 19:32:15.994988 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.08s
2025-05-31 19:32:15.995413 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.07s
2025-05-31 19:32:15.995806 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.07s
2025-05-31 19:32:15.996186 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s
2025-05-31 19:32:16.399996 | orchestrator | + osism apply sshconfig
2025-05-31 19:32:17.997617 | orchestrator | Registering Redlock._acquired_script
2025-05-31 19:32:17.997713 | orchestrator | Registering Redlock._extend_script
2025-05-31 19:32:17.997723 | orchestrator | Registering Redlock._release_script
2025-05-31 19:32:18.061139 | orchestrator | 2025-05-31 19:32:18 | INFO  | Task 60c5997f-580b-426c-9963-05dca88a13f3 (sshconfig) was prepared for execution.
2025-05-31 19:32:18.061216 | orchestrator | 2025-05-31 19:32:18 | INFO  | It takes a moment until task 60c5997f-580b-426c-9963-05dca88a13f3 (sshconfig) has been started and output is visible here.
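The osism apply sshconfig task queued above runs the osism.commons.sshconfig role whose output follows: it writes one config fragment per inventory host into ~/.ssh/config.d and then assembles them into a single ~/.ssh/config. As plain shell this is roughly the sketch below; the stanza contents (User, IdentityFile) are assumptions, with "dragon" taken from the operator home directory seen earlier in this log:

    # One fragment per host, then a single merged client config.
    mkdir -p ~/.ssh/config.d
    for host in testbed-manager testbed-node-{0..5}; do
        printf 'Host %s\n    User dragon\n    IdentityFile ~/.ssh/id_rsa\n' "$host" \
            > ~/.ssh/config.d/"$host"
    done
    cat ~/.ssh/config.d/* > ~/.ssh/config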
2025-05-31 19:32:21.925178 | orchestrator |
2025-05-31 19:32:21.925611 | orchestrator | PLAY [Apply role sshconfig] ****************************************************
2025-05-31 19:32:21.925645 | orchestrator |
2025-05-31 19:32:21.926644 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] ***********
2025-05-31 19:32:21.927827 | orchestrator | Saturday 31 May 2025 19:32:21 +0000 (0:00:00.158) 0:00:00.158 **********
2025-05-31 19:32:22.479919 | orchestrator | ok: [testbed-manager]
2025-05-31 19:32:22.481219 | orchestrator |
2025-05-31 19:32:22.481833 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ********************
2025-05-31 19:32:22.482465 | orchestrator | Saturday 31 May 2025 19:32:22 +0000 (0:00:00.557) 0:00:00.716 **********
2025-05-31 19:32:22.969571 | orchestrator | changed: [testbed-manager]
2025-05-31 19:32:22.970472 | orchestrator |
2025-05-31 19:32:22.972462 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] *************
2025-05-31 19:32:22.972555 | orchestrator | Saturday 31 May 2025 19:32:22 +0000 (0:00:00.489) 0:00:01.205 **********
2025-05-31 19:32:28.512087 | orchestrator | changed: [testbed-manager] => (item=testbed-manager)
2025-05-31 19:32:28.513437 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3)
2025-05-31 19:32:28.513474 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4)
2025-05-31 19:32:28.514228 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5)
2025-05-31 19:32:28.514650 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0)
2025-05-31 19:32:28.515354 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1)
2025-05-31 19:32:28.515791 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2)
2025-05-31 19:32:28.516459 | orchestrator |
2025-05-31 19:32:28.516903 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ******************************
2025-05-31 19:32:28.517416 | orchestrator | Saturday 31 May 2025 19:32:28 +0000 (0:00:05.543) 0:00:06.749 **********
2025-05-31 19:32:28.570837 | orchestrator | skipping: [testbed-manager]
2025-05-31 19:32:28.570968 | orchestrator |
2025-05-31 19:32:28.571804 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] ***************************
2025-05-31 19:32:28.573229 | orchestrator | Saturday 31 May 2025 19:32:28 +0000 (0:00:00.059) 0:00:06.808 **********
2025-05-31 19:32:29.130834 | orchestrator | changed: [testbed-manager]
2025-05-31 19:32:29.130924 | orchestrator |
2025-05-31 19:32:29.130940 | orchestrator | PLAY RECAP *********************************************************************
2025-05-31 19:32:29.130984 | orchestrator | 2025-05-31 19:32:29 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-31 19:32:29.130999 | orchestrator | 2025-05-31 19:32:29 | INFO  | Please wait and do not abort execution.
2025-05-31 19:32:29.131979 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-31 19:32:29.133088 | orchestrator |
2025-05-31 19:32:29.134492 | orchestrator |
2025-05-31 19:32:29.134895 | orchestrator | TASKS RECAP ********************************************************************
2025-05-31 19:32:29.135518 | orchestrator | Saturday 31 May 2025 19:32:29 +0000 (0:00:00.554) 0:00:07.363 **********
2025-05-31 19:32:29.136405 | orchestrator | ===============================================================================
2025-05-31 19:32:29.136839 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.54s
2025-05-31 19:32:29.137642 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.56s
2025-05-31 19:32:29.138332 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.55s
2025-05-31 19:32:29.139032 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.49s
2025-05-31 19:32:29.139339 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.06s
2025-05-31 19:32:29.551783 | orchestrator | + osism apply known-hosts
2025-05-31 19:32:31.155786 | orchestrator | Registering Redlock._acquired_script
2025-05-31 19:32:31.155887 | orchestrator | Registering Redlock._extend_script
2025-05-31 19:32:31.155901 | orchestrator | Registering Redlock._release_script
2025-05-31 19:32:31.209382 | orchestrator | 2025-05-31 19:32:31 | INFO  | Task 7e6dc740-468c-43b0-9e2b-6189f4069419 (known-hosts) was prepared for execution.
2025-05-31 19:32:31.209458 | orchestrator | 2025-05-31 19:32:31 | INFO  | It takes a moment until task 7e6dc740-468c-43b0-9e2b-6189f4069419 (known-hosts) has been started and output is visible here.
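The known-hosts task queued above scans every testbed host twice, once by hostname and once by its ansible_host address, and records the rsa, ecdsa and ed25519 host keys shown in the play output that follows. Reduced to plain shell it is roughly this sketch (the real role templates per-host fragments via write-scanned.yml; the final chmod mode is an assumption standing in for the "Set file permissions" task):

    for target in testbed-manager testbed-node-{0..5} 192.168.16.{5,10,11,12,13,14,15}; do
        # Collect the same three key types that appear in the play output.
        ssh-keyscan -t rsa,ecdsa,ed25519 "$target" >> ~/.ssh/known_hosts
    done
    chmod 0644 ~/.ssh/known_hosts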
2025-05-31 19:32:34.898355 | orchestrator | 2025-05-31 19:32:34.898543 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2025-05-31 19:32:34.898817 | orchestrator | 2025-05-31 19:32:34.899110 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2025-05-31 19:32:34.900482 | orchestrator | Saturday 31 May 2025 19:32:34 +0000 (0:00:00.120) 0:00:00.120 ********** 2025-05-31 19:32:40.496148 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-05-31 19:32:40.498082 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-05-31 19:32:40.499145 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-05-31 19:32:40.500533 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-05-31 19:32:40.501550 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-05-31 19:32:40.502062 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-05-31 19:32:40.503100 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-05-31 19:32:40.503393 | orchestrator | 2025-05-31 19:32:40.504194 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2025-05-31 19:32:40.504897 | orchestrator | Saturday 31 May 2025 19:32:40 +0000 (0:00:05.597) 0:00:05.718 ********** 2025-05-31 19:32:40.658967 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-05-31 19:32:40.660467 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-05-31 19:32:40.660499 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-05-31 19:32:40.661379 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-05-31 19:32:40.662336 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-05-31 19:32:40.662527 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-05-31 19:32:40.663612 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-05-31 19:32:40.663794 | orchestrator | 2025-05-31 19:32:40.664442 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-31 19:32:40.664851 | orchestrator | Saturday 31 May 2025 19:32:40 +0000 (0:00:00.165) 0:00:05.883 ********** 2025-05-31 19:32:41.771841 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIA3WaUHR4rDRm0CBAlha3maPeHCxf0252fQrhkTwR+uI) 2025-05-31 19:32:41.771952 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC6qP/l+PMlCY8bQ2KegmZ51HjT7jgAM5lqlcXjPfug/zksoIk0yAxM9MqkicRH+G33MhQ61nJLHZOfIp50arI6FXwEHZo7ppdrd8+0iF24E2OiqyURJM58XI73EteYeT3V/wCG1owAkM503s2pVclHYJLT3jFzi+w0qU7T4gmksIeRbKADKP4kAF04KaYjfHpQBlF3JXHNKgyRXiPLcrPYndDlwCCk9c7W7nyjy4Vm3yETA5Hj/3X0EEptvoNzmVZb08i7cIwBH3v3H3Z/5JmeVyc6B21EIjoHxAtoHNRO+xWU6kbFvAKQ9t3nbNm3oJ1b1CZhbtKC9uqE1Wc62C1sV3cWBmo5RcickcizYrK2Lg7ouukXm1W/K9WvALb3Hqt2oeTZYXTzBUFjSZwZF5CaC1OLQK5340JDhOAF3sIiCPO7vHL/AO6qtA7Rq83ur6HNcYOGZ1+im8iFANkEJOgZPM+AB8qIGReXCgxoY+ozN/fxOSHjV1l4AIPYc6aPp/0=) 2025-05-31 19:32:41.772512 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHD06eLTVSIWEfpc7UGzmKfgqoPi0TGT6xFfUwihEnjyP7TgswIK46dcgtQFV35LESbAUwWqkV1VdKM+wxk2ub4=) 2025-05-31 19:32:41.773404 | orchestrator | 2025-05-31 19:32:41.774155 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-31 19:32:41.774958 | orchestrator | Saturday 31 May 2025 19:32:41 +0000 (0:00:01.111) 0:00:06.994 ********** 2025-05-31 19:32:42.786862 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDa76iSc3QhfxlVislbYBC7EIFooCtSje3cACdgSum6LJGLQhAHnv7le7TtSvYD1p2ZYrjAgNU2fSIOHH3P+DtZyB6QKhP+r9Ul25x+6xb0WpzMJzWM/e0njkGnWpZaL+wdB88/bmFAnBubdcTl2kyjhSjjHahEXr46rE7uBQxrjXkTOSjYleFhLip40+fKA0Xbahjf80g7Yx541zwAckEwGOHIxZOf/7EOO3DQPlQk3HSkqQiZRAi/+S8kK4yI3valQPUGNbGh3pCt9hks66PP5vZq8H1/YfqpLj81P+C3K61QpLD2GzX1pxGf5dl3+4BqEmJXSVYNdiXTjWq8EEBP3biDCJ7hcEyJNFVAXdUlociik0h1M6rkv5WjFmqjpZ5u9Hnc6prB+8Nb32zYBTEkBQ7zAQKQKwEUFaoZYVi4ZreK9E8oV/Zlk5xg9PUojVVdCQry4G8odY5pHYkZlmbiUDDXq++0TqyncfYPMfeUTfViNwz37biAEYLMq91CQAE=) 2025-05-31 19:32:42.787080 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBORpUKHA7wXS9xXyejLc0VZVKMN3wfA7Nx6YijBKg9qaFMfiIcJGatZdOxZK7sWekIDlz1qC4plDidCMR2AO7Xg=) 2025-05-31 19:32:42.788187 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDGBHry0M6hHnYGS+3JL3bzYTZCLqu1LOyKLYMMXOqYx) 2025-05-31 19:32:42.789040 | orchestrator | 2025-05-31 19:32:42.790314 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-31 19:32:42.791078 | orchestrator | Saturday 31 May 2025 19:32:42 +0000 (0:00:01.011) 0:00:08.005 ********** 2025-05-31 19:32:43.829348 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCkWHRoQFsCPIZoMUhVMA6hPeA6sCLHT8HIwjtFeM/LRjW55mcAfBu7GrjMeLEtGpGGV/+vXFWWmyAtUjZYOTW89zx2roVjniG4hFJEHSwfAXLnhpkoqjaaIFUSS+dyI+ays/QupfNF8LAXMHTY8C+P0D3g3oGxPStGaejvEDAvg6BWc6cLPQTtWrp352Lo14VSkjiMZ6yO8828knXAvAR06e9S8WLFLi7Bo+KszKeMHkq6xs6dD1l9RMZOUS4GGCSykwTaXXByoLDEvCzvZzJBC1EQuUX9lvPy3vHo41qlKHE6O5Ld1w1zhJt5TMObB2Do6eDbUo8KECUF8PzxXi1Y/kppAvnb2z9iBxs2eF4byIag9mQTm9jAR4tlj8UjrwkMUrfKB+2SFaibe+O4MqxCERjosEMDx+3WuSubTX3/zMliPjSL7hTC8UdYt8F6LmHWht0Djmi7IGJqrzIOyGy4w9gel2wsWvCBGWJtBr5nqH5QN7EZqc7RVg8wVrcra6c=) 2025-05-31 19:32:43.829507 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEibMw07YjY9YJrNXmYnFF3ijXBCVRzTgpibfFyHMPeDWcYSPFitonlGB71kC07YmzFsI7dhdm+2C6GYylgf0ZY=) 2025-05-31 19:32:43.829794 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPuY4IXc/o/G86X3R+G/E7bzxHqYPsyv2C9T5oY/eKO4) 2025-05-31 
19:32:43.830612 | orchestrator | 2025-05-31 19:32:43.831245 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-31 19:32:43.832526 | orchestrator | Saturday 31 May 2025 19:32:43 +0000 (0:00:01.045) 0:00:09.051 ********** 2025-05-31 19:32:44.825573 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCbGGYuCe/kjBYBBcSL0NVKwvG5BgKOTd3BkuorNQWqWg7vdCeT8Z1/t/UY4x2mCPLSQ809qfRVPlwzh/DQgDGJo2HjSNXbn/ffGvXogpFWdwVy9tK5ISQjiigIIwFD+WF0b/Dd8pW1ZKSgzbYYAyBoWU+4Kjed2iUSH141y8Z1j38IgU6HBe2h0sGIv9ijlHhwGiHMtiNN3A6j4dhkyXiZdon3qAl7ZvBHRVPeHc5LAEwniisJFIlp7Fbok48quyMjkwPMSlBJ8vygaMpnRopg8QatSAjQLNp0ZXU3pIMB40opRIu1Uul9F4rtUfDR5i3nd6kGlIJ/rozYs9yncj848a6lEyncrukIE70R7UAHe7JPP63v5j7gX9ggtw26zm8Xm716znnZY3D7IwQvC9lotdd59XJEgftV3ivClOLSVMLs/4RWM7vAc7mciCq4d8mOst302z0KOef7TSp7ItSZ6w5rPZzkWb/8Ak5zep3cFH44wmtUWuR4zrRYNv+3+Hs=) 2025-05-31 19:32:44.825819 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGnWmrdzLL7bIBznL+iRftytvJjzm6XhqtwB8vuxATy1fSkY6VxJVcHh5qX4+Ii5Ioo0KObkqXkefR9ckVuh+O0=) 2025-05-31 19:32:44.826364 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAxYpWtZ7+DbEuq1rfR/1frj0/oDC0aQmn9xSwYHKTf2) 2025-05-31 19:32:44.827114 | orchestrator | 2025-05-31 19:32:44.828382 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-31 19:32:44.828727 | orchestrator | Saturday 31 May 2025 19:32:44 +0000 (0:00:00.996) 0:00:10.048 ********** 2025-05-31 19:32:45.838735 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCJjA5ESNpg8Pfaw6ckeWpMdbzKfr0U0NswEQoutFRyoINEwHVV3eEPTPV0Z3NND9hPgY/LiTM5Ct7XNclJu1Q+2VU7bZ4aKa93F7cCAyANJyprBXdDB31FgmudCVoLSsXHOngaNDemD0wCP/WdmHBr5iQuP+5/KuXaMsQZdGhpxW0F1xYdsD1TpxcGCh2uP2a1GW1EpRJmbP00AxIdVJqK0gUTZ301dIiL+plmhkc/k6u6LQQFI/yItu3qidKautbnagvq/COUsh2lQVFpJzyzde4rvwxTFxMcoSxf21aFAmE38Dzsmbib7sePPGmADdhzcmEelZI8qjLXQBs2PhnJPYsrlAmzVb2Pho8Kz+BQIyDSaMljnA6tjZVj7+G0nO8ZRUSCsnDCJr0ExSiCg8nZKy+vM3OFOyABfTcpHvTyF4L7ZTS981DPB9RMlGfc5gWj7wN2yqNI+VJYMhnJDgoGWU5QEPJLVo01x+7LkV5oIvb5g+V/tNJusw5CgUHn43s=) 2025-05-31 19:32:45.839916 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPnqX5JQ4NDkjShU2AYfNVVQtO/PWPZecyP1/aTPlN6eowFHsLcvl5rdP9E6yKRb4BFBQzbjg0Wj34/sfQ4psg8=) 2025-05-31 19:32:45.841053 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBh63Ip+CFDI9bkom8y8rUYt8FFkWYPFGSSuTXPu5v4p) 2025-05-31 19:32:45.841354 | orchestrator | 2025-05-31 19:32:45.842360 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-31 19:32:45.843097 | orchestrator | Saturday 31 May 2025 19:32:45 +0000 (0:00:01.013) 0:00:11.061 ********** 2025-05-31 19:32:46.839911 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAJJHTEz27t4JNmXCtXyo1h+QABfDhfMJFhqVkCRLIBx) 2025-05-31 19:32:46.840400 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCtPZAP5AEW1lNCRdb6Ld3SzRzKVkAH55vc+n/Ln9pn6eakeKchXZqjZYjV/IsNCggOisgWY99GwUXhGOUbZlSvNCKfK1Zfsqr/iV4ajI3ZEjkT16U2UT1+XS2lKMw+VpHWoEnxfk5b2XjbClLOMHIsJQz7TqDi+FxrhSBdyYqK1/kA+RV8ttmGXsBcXbpt6ArLGJuvsSKvuwhGKudyWBBXvvLloLRz15AcJgdiNCbSWWS0WbFEIb4usTkJ3tHVBWT1sO8SdGGLxv0VX8foJHhYsoQqKYn2gip7496hHAyTKB/26dJGFg9vcmLjQzVUAjbcGMO4Q3LCO8NQYo+5eaAsJaYxnYI2mbR82t5BECilOO3YIrNRjV3CdG15b/bWC/VGXEkOjLWGeDQX2hc3hbdhN0nTl+YYrC/MgMYoZdLhuDzN6C8BHnqne2foc03L6Y+WQaMOEufczLbB4vOihVP2MMps9X5lD3oe5AH2yPZjQlBHp4NMSRiY9oQzQzS91vM=) 2025-05-31 19:32:46.841432 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEmSKy5IkP3W6spSSQ4jXBApspsUv1fz3VYXPWlVgEp9a4Ku1rAe7EKiVhQ5cbTyOv5/iBFjbclMRU6ILrWZpH0=) 2025-05-31 19:32:46.842201 | orchestrator | 2025-05-31 19:32:46.842957 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-31 19:32:46.843717 | orchestrator | Saturday 31 May 2025 19:32:46 +0000 (0:00:01.000) 0:00:12.061 ********** 2025-05-31 19:32:47.828438 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDU4WYulKnwZKCXAOYid6jk1peVGIK/fFzWnO6keSBl2eJGc9+oFOaCj99dNIZwYy6xPx5V86ANxK3ja/4ZdncKPxEhgQhH4xsYBRZBuaVjTUoXCoYb8xwsvWE+dSNVqKWMY0jhQkup9vC+doJZdg+fFlxCMhf/bF2bqOVoeY0sNdw1kNkIwtI4tM1TVcwWAcmh0Ng6WAkbs2tXEI2OWEoG9QWNbTVTvd0dlrOjfQ4WemuDlKWMSwej63wLrFR3ozPBYdhtNKCeWnmEpLZeu03eOQE/nhOZMe5Mj+e2QMM2mRyyHFh/U09wzVNjvvIwi07WNY8ZddvmE+E99tXjL2t+1WsWX7hssd9NrsyJK/mRgK9BTXCbHoUp0Be/6djgk3dXf3jAU7r1XDihU186wus4P2hBfCqBPWs+YNFbY19wTbua8WLIp9xDFYPtsKuUNiCOrKZCsj7iLgr0P+9dsexM5spbx7Sh5d4fF4RZKegTH6u5/zjaqAAYD3vrAzBorS0=) 2025-05-31 19:32:47.828613 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJz4GvYRc8ju7QdEDs8QWuMcOVLOuGgKmwcIBAHmydRE) 2025-05-31 19:32:47.829647 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFhAJ3jlqID4WymrI+Vhi13TgBp7nX8VvMXECqwpwS1C/yc/kUzu4jv+INNH6r0OA1R81WJ+RywuClGvuO1LeOE=) 2025-05-31 19:32:47.830532 | orchestrator | 2025-05-31 19:32:47.831166 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2025-05-31 19:32:47.832784 | orchestrator | Saturday 31 May 2025 19:32:47 +0000 (0:00:00.988) 0:00:13.050 ********** 2025-05-31 19:32:52.933761 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-05-31 19:32:52.933885 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-05-31 19:32:52.934882 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-05-31 19:32:52.936243 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-05-31 19:32:52.937698 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-05-31 19:32:52.937991 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-05-31 19:32:52.938435 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-05-31 19:32:52.938855 | orchestrator | 2025-05-31 19:32:52.939371 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2025-05-31 19:32:52.939671 | orchestrator | Saturday 31 May 2025 19:32:52 +0000 (0:00:05.105) 0:00:18.155 ********** 2025-05-31 19:32:53.089384 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-05-31 19:32:53.089825 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-05-31 19:32:53.091321 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-05-31 19:32:53.091915 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-05-31 19:32:53.093294 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-05-31 19:32:53.093865 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-05-31 19:32:53.094948 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-05-31 19:32:53.096042 | orchestrator | 2025-05-31 19:32:53.096646 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-31 19:32:53.097370 | orchestrator | Saturday 31 May 2025 19:32:53 +0000 (0:00:00.157) 0:00:18.313 ********** 2025-05-31 19:32:54.126780 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIA3WaUHR4rDRm0CBAlha3maPeHCxf0252fQrhkTwR+uI) 2025-05-31 19:32:54.127564 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC6qP/l+PMlCY8bQ2KegmZ51HjT7jgAM5lqlcXjPfug/zksoIk0yAxM9MqkicRH+G33MhQ61nJLHZOfIp50arI6FXwEHZo7ppdrd8+0iF24E2OiqyURJM58XI73EteYeT3V/wCG1owAkM503s2pVclHYJLT3jFzi+w0qU7T4gmksIeRbKADKP4kAF04KaYjfHpQBlF3JXHNKgyRXiPLcrPYndDlwCCk9c7W7nyjy4Vm3yETA5Hj/3X0EEptvoNzmVZb08i7cIwBH3v3H3Z/5JmeVyc6B21EIjoHxAtoHNRO+xWU6kbFvAKQ9t3nbNm3oJ1b1CZhbtKC9uqE1Wc62C1sV3cWBmo5RcickcizYrK2Lg7ouukXm1W/K9WvALb3Hqt2oeTZYXTzBUFjSZwZF5CaC1OLQK5340JDhOAF3sIiCPO7vHL/AO6qtA7Rq83ur6HNcYOGZ1+im8iFANkEJOgZPM+AB8qIGReXCgxoY+ozN/fxOSHjV1l4AIPYc6aPp/0=) 2025-05-31 19:32:54.127941 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBHD06eLTVSIWEfpc7UGzmKfgqoPi0TGT6xFfUwihEnjyP7TgswIK46dcgtQFV35LESbAUwWqkV1VdKM+wxk2ub4=) 2025-05-31 19:32:54.128545 | orchestrator | 2025-05-31 19:32:54.129345 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-31 19:32:54.129763 | orchestrator | Saturday 31 May 2025 19:32:54 +0000 (0:00:01.035) 0:00:19.349 ********** 2025-05-31 19:32:55.130373 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDa76iSc3QhfxlVislbYBC7EIFooCtSje3cACdgSum6LJGLQhAHnv7le7TtSvYD1p2ZYrjAgNU2fSIOHH3P+DtZyB6QKhP+r9Ul25x+6xb0WpzMJzWM/e0njkGnWpZaL+wdB88/bmFAnBubdcTl2kyjhSjjHahEXr46rE7uBQxrjXkTOSjYleFhLip40+fKA0Xbahjf80g7Yx541zwAckEwGOHIxZOf/7EOO3DQPlQk3HSkqQiZRAi/+S8kK4yI3valQPUGNbGh3pCt9hks66PP5vZq8H1/YfqpLj81P+C3K61QpLD2GzX1pxGf5dl3+4BqEmJXSVYNdiXTjWq8EEBP3biDCJ7hcEyJNFVAXdUlociik0h1M6rkv5WjFmqjpZ5u9Hnc6prB+8Nb32zYBTEkBQ7zAQKQKwEUFaoZYVi4ZreK9E8oV/Zlk5xg9PUojVVdCQry4G8odY5pHYkZlmbiUDDXq++0TqyncfYPMfeUTfViNwz37biAEYLMq91CQAE=) 2025-05-31 19:32:55.130500 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBORpUKHA7wXS9xXyejLc0VZVKMN3wfA7Nx6YijBKg9qaFMfiIcJGatZdOxZK7sWekIDlz1qC4plDidCMR2AO7Xg=) 2025-05-31 19:32:55.131307 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDGBHry0M6hHnYGS+3JL3bzYTZCLqu1LOyKLYMMXOqYx) 2025-05-31 19:32:55.132322 | orchestrator | 2025-05-31 19:32:55.133135 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-31 19:32:55.133237 | orchestrator | Saturday 31 May 2025 19:32:55 +0000 (0:00:01.003) 0:00:20.353 ********** 2025-05-31 19:32:56.160201 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEibMw07YjY9YJrNXmYnFF3ijXBCVRzTgpibfFyHMPeDWcYSPFitonlGB71kC07YmzFsI7dhdm+2C6GYylgf0ZY=) 2025-05-31 19:32:56.160374 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCkWHRoQFsCPIZoMUhVMA6hPeA6sCLHT8HIwjtFeM/LRjW55mcAfBu7GrjMeLEtGpGGV/+vXFWWmyAtUjZYOTW89zx2roVjniG4hFJEHSwfAXLnhpkoqjaaIFUSS+dyI+ays/QupfNF8LAXMHTY8C+P0D3g3oGxPStGaejvEDAvg6BWc6cLPQTtWrp352Lo14VSkjiMZ6yO8828knXAvAR06e9S8WLFLi7Bo+KszKeMHkq6xs6dD1l9RMZOUS4GGCSykwTaXXByoLDEvCzvZzJBC1EQuUX9lvPy3vHo41qlKHE6O5Ld1w1zhJt5TMObB2Do6eDbUo8KECUF8PzxXi1Y/kppAvnb2z9iBxs2eF4byIag9mQTm9jAR4tlj8UjrwkMUrfKB+2SFaibe+O4MqxCERjosEMDx+3WuSubTX3/zMliPjSL7hTC8UdYt8F6LmHWht0Djmi7IGJqrzIOyGy4w9gel2wsWvCBGWJtBr5nqH5QN7EZqc7RVg8wVrcra6c=) 2025-05-31 19:32:56.160395 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPuY4IXc/o/G86X3R+G/E7bzxHqYPsyv2C9T5oY/eKO4) 2025-05-31 19:32:56.160504 | orchestrator | 2025-05-31 19:32:56.161165 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-31 19:32:56.161202 | orchestrator | Saturday 31 May 2025 19:32:56 +0000 (0:00:01.028) 0:00:21.381 ********** 2025-05-31 19:32:57.152366 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGnWmrdzLL7bIBznL+iRftytvJjzm6XhqtwB8vuxATy1fSkY6VxJVcHh5qX4+Ii5Ioo0KObkqXkefR9ckVuh+O0=) 2025-05-31 19:32:57.154159 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCbGGYuCe/kjBYBBcSL0NVKwvG5BgKOTd3BkuorNQWqWg7vdCeT8Z1/t/UY4x2mCPLSQ809qfRVPlwzh/DQgDGJo2HjSNXbn/ffGvXogpFWdwVy9tK5ISQjiigIIwFD+WF0b/Dd8pW1ZKSgzbYYAyBoWU+4Kjed2iUSH141y8Z1j38IgU6HBe2h0sGIv9ijlHhwGiHMtiNN3A6j4dhkyXiZdon3qAl7ZvBHRVPeHc5LAEwniisJFIlp7Fbok48quyMjkwPMSlBJ8vygaMpnRopg8QatSAjQLNp0ZXU3pIMB40opRIu1Uul9F4rtUfDR5i3nd6kGlIJ/rozYs9yncj848a6lEyncrukIE70R7UAHe7JPP63v5j7gX9ggtw26zm8Xm716znnZY3D7IwQvC9lotdd59XJEgftV3ivClOLSVMLs/4RWM7vAc7mciCq4d8mOst302z0KOef7TSp7ItSZ6w5rPZzkWb/8Ak5zep3cFH44wmtUWuR4zrRYNv+3+Hs=) 2025-05-31 
19:32:57.154921 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAxYpWtZ7+DbEuq1rfR/1frj0/oDC0aQmn9xSwYHKTf2) 2025-05-31 19:32:57.155597 | orchestrator | 2025-05-31 19:32:57.156571 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-31 19:32:57.157471 | orchestrator | Saturday 31 May 2025 19:32:57 +0000 (0:00:00.993) 0:00:22.375 ********** 2025-05-31 19:32:58.176890 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBh63Ip+CFDI9bkom8y8rUYt8FFkWYPFGSSuTXPu5v4p) 2025-05-31 19:32:58.177185 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCJjA5ESNpg8Pfaw6ckeWpMdbzKfr0U0NswEQoutFRyoINEwHVV3eEPTPV0Z3NND9hPgY/LiTM5Ct7XNclJu1Q+2VU7bZ4aKa93F7cCAyANJyprBXdDB31FgmudCVoLSsXHOngaNDemD0wCP/WdmHBr5iQuP+5/KuXaMsQZdGhpxW0F1xYdsD1TpxcGCh2uP2a1GW1EpRJmbP00AxIdVJqK0gUTZ301dIiL+plmhkc/k6u6LQQFI/yItu3qidKautbnagvq/COUsh2lQVFpJzyzde4rvwxTFxMcoSxf21aFAmE38Dzsmbib7sePPGmADdhzcmEelZI8qjLXQBs2PhnJPYsrlAmzVb2Pho8Kz+BQIyDSaMljnA6tjZVj7+G0nO8ZRUSCsnDCJr0ExSiCg8nZKy+vM3OFOyABfTcpHvTyF4L7ZTS981DPB9RMlGfc5gWj7wN2yqNI+VJYMhnJDgoGWU5QEPJLVo01x+7LkV5oIvb5g+V/tNJusw5CgUHn43s=) 2025-05-31 19:32:58.177737 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPnqX5JQ4NDkjShU2AYfNVVQtO/PWPZecyP1/aTPlN6eowFHsLcvl5rdP9E6yKRb4BFBQzbjg0Wj34/sfQ4psg8=) 2025-05-31 19:32:58.178180 | orchestrator | 2025-05-31 19:32:58.179379 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-31 19:32:58.180377 | orchestrator | Saturday 31 May 2025 19:32:58 +0000 (0:00:01.023) 0:00:23.398 ********** 2025-05-31 19:32:59.208610 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAJJHTEz27t4JNmXCtXyo1h+QABfDhfMJFhqVkCRLIBx) 2025-05-31 19:32:59.209541 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCtPZAP5AEW1lNCRdb6Ld3SzRzKVkAH55vc+n/Ln9pn6eakeKchXZqjZYjV/IsNCggOisgWY99GwUXhGOUbZlSvNCKfK1Zfsqr/iV4ajI3ZEjkT16U2UT1+XS2lKMw+VpHWoEnxfk5b2XjbClLOMHIsJQz7TqDi+FxrhSBdyYqK1/kA+RV8ttmGXsBcXbpt6ArLGJuvsSKvuwhGKudyWBBXvvLloLRz15AcJgdiNCbSWWS0WbFEIb4usTkJ3tHVBWT1sO8SdGGLxv0VX8foJHhYsoQqKYn2gip7496hHAyTKB/26dJGFg9vcmLjQzVUAjbcGMO4Q3LCO8NQYo+5eaAsJaYxnYI2mbR82t5BECilOO3YIrNRjV3CdG15b/bWC/VGXEkOjLWGeDQX2hc3hbdhN0nTl+YYrC/MgMYoZdLhuDzN6C8BHnqne2foc03L6Y+WQaMOEufczLbB4vOihVP2MMps9X5lD3oe5AH2yPZjQlBHp4NMSRiY9oQzQzS91vM=) 2025-05-31 19:32:59.210892 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEmSKy5IkP3W6spSSQ4jXBApspsUv1fz3VYXPWlVgEp9a4Ku1rAe7EKiVhQ5cbTyOv5/iBFjbclMRU6ILrWZpH0=) 2025-05-31 19:32:59.211280 | orchestrator | 2025-05-31 19:32:59.212298 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-31 19:32:59.212552 | orchestrator | Saturday 31 May 2025 19:32:59 +0000 (0:00:01.032) 0:00:24.431 ********** 2025-05-31 19:33:00.258426 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJz4GvYRc8ju7QdEDs8QWuMcOVLOuGgKmwcIBAHmydRE) 2025-05-31 19:33:00.258607 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDU4WYulKnwZKCXAOYid6jk1peVGIK/fFzWnO6keSBl2eJGc9+oFOaCj99dNIZwYy6xPx5V86ANxK3ja/4ZdncKPxEhgQhH4xsYBRZBuaVjTUoXCoYb8xwsvWE+dSNVqKWMY0jhQkup9vC+doJZdg+fFlxCMhf/bF2bqOVoeY0sNdw1kNkIwtI4tM1TVcwWAcmh0Ng6WAkbs2tXEI2OWEoG9QWNbTVTvd0dlrOjfQ4WemuDlKWMSwej63wLrFR3ozPBYdhtNKCeWnmEpLZeu03eOQE/nhOZMe5Mj+e2QMM2mRyyHFh/U09wzVNjvvIwi07WNY8ZddvmE+E99tXjL2t+1WsWX7hssd9NrsyJK/mRgK9BTXCbHoUp0Be/6djgk3dXf3jAU7r1XDihU186wus4P2hBfCqBPWs+YNFbY19wTbua8WLIp9xDFYPtsKuUNiCOrKZCsj7iLgr0P+9dsexM5spbx7Sh5d4fF4RZKegTH6u5/zjaqAAYD3vrAzBorS0=) 2025-05-31 19:33:00.258707 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFhAJ3jlqID4WymrI+Vhi13TgBp7nX8VvMXECqwpwS1C/yc/kUzu4jv+INNH6r0OA1R81WJ+RywuClGvuO1LeOE=) 2025-05-31 19:33:00.259525 | orchestrator | 2025-05-31 19:33:00.260041 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2025-05-31 19:33:00.260690 | orchestrator | Saturday 31 May 2025 19:33:00 +0000 (0:00:01.049) 0:00:25.480 ********** 2025-05-31 19:33:00.410236 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-05-31 19:33:00.410968 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-05-31 19:33:00.411238 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-05-31 19:33:00.412336 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-05-31 19:33:00.413293 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-05-31 19:33:00.413940 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-05-31 19:33:00.414683 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-05-31 19:33:00.415534 | orchestrator | skipping: [testbed-manager] 2025-05-31 19:33:00.415843 | orchestrator | 2025-05-31 19:33:00.416480 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2025-05-31 19:33:00.416940 | orchestrator | Saturday 31 May 2025 19:33:00 +0000 (0:00:00.153) 0:00:25.634 ********** 2025-05-31 19:33:00.480754 | orchestrator | skipping: [testbed-manager] 2025-05-31 19:33:00.481171 | orchestrator | 2025-05-31 19:33:00.482347 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2025-05-31 19:33:00.483353 | orchestrator | Saturday 31 May 2025 19:33:00 +0000 (0:00:00.070) 0:00:25.704 ********** 2025-05-31 19:33:00.528773 | orchestrator | skipping: [testbed-manager] 2025-05-31 19:33:00.529718 | orchestrator | 2025-05-31 19:33:00.530884 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2025-05-31 19:33:00.531708 | orchestrator | Saturday 31 May 2025 19:33:00 +0000 (0:00:00.047) 0:00:25.752 ********** 2025-05-31 19:33:01.160533 | orchestrator | changed: [testbed-manager] 2025-05-31 19:33:01.160765 | orchestrator | 2025-05-31 19:33:01.161485 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-31 19:33:01.161511 | orchestrator | 2025-05-31 19:33:01 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-31 19:33:01.161524 | orchestrator | 2025-05-31 19:33:01 | INFO  | Please wait and do not abort execution. 
2025-05-31 19:33:01.162291 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-05-31 19:33:01.162698 | orchestrator |
2025-05-31 19:33:01.162720 | orchestrator |
2025-05-31 19:33:01.162732 | orchestrator | TASKS RECAP ********************************************************************
2025-05-31 19:33:01.164435 | orchestrator | Saturday 31 May 2025 19:33:01 +0000 (0:00:00.629) 0:00:26.381 **********
2025-05-31 19:33:01.164459 | orchestrator | ===============================================================================
2025-05-31 19:33:01.164808 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.60s
2025-05-31 19:33:01.164829 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.11s
2025-05-31 19:33:01.165396 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s
2025-05-31 19:33:01.166212 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s
2025-05-31 19:33:01.166465 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s
2025-05-31 19:33:01.166894 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s
2025-05-31 19:33:01.167180 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s
2025-05-31 19:33:01.167677 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s
2025-05-31 19:33:01.167697 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s
2025-05-31 19:33:01.168036 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s
2025-05-31 19:33:01.168446 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s
2025-05-31 19:33:01.168847 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s
2025-05-31 19:33:01.169153 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s
2025-05-31 19:33:01.169649 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s
2025-05-31 19:33:01.169892 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.99s
2025-05-31 19:33:01.170405 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.99s
2025-05-31 19:33:01.170427 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.63s
2025-05-31 19:33:01.170668 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.17s
2025-05-31 19:33:01.171089 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.16s
2025-05-31 19:33:01.171110 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.15s
2025-05-31 19:33:01.608714 | orchestrator | + osism apply squid
2025-05-31 19:33:03.194162 | orchestrator | Registering Redlock._acquired_script
2025-05-31 19:33:03.194345 | orchestrator | Registering Redlock._extend_script
2025-05-31 19:33:03.194362 | orchestrator | Registering Redlock._release_script
2025-05-31 19:33:03.250655 | orchestrator | 2025-05-31 19:33:03 | INFO  | Task 9f3e84d7-c59c-4860-8f0e-9d3c8959ef89 (squid) was prepared for execution.
2025-05-31 19:33:03.250736 | orchestrator | 2025-05-31 19:33:03 | INFO  | It takes a moment until task 9f3e84d7-c59c-4860-8f0e-9d3c8959ef89 (squid) has been started and output is visible here.
2025-05-31 19:33:06.767553 | orchestrator |
2025-05-31 19:33:06.767748 | orchestrator | PLAY [Apply role squid] ********************************************************
2025-05-31 19:33:06.769136 | orchestrator |
2025-05-31 19:33:06.770701 | orchestrator | TASK [osism.services.squid : Include install tasks] ****************************
2025-05-31 19:33:06.771703 | orchestrator | Saturday 31 May 2025 19:33:06 +0000 (0:00:00.120) 0:00:00.120 **********
2025-05-31 19:33:06.830830 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager
2025-05-31 19:33:06.830903 | orchestrator |
2025-05-31 19:33:06.831605 | orchestrator | TASK [osism.services.squid : Install required packages] ************************
2025-05-31 19:33:06.832469 | orchestrator | Saturday 31 May 2025 19:33:06 +0000 (0:00:00.065) 0:00:00.186 **********
2025-05-31 19:33:07.875430 | orchestrator | ok: [testbed-manager]
2025-05-31 19:33:07.875575 | orchestrator |
2025-05-31 19:33:07.875655 | orchestrator | TASK [osism.services.squid : Create required directories] **********************
2025-05-31 19:33:07.876257 | orchestrator | Saturday 31 May 2025 19:33:07 +0000 (0:00:01.042) 0:00:01.229 **********
2025-05-31 19:33:08.841681 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration)
2025-05-31 19:33:08.841886 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d)
2025-05-31 19:33:08.842430 | orchestrator | ok: [testbed-manager] => (item=/opt/squid)
2025-05-31 19:33:08.843083 | orchestrator |
2025-05-31 19:33:08.843830 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] *******************
2025-05-31 19:33:08.844556 | orchestrator | Saturday 31 May 2025 19:33:08 +0000 (0:00:00.965) 0:00:02.195 **********
2025-05-31 19:33:09.716546 | orchestrator | changed: [testbed-manager] => (item=osism.conf)
2025-05-31 19:33:09.716677 | orchestrator |
2025-05-31 19:33:09.717129 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] ***
2025-05-31 19:33:09.717975 | orchestrator | Saturday 31 May 2025 19:33:09 +0000 (0:00:00.311) 0:00:03.071 **********
2025-05-31 19:33:10.028914 | orchestrator | ok: [testbed-manager]
2025-05-31 19:33:10.029074 | orchestrator |
2025-05-31 19:33:10.029265 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] *********************
2025-05-31 19:33:10.029943 | orchestrator | Saturday 31 May 2025 19:33:10 +0000 (0:00:00.311) 0:00:03.383 **********
2025-05-31 19:33:10.851379 | orchestrator | changed: [testbed-manager]
2025-05-31 19:33:10.851600 | orchestrator |
2025-05-31 19:33:10.852443 | orchestrator | TASK [osism.services.squid : Manage squid service] *****************************
2025-05-31 19:33:10.853754 | orchestrator | Saturday 31 May 2025 19:33:10 +0000 (0:00:00.823) 0:00:04.206 **********
2025-05-31 19:33:42.241852 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left).
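The FAILED - RETRYING line above is Ansible's until/retries loop rather than a hard failure: the task is re-evaluated until its condition holds or the ten retries are exhausted, and it eventually succeeded, hence the ok that follows. The retry semantics correspond to this shell pattern; the container name and delay below are assumptions, not taken from the role:

    attempt=1
    # Re-check the squid container until Docker reports it healthy.
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' squid 2>/dev/null)" == healthy ]]; do
        if (( attempt >= 10 )); then
            echo "squid service did not come up" >&2
            exit 1
        fi
        attempt=$((attempt + 1))
        sleep 5
    done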
2025-05-31 19:33:42.241960 | orchestrator | ok: [testbed-manager]
2025-05-31 19:33:42.241972 | orchestrator |
2025-05-31 19:33:42.242047 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] *****************
2025-05-31 19:33:42.243653 | orchestrator | Saturday 31 May 2025 19:33:42 +0000 (0:00:31.385) 0:00:35.592 **********
2025-05-31 19:33:54.664856 | orchestrator | changed: [testbed-manager]
2025-05-31 19:33:54.665275 | orchestrator |
2025-05-31 19:33:54.665592 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] *******
2025-05-31 19:33:54.667691 | orchestrator | Saturday 31 May 2025 19:33:54 +0000 (0:00:12.422) 0:00:48.014 **********
2025-05-31 19:34:54.734757 | orchestrator | Pausing for 60 seconds
2025-05-31 19:34:54.734881 | orchestrator | changed: [testbed-manager]
2025-05-31 19:34:54.734898 | orchestrator |
2025-05-31 19:34:54.735041 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] ***
2025-05-31 19:34:54.735089 | orchestrator | Saturday 31 May 2025 19:34:54 +0000 (0:01:00.068) 0:01:48.083 **********
2025-05-31 19:34:54.788195 | orchestrator | ok: [testbed-manager]
2025-05-31 19:34:54.789033 | orchestrator |
2025-05-31 19:34:54.789592 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] *****
2025-05-31 19:34:54.790866 | orchestrator | Saturday 31 May 2025 19:34:54 +0000 (0:00:00.059) 0:01:48.143 **********
2025-05-31 19:34:55.385335 | orchestrator | changed: [testbed-manager]
2025-05-31 19:34:55.387433 | orchestrator |
2025-05-31 19:34:55.387466 | orchestrator | PLAY RECAP *********************************************************************
2025-05-31 19:34:55.390442 | orchestrator | 2025-05-31 19:34:55 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-31 19:34:55.390468 | orchestrator | 2025-05-31 19:34:55 | INFO  | Please wait and do not abort execution.
2025-05-31 19:34:55.392326 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-31 19:34:55.395387 | orchestrator |
2025-05-31 19:34:55.396443 | orchestrator |
2025-05-31 19:34:55.396730 | orchestrator | TASKS RECAP ********************************************************************
2025-05-31 19:34:55.397949 | orchestrator | Saturday 31 May 2025 19:34:55 +0000 (0:00:00.596) 0:01:48.740 **********
2025-05-31 19:34:55.398555 | orchestrator | ===============================================================================
2025-05-31 19:34:55.398878 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.07s
2025-05-31 19:34:55.399636 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 31.39s
2025-05-31 19:34:55.400514 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.42s
2025-05-31 19:34:55.401007 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.04s
2025-05-31 19:34:55.401995 | orchestrator | osism.services.squid : Create required directories ---------------------- 0.97s
2025-05-31 19:34:55.402233 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 0.88s
2025-05-31 19:34:55.402923 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.82s
2025-05-31 19:34:55.403326 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.60s
2025-05-31 19:34:55.404373 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.31s
2025-05-31 19:34:55.404880 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.07s
2025-05-31 19:34:55.405641 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.06s
2025-05-31 19:34:55.867228 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-05-31 19:34:55.867919 | orchestrator | ++ semver latest 9.0.0
2025-05-31 19:34:55.913226 | orchestrator | + [[ -1 -lt 0 ]]
2025-05-31 19:34:55.913286 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-05-31 19:34:55.913943 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes
2025-05-31 19:34:57.523884 | orchestrator | Registering Redlock._acquired_script
2025-05-31 19:34:57.523988 | orchestrator | Registering Redlock._extend_script
2025-05-31 19:34:57.524004 | orchestrator | Registering Redlock._release_script
2025-05-31 19:34:57.582236 | orchestrator | 2025-05-31 19:34:57 | INFO  | Task a6451f74-ccd4-403c-ac52-939a3e5313f4 (operator) was prepared for execution.
2025-05-31 19:34:57.582315 | orchestrator | 2025-05-31 19:34:57 | INFO  | It takes a moment until task a6451f74-ccd4-403c-ac52-939a3e5313f4 (operator) has been started and output is visible here.
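The osism apply operator run queued above prepares the operator account on the six testbed nodes, connecting as ubuntu (-u ubuntu) because the operator user does not exist there yet. Condensed to shell, the tasks that follow amount to roughly this sketch; the user name matches the dragon home directory seen earlier in this log, while the login shell and the sudoers content are assumptions:

    groupadd dragon
    useradd --create-home --gid dragon --shell /bin/bash dragon
    usermod -aG adm,sudo dragon        # "Add user to additional groups"
    # "Copy user sudoers file"; passwordless sudo is assumed here.
    echo 'dragon ALL=(ALL) NOPASSWD: ALL' > /etc/sudoers.d/dragon
    install -d -m 0700 -o dragon -g dragon /home/dragon/.ssh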
2025-05-31 19:35:01.442214 | orchestrator |
2025-05-31 19:35:01.442854 | orchestrator | PLAY [Make ssh pipelining working] *********************************************
2025-05-31 19:35:01.443914 | orchestrator |
2025-05-31 19:35:01.444820 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-05-31 19:35:01.445548 | orchestrator | Saturday 31 May 2025 19:35:01 +0000 (0:00:00.147) 0:00:00.147 **********
2025-05-31 19:35:04.609627 | orchestrator | ok: [testbed-node-2]
2025-05-31 19:35:04.609757 | orchestrator | ok: [testbed-node-1]
2025-05-31 19:35:04.609828 | orchestrator | ok: [testbed-node-3]
2025-05-31 19:35:04.610061 | orchestrator | ok: [testbed-node-4]
2025-05-31 19:35:04.610913 | orchestrator | ok: [testbed-node-0]
2025-05-31 19:35:04.613170 | orchestrator | ok: [testbed-node-5]
2025-05-31 19:35:04.613470 | orchestrator |
2025-05-31 19:35:04.618483 | orchestrator | TASK [Do not require tty for all users] ****************************************
2025-05-31 19:35:04.619775 | orchestrator | Saturday 31 May 2025 19:35:04 +0000 (0:00:03.168) 0:00:03.315 **********
2025-05-31 19:35:05.407921 | orchestrator | ok: [testbed-node-4]
2025-05-31 19:35:05.409265 | orchestrator | ok: [testbed-node-3]
2025-05-31 19:35:05.410601 | orchestrator | ok: [testbed-node-5]
2025-05-31 19:35:05.410632 | orchestrator | ok: [testbed-node-0]
2025-05-31 19:35:05.411610 | orchestrator | ok: [testbed-node-2]
2025-05-31 19:35:05.412162 | orchestrator | ok: [testbed-node-1]
2025-05-31 19:35:05.413017 | orchestrator |
2025-05-31 19:35:05.414115 | orchestrator | PLAY [Apply role operator] *****************************************************
2025-05-31 19:35:05.416043 | orchestrator |
2025-05-31 19:35:05.416499 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2025-05-31 19:35:05.417782 | orchestrator | Saturday 31 May 2025 19:35:05 +0000 (0:00:00.799) 0:00:04.114 **********
2025-05-31 19:35:05.482197 | orchestrator | ok: [testbed-node-0]
2025-05-31 19:35:05.502613 | orchestrator | ok: [testbed-node-1]
2025-05-31 19:35:05.527814 | orchestrator | ok: [testbed-node-2]
2025-05-31 19:35:05.579851 | orchestrator | ok: [testbed-node-3]
2025-05-31 19:35:05.580243 | orchestrator | ok: [testbed-node-4]
2025-05-31 19:35:05.581518 | orchestrator | ok: [testbed-node-5]
2025-05-31 19:35:05.581981 | orchestrator |
2025-05-31 19:35:05.582796 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2025-05-31 19:35:05.583410 | orchestrator | Saturday 31 May 2025 19:35:05 +0000 (0:00:00.172) 0:00:04.287 **********
2025-05-31 19:35:05.650851 | orchestrator | ok: [testbed-node-0]
2025-05-31 19:35:05.670880 | orchestrator | ok: [testbed-node-1]
2025-05-31 19:35:05.692728 | orchestrator | ok: [testbed-node-2]
2025-05-31 19:35:05.755510 | orchestrator | ok: [testbed-node-3]
2025-05-31 19:35:05.756724 | orchestrator | ok: [testbed-node-4]
2025-05-31 19:35:05.757032 | orchestrator | ok: [testbed-node-5]
2025-05-31 19:35:05.757509 | orchestrator |
2025-05-31 19:35:05.757894 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2025-05-31 19:35:05.758531 | orchestrator | Saturday 31 May 2025 19:35:05 +0000 (0:00:00.176) 0:00:04.463 **********
2025-05-31 19:35:06.371844 | orchestrator | changed: [testbed-node-3]
2025-05-31 19:35:06.372021 | orchestrator | changed: [testbed-node-5]
2025-05-31 19:35:06.372055 | orchestrator | changed: [testbed-node-1]
2025-05-31 19:35:06.372145 | orchestrator | changed: [testbed-node-2]
2025-05-31 19:35:06.373039 | orchestrator | changed: [testbed-node-4]
2025-05-31 19:35:06.373677 | orchestrator | changed: [testbed-node-0]
2025-05-31 19:35:06.374231 | orchestrator |
2025-05-31 19:35:06.374933 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2025-05-31 19:35:06.375408 | orchestrator | Saturday 31 May 2025 19:35:06 +0000 (0:00:00.614) 0:00:05.078 **********
2025-05-31 19:35:07.158298 | orchestrator | changed: [testbed-node-1]
2025-05-31 19:35:07.158459 | orchestrator | changed: [testbed-node-4]
2025-05-31 19:35:07.158549 | orchestrator | changed: [testbed-node-5]
2025-05-31 19:35:07.159637 | orchestrator | changed: [testbed-node-3]
2025-05-31 19:35:07.162105 | orchestrator | changed: [testbed-node-0]
2025-05-31 19:35:07.162834 | orchestrator | changed: [testbed-node-2]
2025-05-31 19:35:07.163263 | orchestrator |
2025-05-31 19:35:07.164311 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2025-05-31 19:35:07.164335 | orchestrator | Saturday 31 May 2025 19:35:07 +0000 (0:00:00.784) 0:00:05.862 **********
2025-05-31 19:35:08.281483 | orchestrator | changed: [testbed-node-1] => (item=adm)
2025-05-31 19:35:08.282239 | orchestrator | changed: [testbed-node-0] => (item=adm)
2025-05-31 19:35:08.284587 | orchestrator | changed: [testbed-node-2] => (item=adm)
2025-05-31 19:35:08.285819 | orchestrator | changed: [testbed-node-3] => (item=adm)
2025-05-31 19:35:08.286636 | orchestrator | changed: [testbed-node-4] => (item=adm)
2025-05-31 19:35:08.288593 | orchestrator | changed: [testbed-node-5] => (item=adm)
2025-05-31 19:35:08.288744 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2025-05-31 19:35:08.289762 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2025-05-31 19:35:08.290518 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2025-05-31 19:35:08.291073 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2025-05-31 19:35:08.291579 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2025-05-31 19:35:08.292341 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2025-05-31 19:35:08.292944 | orchestrator |
2025-05-31 19:35:08.293745 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2025-05-31 19:35:08.293923 | orchestrator | Saturday 31 May 2025 19:35:08 +0000 (0:00:01.124) 0:00:06.987 **********
2025-05-31 19:35:09.593659 | orchestrator | changed: [testbed-node-3]
2025-05-31 19:35:09.593790 | orchestrator | changed: [testbed-node-0]
2025-05-31 19:35:09.593807 | orchestrator | changed: [testbed-node-5]
2025-05-31 19:35:09.593818 | orchestrator | changed: [testbed-node-4]
2025-05-31 19:35:09.593898 | orchestrator | changed: [testbed-node-1]
2025-05-31 19:35:09.594103 | orchestrator | changed: [testbed-node-2]
2025-05-31 19:35:09.594527 | orchestrator |
2025-05-31 19:35:09.594868 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2025-05-31 19:35:09.595331 | orchestrator | Saturday 31 May 2025 19:35:09 +0000 (0:00:01.310) 0:00:08.297 **********
2025-05-31 19:35:10.729522 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2025-05-31 19:35:10.729885 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2025-05-31 19:35:10.730448 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2025-05-31 19:35:10.962716 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2025-05-31 19:35:10.962910 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2025-05-31 19:35:10.965105 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2025-05-31 19:35:10.965143 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2025-05-31 19:35:10.965273 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2025-05-31 19:35:10.966155 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2025-05-31 19:35:10.966766 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2025-05-31 19:35:10.967395 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2025-05-31 19:35:10.968003 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2025-05-31 19:35:10.968201 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2025-05-31 19:35:10.968729 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2025-05-31 19:35:10.969029 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2025-05-31 19:35:10.969701 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2025-05-31 19:35:10.970101 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2025-05-31 19:35:10.970395 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2025-05-31 19:35:10.970950 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2025-05-31 19:35:10.971482 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2025-05-31 19:35:10.971714 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2025-05-31 19:35:10.972287 | orchestrator |
2025-05-31 19:35:10.972551 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2025-05-31 19:35:10.973105 | orchestrator | Saturday 31 May 2025 19:35:10 +0000 (0:00:01.371) 0:00:09.669 **********
2025-05-31 19:35:11.541723 | orchestrator | changed: [testbed-node-5]
2025-05-31 19:35:11.542132 | orchestrator | changed: [testbed-node-4]
2025-05-31 19:35:11.542918 | orchestrator | changed: [testbed-node-3]
2025-05-31 19:35:11.543607 | orchestrator | changed: [testbed-node-1]
2025-05-31 19:35:11.544284 | orchestrator | changed: [testbed-node-0]
2025-05-31 19:35:11.544750 | orchestrator | changed: [testbed-node-2]
2025-05-31 19:35:11.545429 | orchestrator |
2025-05-31 19:35:11.546206 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2025-05-31 19:35:11.546593 | orchestrator | Saturday 31 May 2025 19:35:11 +0000 (0:00:00.580) 0:00:10.249 **********
2025-05-31 19:35:11.656745 | orchestrator | skipping: [testbed-node-0]
2025-05-31 19:35:11.685707 | orchestrator | skipping: [testbed-node-1]
2025-05-31 19:35:11.745416 | orchestrator | skipping: [testbed-node-2]
2025-05-31 19:35:11.745972 | orchestrator | skipping: [testbed-node-3]
2025-05-31 19:35:11.747024 | orchestrator | skipping: [testbed-node-4]
2025-05-31 19:35:11.747507 | orchestrator | skipping: [testbed-node-5]
2025-05-31 19:35:11.747953 | orchestrator |
2025-05-31 19:35:11.748594 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2025-05-31 19:35:11.754413 | orchestrator | Saturday 31 May 2025 19:35:11 +0000 (0:00:00.203) 0:00:10.453 ********** 2025-05-31 19:35:12.404269 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-05-31 19:35:12.404679 | orchestrator | changed: [testbed-node-2] 2025-05-31 19:35:12.405561 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-05-31 19:35:12.405959 | orchestrator | changed: [testbed-node-3] 2025-05-31 19:35:12.406586 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-31 19:35:12.406940 | orchestrator | changed: [testbed-node-4] 2025-05-31 19:35:12.407407 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-05-31 19:35:12.407891 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-31 19:35:12.408197 | orchestrator | changed: [testbed-node-1] 2025-05-31 19:35:12.408650 | orchestrator | changed: [testbed-node-5] 2025-05-31 19:35:12.408977 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-31 19:35:12.409451 | orchestrator | changed: [testbed-node-0] 2025-05-31 19:35:12.409665 | orchestrator | 2025-05-31 19:35:12.410071 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-05-31 19:35:12.410535 | orchestrator | Saturday 31 May 2025 19:35:12 +0000 (0:00:00.658) 0:00:11.111 ********** 2025-05-31 19:35:12.470093 | orchestrator | skipping: [testbed-node-0] 2025-05-31 19:35:12.496648 | orchestrator | skipping: [testbed-node-1] 2025-05-31 19:35:12.521910 | orchestrator | skipping: [testbed-node-2] 2025-05-31 19:35:12.547887 | orchestrator | skipping: [testbed-node-3] 2025-05-31 19:35:12.548470 | orchestrator | skipping: [testbed-node-4] 2025-05-31 19:35:12.548955 | orchestrator | skipping: [testbed-node-5] 2025-05-31 19:35:12.549744 | orchestrator | 2025-05-31 19:35:12.550509 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-05-31 19:35:12.551671 | orchestrator | Saturday 31 May 2025 19:35:12 +0000 (0:00:00.144) 0:00:11.256 ********** 2025-05-31 19:35:12.599223 | orchestrator | skipping: [testbed-node-0] 2025-05-31 19:35:12.624705 | orchestrator | skipping: [testbed-node-1] 2025-05-31 19:35:12.647803 | orchestrator | skipping: [testbed-node-2] 2025-05-31 19:35:12.673834 | orchestrator | skipping: [testbed-node-3] 2025-05-31 19:35:12.707264 | orchestrator | skipping: [testbed-node-4] 2025-05-31 19:35:12.707813 | orchestrator | skipping: [testbed-node-5] 2025-05-31 19:35:12.708583 | orchestrator | 2025-05-31 19:35:12.709024 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-05-31 19:35:12.709670 | orchestrator | Saturday 31 May 2025 19:35:12 +0000 (0:00:00.159) 0:00:11.416 ********** 2025-05-31 19:35:12.762180 | orchestrator | skipping: [testbed-node-0] 2025-05-31 19:35:12.785702 | orchestrator | skipping: [testbed-node-1] 2025-05-31 19:35:12.808335 | orchestrator | skipping: [testbed-node-2] 2025-05-31 19:35:12.833469 | orchestrator | skipping: [testbed-node-3] 2025-05-31 19:35:12.864791 | orchestrator | skipping: [testbed-node-4] 2025-05-31 19:35:12.865465 | orchestrator | skipping: [testbed-node-5] 2025-05-31 19:35:12.865784 | orchestrator | 2025-05-31 19:35:12.866325 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-05-31 19:35:12.866940 | orchestrator | Saturday 31 May 2025 19:35:12 +0000 (0:00:00.157) 0:00:11.573 ********** 2025-05-31 19:35:13.550582 | orchestrator | changed: [testbed-node-0] 2025-05-31 
19:35:13.550731 | orchestrator | changed: [testbed-node-3] 2025-05-31 19:35:13.550842 | orchestrator | changed: [testbed-node-2] 2025-05-31 19:35:13.550908 | orchestrator | changed: [testbed-node-4] 2025-05-31 19:35:13.551471 | orchestrator | changed: [testbed-node-1] 2025-05-31 19:35:13.552118 | orchestrator | changed: [testbed-node-5] 2025-05-31 19:35:13.552735 | orchestrator | 2025-05-31 19:35:13.553391 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-05-31 19:35:13.554012 | orchestrator | Saturday 31 May 2025 19:35:13 +0000 (0:00:00.671) 0:00:12.245 ********** 2025-05-31 19:35:13.647230 | orchestrator | skipping: [testbed-node-0] 2025-05-31 19:35:13.664679 | orchestrator | skipping: [testbed-node-1] 2025-05-31 19:35:13.763628 | orchestrator | skipping: [testbed-node-2] 2025-05-31 19:35:13.765070 | orchestrator | skipping: [testbed-node-3] 2025-05-31 19:35:13.766628 | orchestrator | skipping: [testbed-node-4] 2025-05-31 19:35:13.768259 | orchestrator | skipping: [testbed-node-5] 2025-05-31 19:35:13.769529 | orchestrator | 2025-05-31 19:35:13.773748 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-31 19:35:13.773797 | orchestrator | 2025-05-31 19:35:13 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-31 19:35:13.773812 | orchestrator | 2025-05-31 19:35:13 | INFO  | Please wait and do not abort execution. 2025-05-31 19:35:13.775257 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-31 19:35:13.780182 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-31 19:35:13.780533 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-31 19:35:13.781174 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-31 19:35:13.781738 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-31 19:35:13.782638 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-31 19:35:13.782828 | orchestrator | 2025-05-31 19:35:13.783163 | orchestrator | 2025-05-31 19:35:13.783825 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-31 19:35:13.784442 | orchestrator | Saturday 31 May 2025 19:35:13 +0000 (0:00:00.225) 0:00:12.470 ********** 2025-05-31 19:35:13.785078 | orchestrator | =============================================================================== 2025-05-31 19:35:13.785494 | orchestrator | Gathering Facts --------------------------------------------------------- 3.17s 2025-05-31 19:35:13.785996 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.37s 2025-05-31 19:35:13.786570 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.31s 2025-05-31 19:35:13.786827 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.12s 2025-05-31 19:35:13.787437 | orchestrator | Do not require tty for all users ---------------------------------------- 0.80s 2025-05-31 19:35:13.787863 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.78s 2025-05-31 19:35:13.788706 | orchestrator | 
osism.commons.operator : Set password ----------------------------------- 0.67s 2025-05-31 19:35:13.788984 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.66s 2025-05-31 19:35:13.789493 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.61s 2025-05-31 19:35:13.789706 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.58s 2025-05-31 19:35:13.790253 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.23s 2025-05-31 19:35:13.790450 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.20s 2025-05-31 19:35:13.791764 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.18s 2025-05-31 19:35:13.791787 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.17s 2025-05-31 19:35:13.791799 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.16s 2025-05-31 19:35:13.791810 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.16s 2025-05-31 19:35:13.791821 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.14s 2025-05-31 19:35:14.681545 | orchestrator | + osism apply --environment custom facts 2025-05-31 19:35:16.401086 | orchestrator | 2025-05-31 19:35:16 | INFO  | Trying to run play facts in environment custom 2025-05-31 19:35:16.405827 | orchestrator | Registering Redlock._acquired_script 2025-05-31 19:35:16.405936 | orchestrator | Registering Redlock._extend_script 2025-05-31 19:35:16.405952 | orchestrator | Registering Redlock._release_script 2025-05-31 19:35:16.465527 | orchestrator | 2025-05-31 19:35:16 | INFO  | Task c76d35c4-8d21-40d4-aaac-2f1bb309d1b9 (facts) was prepared for execution. 2025-05-31 19:35:16.465589 | orchestrator | 2025-05-31 19:35:16 | INFO  | It takes a moment until task c76d35c4-8d21-40d4-aaac-2f1bb309d1b9 (facts) has been started and output is visible here. 
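The operator play that just finished is OSISM's standard bootstrap of a management user on fresh nodes: create a group and user, add the user to adm and sudo, install a sudoers drop-in, seed ~/.ssh with the deployer's authorized key, and set a password. As a rough shell equivalent of those tasks (the user name "dragon" is OSISM's usual default but is an assumption here, as are the sudoers content and file modes):

# Rough shell equivalent of the osism.commons.operator tasks above; illustrative.
OPERATOR=dragon                       # assumed operator user name
groupadd "${OPERATOR}"
useradd -m -g "${OPERATOR}" -s /bin/bash "${OPERATOR}"
usermod -aG adm,sudo "${OPERATOR}"
# "Copy user sudoers file" presumably installs a drop-in along these lines:
echo "${OPERATOR} ALL=(ALL) NOPASSWD: ALL" > "/etc/sudoers.d/${OPERATOR}"
chmod 0440 "/etc/sudoers.d/${OPERATOR}"
# "Create .ssh directory" / "Set ssh authorized keys":
install -d -m 0700 -o "${OPERATOR}" -g "${OPERATOR}" "/home/${OPERATOR}/.ssh"
# the short "Make ssh pipelining working" play beforehand likely relaxes
# requiretty roughly like this (assumption):
sed -i 's/^Defaults\s\+requiretty/# &/' /etc/sudoers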
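The `osism apply --environment custom facts` call whose output follows distributes Ansible "local facts": small JSON files placed in /etc/ansible/facts.d on each host, which then surface as ansible_local.<name> on the next fact-gathering pass. The testbed uses this for testbed_ceph_devices and related device lists. A minimal illustration of the mechanism, with invented content:

# How an Ansible local fact works; the JSON payload below is invented,
# the real files are copied by the play.
sudo mkdir -p /etc/ansible/facts.d
cat <<'EOF' | sudo tee /etc/ansible/facts.d/testbed_ceph_devices.fact
{"devices": ["/dev/sdb", "/dev/sdc"]}
EOF
# after the next fact-gathering run this is readable as
# ansible_local.testbed_ceph_devices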
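Also visible in the output below: on Ubuntu 24.04 the osism.commons.repository role retires the classic /etc/apt/sources.list ("Remove sources.list file") in favor of a deb822-style /etc/apt/sources.list.d/ubuntu.sources ("Copy ubuntu.sources file"); the "Include tasks for Ubuntu < 24.04" step is skipped accordingly. For orientation, a representative sequence, where the mirror URL and component list are assumptions rather than what the role actually templates:

# Representative deb822 apt source for Ubuntu 24.04 (noble); mirror and
# components are assumptions, not taken from the role.
sudo rm -f /etc/apt/sources.list           # the "Remove sources.list file" step
cat <<'EOF' | sudo tee /etc/apt/sources.list.d/ubuntu.sources
Types: deb
URIs: http://archive.ubuntu.com/ubuntu
Suites: noble noble-updates noble-backports
Components: main restricted universe multiverse
Signed-By: /usr/share/keyrings/ubuntu-archive-keyring.gpg
EOF
sudo apt-get update                        # the "Update package cache" step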
2025-05-31 19:35:20.338622 | orchestrator | 2025-05-31 19:35:20.339410 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2025-05-31 19:35:20.342444 | orchestrator | 2025-05-31 19:35:20.342476 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-05-31 19:35:20.342511 | orchestrator | Saturday 31 May 2025 19:35:20 +0000 (0:00:00.100) 0:00:00.100 ********** 2025-05-31 19:35:21.721153 | orchestrator | ok: [testbed-manager] 2025-05-31 19:35:21.723188 | orchestrator | changed: [testbed-node-3] 2025-05-31 19:35:21.724533 | orchestrator | changed: [testbed-node-4] 2025-05-31 19:35:21.725590 | orchestrator | changed: [testbed-node-5] 2025-05-31 19:35:21.726273 | orchestrator | changed: [testbed-node-1] 2025-05-31 19:35:21.726447 | orchestrator | changed: [testbed-node-0] 2025-05-31 19:35:21.727160 | orchestrator | changed: [testbed-node-2] 2025-05-31 19:35:21.727773 | orchestrator | 2025-05-31 19:35:21.728657 | orchestrator | TASK [Copy fact file] ********************************************************** 2025-05-31 19:35:21.729560 | orchestrator | Saturday 31 May 2025 19:35:21 +0000 (0:00:01.384) 0:00:01.485 ********** 2025-05-31 19:35:22.901577 | orchestrator | ok: [testbed-manager] 2025-05-31 19:35:22.904078 | orchestrator | changed: [testbed-node-4] 2025-05-31 19:35:22.905230 | orchestrator | changed: [testbed-node-1] 2025-05-31 19:35:22.906874 | orchestrator | changed: [testbed-node-0] 2025-05-31 19:35:22.908686 | orchestrator | changed: [testbed-node-3] 2025-05-31 19:35:22.908717 | orchestrator | changed: [testbed-node-5] 2025-05-31 19:35:22.909771 | orchestrator | changed: [testbed-node-2] 2025-05-31 19:35:22.909862 | orchestrator | 2025-05-31 19:35:22.910761 | orchestrator | PLAY [Copy custom ceph devices facts] ****************************************** 2025-05-31 19:35:22.911388 | orchestrator | 2025-05-31 19:35:22.911994 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-05-31 19:35:22.912424 | orchestrator | Saturday 31 May 2025 19:35:22 +0000 (0:00:01.181) 0:00:02.667 ********** 2025-05-31 19:35:23.033096 | orchestrator | ok: [testbed-node-3] 2025-05-31 19:35:23.033261 | orchestrator | ok: [testbed-node-4] 2025-05-31 19:35:23.033586 | orchestrator | ok: [testbed-node-5] 2025-05-31 19:35:23.034137 | orchestrator | 2025-05-31 19:35:23.036079 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-05-31 19:35:23.036430 | orchestrator | Saturday 31 May 2025 19:35:23 +0000 (0:00:00.131) 0:00:02.798 ********** 2025-05-31 19:35:23.237169 | orchestrator | ok: [testbed-node-3] 2025-05-31 19:35:23.237350 | orchestrator | ok: [testbed-node-4] 2025-05-31 19:35:23.241605 | orchestrator | ok: [testbed-node-5] 2025-05-31 19:35:23.241632 | orchestrator | 2025-05-31 19:35:23.242266 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-05-31 19:35:23.242806 | orchestrator | Saturday 31 May 2025 19:35:23 +0000 (0:00:00.205) 0:00:03.003 ********** 2025-05-31 19:35:23.438298 | orchestrator | ok: [testbed-node-3] 2025-05-31 19:35:23.438743 | orchestrator | ok: [testbed-node-4] 2025-05-31 19:35:23.439274 | orchestrator | ok: [testbed-node-5] 2025-05-31 19:35:23.440704 | orchestrator | 2025-05-31 19:35:23.440727 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-05-31 19:35:23.440938 | orchestrator | Saturday 
31 May 2025 19:35:23 +0000 (0:00:00.201) 0:00:03.205 ********** 2025-05-31 19:35:23.581941 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-31 19:35:23.582782 | orchestrator | 2025-05-31 19:35:23.583673 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-05-31 19:35:23.584712 | orchestrator | Saturday 31 May 2025 19:35:23 +0000 (0:00:00.143) 0:00:03.348 ********** 2025-05-31 19:35:24.008740 | orchestrator | ok: [testbed-node-3] 2025-05-31 19:35:24.008851 | orchestrator | ok: [testbed-node-4] 2025-05-31 19:35:24.009547 | orchestrator | ok: [testbed-node-5] 2025-05-31 19:35:24.011571 | orchestrator | 2025-05-31 19:35:24.011714 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-05-31 19:35:24.014197 | orchestrator | Saturday 31 May 2025 19:35:23 +0000 (0:00:00.426) 0:00:03.775 ********** 2025-05-31 19:35:24.147443 | orchestrator | skipping: [testbed-node-3] 2025-05-31 19:35:24.147606 | orchestrator | skipping: [testbed-node-4] 2025-05-31 19:35:24.148188 | orchestrator | skipping: [testbed-node-5] 2025-05-31 19:35:24.148986 | orchestrator | 2025-05-31 19:35:24.149835 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-05-31 19:35:24.152889 | orchestrator | Saturday 31 May 2025 19:35:24 +0000 (0:00:00.138) 0:00:03.914 ********** 2025-05-31 19:35:25.179582 | orchestrator | changed: [testbed-node-3] 2025-05-31 19:35:25.180943 | orchestrator | changed: [testbed-node-4] 2025-05-31 19:35:25.181682 | orchestrator | changed: [testbed-node-5] 2025-05-31 19:35:25.181992 | orchestrator | 2025-05-31 19:35:25.182479 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-05-31 19:35:25.183602 | orchestrator | Saturday 31 May 2025 19:35:25 +0000 (0:00:01.030) 0:00:04.945 ********** 2025-05-31 19:35:25.650810 | orchestrator | ok: [testbed-node-3] 2025-05-31 19:35:25.651115 | orchestrator | ok: [testbed-node-4] 2025-05-31 19:35:25.653298 | orchestrator | ok: [testbed-node-5] 2025-05-31 19:35:25.654107 | orchestrator | 2025-05-31 19:35:25.655008 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-05-31 19:35:25.655862 | orchestrator | Saturday 31 May 2025 19:35:25 +0000 (0:00:00.471) 0:00:05.416 ********** 2025-05-31 19:35:26.671199 | orchestrator | changed: [testbed-node-3] 2025-05-31 19:35:26.671310 | orchestrator | changed: [testbed-node-4] 2025-05-31 19:35:26.671574 | orchestrator | changed: [testbed-node-5] 2025-05-31 19:35:26.672314 | orchestrator | 2025-05-31 19:35:26.672960 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-05-31 19:35:26.673448 | orchestrator | Saturday 31 May 2025 19:35:26 +0000 (0:00:01.019) 0:00:06.435 ********** 2025-05-31 19:35:40.484191 | orchestrator | changed: [testbed-node-4] 2025-05-31 19:35:40.484325 | orchestrator | changed: [testbed-node-3] 2025-05-31 19:35:40.484341 | orchestrator | changed: [testbed-node-5] 2025-05-31 19:35:40.484352 | orchestrator | 2025-05-31 19:35:40.484365 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2025-05-31 19:35:40.484420 | orchestrator | Saturday 31 May 2025 19:35:40 +0000 (0:00:13.807) 0:00:20.243 ********** 2025-05-31 19:35:40.581936 | orchestrator | 
skipping: [testbed-node-3] 2025-05-31 19:35:40.582866 | orchestrator | skipping: [testbed-node-4] 2025-05-31 19:35:40.585727 | orchestrator | skipping: [testbed-node-5] 2025-05-31 19:35:40.585765 | orchestrator | 2025-05-31 19:35:40.585778 | orchestrator | TASK [Install required packages (Debian)] ************************************** 2025-05-31 19:35:40.586888 | orchestrator | Saturday 31 May 2025 19:35:40 +0000 (0:00:00.105) 0:00:20.348 ********** 2025-05-31 19:35:47.939003 | orchestrator | changed: [testbed-node-4] 2025-05-31 19:35:47.939271 | orchestrator | changed: [testbed-node-3] 2025-05-31 19:35:47.940150 | orchestrator | changed: [testbed-node-5] 2025-05-31 19:35:47.940676 | orchestrator | 2025-05-31 19:35:47.941584 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-05-31 19:35:47.941823 | orchestrator | Saturday 31 May 2025 19:35:47 +0000 (0:00:07.357) 0:00:27.705 ********** 2025-05-31 19:35:48.370992 | orchestrator | ok: [testbed-node-3] 2025-05-31 19:35:48.372756 | orchestrator | ok: [testbed-node-4] 2025-05-31 19:35:48.373143 | orchestrator | ok: [testbed-node-5] 2025-05-31 19:35:48.373832 | orchestrator | 2025-05-31 19:35:48.374477 | orchestrator | TASK [Copy fact files] ********************************************************* 2025-05-31 19:35:48.374882 | orchestrator | Saturday 31 May 2025 19:35:48 +0000 (0:00:00.431) 0:00:28.137 ********** 2025-05-31 19:35:51.830456 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices) 2025-05-31 19:35:51.830551 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices) 2025-05-31 19:35:51.831876 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices) 2025-05-31 19:35:51.835031 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all) 2025-05-31 19:35:51.835131 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all) 2025-05-31 19:35:51.835152 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all) 2025-05-31 19:35:51.835776 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices) 2025-05-31 19:35:51.836795 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices) 2025-05-31 19:35:51.837774 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices) 2025-05-31 19:35:51.838735 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all) 2025-05-31 19:35:51.839092 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all) 2025-05-31 19:35:51.839840 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all) 2025-05-31 19:35:51.840491 | orchestrator | 2025-05-31 19:35:51.841586 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-05-31 19:35:51.841765 | orchestrator | Saturday 31 May 2025 19:35:51 +0000 (0:00:03.457) 0:00:31.594 ********** 2025-05-31 19:35:53.024479 | orchestrator | ok: [testbed-node-3] 2025-05-31 19:35:53.024677 | orchestrator | ok: [testbed-node-4] 2025-05-31 19:35:53.025598 | orchestrator | ok: [testbed-node-5] 2025-05-31 19:35:53.026484 | orchestrator | 2025-05-31 19:35:53.027106 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-05-31 19:35:53.028628 | orchestrator | 2025-05-31 19:35:53.028654 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-05-31 19:35:53.029684 | orchestrator | 
Saturday 31 May 2025 19:35:53 +0000 (0:00:01.195) 0:00:32.790 ********** 2025-05-31 19:35:56.736131 | orchestrator | ok: [testbed-node-0] 2025-05-31 19:35:56.736249 | orchestrator | ok: [testbed-node-1] 2025-05-31 19:35:56.736583 | orchestrator | ok: [testbed-node-2] 2025-05-31 19:35:56.737645 | orchestrator | ok: [testbed-manager] 2025-05-31 19:35:56.737802 | orchestrator | ok: [testbed-node-5] 2025-05-31 19:35:56.738809 | orchestrator | ok: [testbed-node-3] 2025-05-31 19:35:56.738998 | orchestrator | ok: [testbed-node-4] 2025-05-31 19:35:56.742100 | orchestrator | 2025-05-31 19:35:56.742951 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-31 19:35:56.743541 | orchestrator | 2025-05-31 19:35:56 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-31 19:35:56.743781 | orchestrator | 2025-05-31 19:35:56 | INFO  | Please wait and do not abort execution. 2025-05-31 19:35:56.744990 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-31 19:35:56.745228 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-31 19:35:56.745642 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-31 19:35:56.745982 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-31 19:35:56.746291 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-31 19:35:56.746660 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-31 19:35:56.746970 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-31 19:35:56.747379 | orchestrator | 2025-05-31 19:35:56.748459 | orchestrator | 2025-05-31 19:35:56.748770 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-31 19:35:56.749554 | orchestrator | Saturday 31 May 2025 19:35:56 +0000 (0:00:03.711) 0:00:36.501 ********** 2025-05-31 19:35:56.750081 | orchestrator | =============================================================================== 2025-05-31 19:35:56.750808 | orchestrator | osism.commons.repository : Update package cache ------------------------ 13.81s 2025-05-31 19:35:56.751533 | orchestrator | Install required packages (Debian) -------------------------------------- 7.36s 2025-05-31 19:35:56.751913 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.71s 2025-05-31 19:35:56.752618 | orchestrator | Copy fact files --------------------------------------------------------- 3.46s 2025-05-31 19:35:56.753088 | orchestrator | Create custom facts directory ------------------------------------------- 1.38s 2025-05-31 19:35:56.753486 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.20s 2025-05-31 19:35:56.753676 | orchestrator | Copy fact file ---------------------------------------------------------- 1.18s 2025-05-31 19:35:56.754097 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.03s 2025-05-31 19:35:56.754485 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.02s 2025-05-31 19:35:56.754729 | orchestrator | osism.commons.repository : Remove sources.list 
file --------------------- 0.47s 2025-05-31 19:35:56.755004 | orchestrator | Create custom facts directory ------------------------------------------- 0.43s 2025-05-31 19:35:56.755381 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.43s 2025-05-31 19:35:56.755910 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.21s 2025-05-31 19:35:56.756378 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.20s 2025-05-31 19:35:56.756780 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.14s 2025-05-31 19:35:56.757218 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.14s 2025-05-31 19:35:56.757736 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.13s 2025-05-31 19:35:56.758459 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.11s 2025-05-31 19:35:57.237197 | orchestrator | + osism apply bootstrap 2025-05-31 19:35:58.859938 | orchestrator | Registering Redlock._acquired_script 2025-05-31 19:35:58.860034 | orchestrator | Registering Redlock._extend_script 2025-05-31 19:35:58.860050 | orchestrator | Registering Redlock._release_script 2025-05-31 19:35:58.916970 | orchestrator | 2025-05-31 19:35:58 | INFO  | Task 85878b43-3c6d-42ad-8f52-a685fcc25980 (bootstrap) was prepared for execution. 2025-05-31 19:35:58.917077 | orchestrator | 2025-05-31 19:35:58 | INFO  | It takes a moment until task 85878b43-3c6d-42ad-8f52-a685fcc25980 (bootstrap) has been started and output is visible here. 2025-05-31 19:36:02.925591 | orchestrator | 2025-05-31 19:36:02.930255 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************ 2025-05-31 19:36:02.930972 | orchestrator | 2025-05-31 19:36:02.931645 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************ 2025-05-31 19:36:02.933007 | orchestrator | Saturday 31 May 2025 19:36:02 +0000 (0:00:00.161) 0:00:00.161 ********** 2025-05-31 19:36:03.004650 | orchestrator | ok: [testbed-manager] 2025-05-31 19:36:03.026935 | orchestrator | ok: [testbed-node-3] 2025-05-31 19:36:03.055805 | orchestrator | ok: [testbed-node-4] 2025-05-31 19:36:03.083088 | orchestrator | ok: [testbed-node-5] 2025-05-31 19:36:03.159664 | orchestrator | ok: [testbed-node-0] 2025-05-31 19:36:03.160114 | orchestrator | ok: [testbed-node-1] 2025-05-31 19:36:03.160765 | orchestrator | ok: [testbed-node-2] 2025-05-31 19:36:03.161770 | orchestrator | 2025-05-31 19:36:03.162756 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-05-31 19:36:03.165613 | orchestrator | 2025-05-31 19:36:03.165621 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-05-31 19:36:03.165626 | orchestrator | Saturday 31 May 2025 19:36:03 +0000 (0:00:00.238) 0:00:00.399 ********** 2025-05-31 19:36:06.666690 | orchestrator | ok: [testbed-node-0] 2025-05-31 19:36:06.667280 | orchestrator | ok: [testbed-node-2] 2025-05-31 19:36:06.668735 | orchestrator | ok: [testbed-node-1] 2025-05-31 19:36:06.669862 | orchestrator | ok: [testbed-manager] 2025-05-31 19:36:06.670190 | orchestrator | ok: [testbed-node-4] 2025-05-31 19:36:06.671049 | orchestrator | ok: [testbed-node-3] 2025-05-31 19:36:06.671715 | orchestrator | ok: [testbed-node-5] 2025-05-31 19:36:06.672236 | 
orchestrator | 2025-05-31 19:36:06.672990 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] *************************** 2025-05-31 19:36:06.673818 | orchestrator | 2025-05-31 19:36:06.674141 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-05-31 19:36:06.674635 | orchestrator | Saturday 31 May 2025 19:36:06 +0000 (0:00:03.506) 0:00:03.906 ********** 2025-05-31 19:36:06.759923 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-05-31 19:36:06.760015 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-05-31 19:36:06.760028 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)  2025-05-31 19:36:06.795752 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-05-31 19:36:06.795861 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-31 19:36:06.796193 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-05-31 19:36:06.796531 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-31 19:36:06.796816 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-05-31 19:36:06.797149 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-31 19:36:06.841111 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-31 19:36:06.841216 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)  2025-05-31 19:36:06.841325 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-05-31 19:36:06.841699 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-31 19:36:06.842084 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)  2025-05-31 19:36:06.842298 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-31 19:36:07.100359 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-31 19:36:07.100879 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-05-31 19:36:07.101808 | orchestrator | skipping: [testbed-node-3] 2025-05-31 19:36:07.103868 | orchestrator | skipping: [testbed-manager] 2025-05-31 19:36:07.104226 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)  2025-05-31 19:36:07.104921 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-31 19:36:07.105586 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-31 19:36:07.106258 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)  2025-05-31 19:36:07.107171 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-31 19:36:07.108015 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-31 19:36:07.108846 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-05-31 19:36:07.108937 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-31 19:36:07.109669 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)  2025-05-31 19:36:07.110498 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-31 19:36:07.110912 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-05-31 19:36:07.111332 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-31 19:36:07.112141 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-05-31 19:36:07.112578 | orchestrator | skipping: 
[testbed-node-4] => (item=testbed-node-0)  2025-05-31 19:36:07.113041 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-05-31 19:36:07.113340 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-05-31 19:36:07.114072 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-31 19:36:07.114492 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-05-31 19:36:07.116237 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-31 19:36:07.116581 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-05-31 19:36:07.117129 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-31 19:36:07.117611 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-05-31 19:36:07.118487 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-31 19:36:07.118650 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-05-31 19:36:07.119270 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-05-31 19:36:07.119632 | orchestrator | skipping: [testbed-node-1] 2025-05-31 19:36:07.120576 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-31 19:36:07.120972 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-05-31 19:36:07.121454 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-31 19:36:07.121880 | orchestrator | skipping: [testbed-node-5] 2025-05-31 19:36:07.122834 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-31 19:36:07.122941 | orchestrator | skipping: [testbed-node-4] 2025-05-31 19:36:07.123320 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-05-31 19:36:07.123659 | orchestrator | skipping: [testbed-node-2] 2025-05-31 19:36:07.124015 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-31 19:36:07.124264 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-31 19:36:07.124732 | orchestrator | skipping: [testbed-node-0] 2025-05-31 19:36:07.124985 | orchestrator | 2025-05-31 19:36:07.126556 | orchestrator | PLAY [Apply bootstrap roles part 1] ******************************************** 2025-05-31 19:36:07.126841 | orchestrator | 2025-05-31 19:36:07.127528 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2025-05-31 19:36:07.127703 | orchestrator | Saturday 31 May 2025 19:36:07 +0000 (0:00:00.431) 0:00:04.337 ********** 2025-05-31 19:36:08.336580 | orchestrator | ok: [testbed-manager] 2025-05-31 19:36:08.338623 | orchestrator | ok: [testbed-node-3] 2025-05-31 19:36:08.338639 | orchestrator | ok: [testbed-node-4] 2025-05-31 19:36:08.339013 | orchestrator | ok: [testbed-node-2] 2025-05-31 19:36:08.340003 | orchestrator | ok: [testbed-node-1] 2025-05-31 19:36:08.340863 | orchestrator | ok: [testbed-node-0] 2025-05-31 19:36:08.341859 | orchestrator | ok: [testbed-node-5] 2025-05-31 19:36:08.342760 | orchestrator | 2025-05-31 19:36:08.343572 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2025-05-31 19:36:08.344166 | orchestrator | Saturday 31 May 2025 19:36:08 +0000 (0:00:01.236) 0:00:05.574 ********** 2025-05-31 19:36:09.554545 | orchestrator | ok: [testbed-manager] 2025-05-31 19:36:09.555068 | orchestrator | ok: [testbed-node-3] 2025-05-31 19:36:09.557865 | orchestrator | ok: [testbed-node-4] 2025-05-31 
19:36:09.558852 | orchestrator | ok: [testbed-node-1] 2025-05-31 19:36:09.559604 | orchestrator | ok: [testbed-node-2] 2025-05-31 19:36:09.560295 | orchestrator | ok: [testbed-node-5] 2025-05-31 19:36:09.560848 | orchestrator | ok: [testbed-node-0] 2025-05-31 19:36:09.561720 | orchestrator | 2025-05-31 19:36:09.562247 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2025-05-31 19:36:09.562788 | orchestrator | Saturday 31 May 2025 19:36:09 +0000 (0:00:01.215) 0:00:06.789 ********** 2025-05-31 19:36:09.866800 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 19:36:09.867135 | orchestrator | 2025-05-31 19:36:09.871491 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2025-05-31 19:36:09.872812 | orchestrator | Saturday 31 May 2025 19:36:09 +0000 (0:00:00.314) 0:00:07.104 ********** 2025-05-31 19:36:11.843070 | orchestrator | changed: [testbed-manager] 2025-05-31 19:36:11.843946 | orchestrator | changed: [testbed-node-3] 2025-05-31 19:36:11.847105 | orchestrator | changed: [testbed-node-5] 2025-05-31 19:36:11.848448 | orchestrator | changed: [testbed-node-4] 2025-05-31 19:36:11.849871 | orchestrator | changed: [testbed-node-0] 2025-05-31 19:36:11.850750 | orchestrator | changed: [testbed-node-1] 2025-05-31 19:36:11.851732 | orchestrator | changed: [testbed-node-2] 2025-05-31 19:36:11.852527 | orchestrator | 2025-05-31 19:36:11.853023 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2025-05-31 19:36:11.853771 | orchestrator | Saturday 31 May 2025 19:36:11 +0000 (0:00:01.974) 0:00:09.078 ********** 2025-05-31 19:36:11.910928 | orchestrator | skipping: [testbed-manager] 2025-05-31 19:36:12.090997 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 19:36:12.092081 | orchestrator | 2025-05-31 19:36:12.092230 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2025-05-31 19:36:12.092618 | orchestrator | Saturday 31 May 2025 19:36:12 +0000 (0:00:00.251) 0:00:09.330 ********** 2025-05-31 19:36:13.136086 | orchestrator | changed: [testbed-node-3] 2025-05-31 19:36:13.136373 | orchestrator | changed: [testbed-node-5] 2025-05-31 19:36:13.137121 | orchestrator | changed: [testbed-node-4] 2025-05-31 19:36:13.138581 | orchestrator | changed: [testbed-node-1] 2025-05-31 19:36:13.138814 | orchestrator | changed: [testbed-node-0] 2025-05-31 19:36:13.139617 | orchestrator | changed: [testbed-node-2] 2025-05-31 19:36:13.140023 | orchestrator | 2025-05-31 19:36:13.141174 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ****** 2025-05-31 19:36:13.141655 | orchestrator | Saturday 31 May 2025 19:36:13 +0000 (0:00:01.043) 0:00:10.374 ********** 2025-05-31 19:36:13.213867 | orchestrator | skipping: [testbed-manager] 2025-05-31 19:36:13.686313 | orchestrator | changed: [testbed-node-3] 2025-05-31 19:36:13.686784 | orchestrator | changed: [testbed-node-1] 2025-05-31 19:36:13.688435 | orchestrator | changed: [testbed-node-4] 2025-05-31 19:36:13.689529 | orchestrator | changed: [testbed-node-5] 2025-05-31 
19:36:13.690659 | orchestrator | changed: [testbed-node-2] 2025-05-31 19:36:13.691851 | orchestrator | changed: [testbed-node-0] 2025-05-31 19:36:13.692822 | orchestrator | 2025-05-31 19:36:13.693686 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2025-05-31 19:36:13.694674 | orchestrator | Saturday 31 May 2025 19:36:13 +0000 (0:00:00.550) 0:00:10.925 ********** 2025-05-31 19:36:13.775837 | orchestrator | skipping: [testbed-node-3] 2025-05-31 19:36:13.803032 | orchestrator | skipping: [testbed-node-4] 2025-05-31 19:36:13.828774 | orchestrator | skipping: [testbed-node-5] 2025-05-31 19:36:14.082264 | orchestrator | skipping: [testbed-node-0] 2025-05-31 19:36:14.083013 | orchestrator | skipping: [testbed-node-1] 2025-05-31 19:36:14.083715 | orchestrator | skipping: [testbed-node-2] 2025-05-31 19:36:14.084470 | orchestrator | ok: [testbed-manager] 2025-05-31 19:36:14.086108 | orchestrator | 2025-05-31 19:36:14.086149 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-05-31 19:36:14.086828 | orchestrator | Saturday 31 May 2025 19:36:14 +0000 (0:00:00.394) 0:00:11.319 ********** 2025-05-31 19:36:14.160196 | orchestrator | skipping: [testbed-manager] 2025-05-31 19:36:14.188819 | orchestrator | skipping: [testbed-node-3] 2025-05-31 19:36:14.215742 | orchestrator | skipping: [testbed-node-4] 2025-05-31 19:36:14.234689 | orchestrator | skipping: [testbed-node-5] 2025-05-31 19:36:14.298815 | orchestrator | skipping: [testbed-node-0] 2025-05-31 19:36:14.298915 | orchestrator | skipping: [testbed-node-1] 2025-05-31 19:36:14.301748 | orchestrator | skipping: [testbed-node-2] 2025-05-31 19:36:14.301770 | orchestrator | 2025-05-31 19:36:14.301783 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-05-31 19:36:14.301886 | orchestrator | Saturday 31 May 2025 19:36:14 +0000 (0:00:00.217) 0:00:11.537 ********** 2025-05-31 19:36:14.557879 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 19:36:14.558102 | orchestrator | 2025-05-31 19:36:14.558524 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-05-31 19:36:14.558782 | orchestrator | Saturday 31 May 2025 19:36:14 +0000 (0:00:00.259) 0:00:11.797 ********** 2025-05-31 19:36:14.839856 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 19:36:14.840054 | orchestrator | 2025-05-31 19:36:14.841315 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-05-31 19:36:14.841910 | orchestrator | Saturday 31 May 2025 19:36:14 +0000 (0:00:00.281) 0:00:12.078 ********** 2025-05-31 19:36:16.912609 | orchestrator | ok: [testbed-manager] 2025-05-31 19:36:16.913433 | orchestrator | ok: [testbed-node-2] 2025-05-31 19:36:16.914444 | orchestrator | ok: [testbed-node-4] 2025-05-31 19:36:16.914518 | orchestrator | ok: [testbed-node-5] 2025-05-31 19:36:16.914532 | orchestrator | ok: [testbed-node-0] 2025-05-31 19:36:16.915429 | orchestrator | ok: [testbed-node-1] 2025-05-31 19:36:16.916625 | 
orchestrator | ok: [testbed-node-3] 2025-05-31 19:36:16.917198 | orchestrator | 2025-05-31 19:36:16.917736 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-05-31 19:36:16.918155 | orchestrator | Saturday 31 May 2025 19:36:16 +0000 (0:00:02.071) 0:00:14.150 ********** 2025-05-31 19:36:16.985616 | orchestrator | skipping: [testbed-manager] 2025-05-31 19:36:17.009300 | orchestrator | skipping: [testbed-node-3] 2025-05-31 19:36:17.033401 | orchestrator | skipping: [testbed-node-4] 2025-05-31 19:36:17.059375 | orchestrator | skipping: [testbed-node-5] 2025-05-31 19:36:17.124032 | orchestrator | skipping: [testbed-node-0] 2025-05-31 19:36:17.124741 | orchestrator | skipping: [testbed-node-1] 2025-05-31 19:36:17.127708 | orchestrator | skipping: [testbed-node-2] 2025-05-31 19:36:17.127738 | orchestrator | 2025-05-31 19:36:17.127751 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-05-31 19:36:17.127765 | orchestrator | Saturday 31 May 2025 19:36:17 +0000 (0:00:00.212) 0:00:14.362 ********** 2025-05-31 19:36:17.640901 | orchestrator | ok: [testbed-manager] 2025-05-31 19:36:17.641458 | orchestrator | ok: [testbed-node-3] 2025-05-31 19:36:17.642402 | orchestrator | ok: [testbed-node-4] 2025-05-31 19:36:17.643343 | orchestrator | ok: [testbed-node-5] 2025-05-31 19:36:17.643708 | orchestrator | ok: [testbed-node-0] 2025-05-31 19:36:17.644645 | orchestrator | ok: [testbed-node-1] 2025-05-31 19:36:17.645071 | orchestrator | ok: [testbed-node-2] 2025-05-31 19:36:17.645596 | orchestrator | 2025-05-31 19:36:17.646272 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-05-31 19:36:17.646979 | orchestrator | Saturday 31 May 2025 19:36:17 +0000 (0:00:00.516) 0:00:14.879 ********** 2025-05-31 19:36:17.717836 | orchestrator | skipping: [testbed-manager] 2025-05-31 19:36:17.745678 | orchestrator | skipping: [testbed-node-3] 2025-05-31 19:36:17.765165 | orchestrator | skipping: [testbed-node-4] 2025-05-31 19:36:17.789817 | orchestrator | skipping: [testbed-node-5] 2025-05-31 19:36:17.862985 | orchestrator | skipping: [testbed-node-0] 2025-05-31 19:36:17.863071 | orchestrator | skipping: [testbed-node-1] 2025-05-31 19:36:17.863175 | orchestrator | skipping: [testbed-node-2] 2025-05-31 19:36:17.863648 | orchestrator | 2025-05-31 19:36:17.863818 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-05-31 19:36:17.864042 | orchestrator | Saturday 31 May 2025 19:36:17 +0000 (0:00:00.222) 0:00:15.102 ********** 2025-05-31 19:36:18.401865 | orchestrator | ok: [testbed-manager] 2025-05-31 19:36:18.402813 | orchestrator | changed: [testbed-node-3] 2025-05-31 19:36:18.403037 | orchestrator | changed: [testbed-node-4] 2025-05-31 19:36:18.404159 | orchestrator | changed: [testbed-node-0] 2025-05-31 19:36:18.405355 | orchestrator | changed: [testbed-node-5] 2025-05-31 19:36:18.405968 | orchestrator | changed: [testbed-node-1] 2025-05-31 19:36:18.406791 | orchestrator | changed: [testbed-node-2] 2025-05-31 19:36:18.407507 | orchestrator | 2025-05-31 19:36:18.408180 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-05-31 19:36:18.409068 | orchestrator | Saturday 31 May 2025 19:36:18 +0000 (0:00:00.534) 0:00:15.636 ********** 2025-05-31 19:36:19.501364 | orchestrator | ok: [testbed-manager] 2025-05-31 19:36:19.501892 | orchestrator | changed: 
[testbed-node-3] 2025-05-31 19:36:19.502451 | orchestrator | changed: [testbed-node-4] 2025-05-31 19:36:19.502866 | orchestrator | changed: [testbed-node-5] 2025-05-31 19:36:19.503552 | orchestrator | changed: [testbed-node-0] 2025-05-31 19:36:19.504269 | orchestrator | changed: [testbed-node-2] 2025-05-31 19:36:19.505003 | orchestrator | changed: [testbed-node-1] 2025-05-31 19:36:19.505778 | orchestrator | 2025-05-31 19:36:19.506790 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-05-31 19:36:19.507447 | orchestrator | Saturday 31 May 2025 19:36:19 +0000 (0:00:01.099) 0:00:16.735 ********** 2025-05-31 19:36:20.605942 | orchestrator | ok: [testbed-node-4] 2025-05-31 19:36:20.606062 | orchestrator | ok: [testbed-node-3] 2025-05-31 19:36:20.606116 | orchestrator | ok: [testbed-node-5] 2025-05-31 19:36:20.606307 | orchestrator | ok: [testbed-node-0] 2025-05-31 19:36:20.607039 | orchestrator | ok: [testbed-manager] 2025-05-31 19:36:20.607191 | orchestrator | ok: [testbed-node-1] 2025-05-31 19:36:20.607531 | orchestrator | ok: [testbed-node-2] 2025-05-31 19:36:20.608119 | orchestrator | 2025-05-31 19:36:20.608358 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-05-31 19:36:20.609287 | orchestrator | Saturday 31 May 2025 19:36:20 +0000 (0:00:01.108) 0:00:17.843 ********** 2025-05-31 19:36:20.980250 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 19:36:20.980405 | orchestrator | 2025-05-31 19:36:20.981752 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-05-31 19:36:20.982721 | orchestrator | Saturday 31 May 2025 19:36:20 +0000 (0:00:00.374) 0:00:18.218 ********** 2025-05-31 19:36:21.060347 | orchestrator | skipping: [testbed-manager] 2025-05-31 19:36:22.217242 | orchestrator | changed: [testbed-node-4] 2025-05-31 19:36:22.218348 | orchestrator | changed: [testbed-node-5] 2025-05-31 19:36:22.219128 | orchestrator | changed: [testbed-node-1] 2025-05-31 19:36:22.221716 | orchestrator | changed: [testbed-node-3] 2025-05-31 19:36:22.221747 | orchestrator | changed: [testbed-node-0] 2025-05-31 19:36:22.222218 | orchestrator | changed: [testbed-node-2] 2025-05-31 19:36:22.223044 | orchestrator | 2025-05-31 19:36:22.224867 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-05-31 19:36:22.226946 | orchestrator | Saturday 31 May 2025 19:36:22 +0000 (0:00:01.235) 0:00:19.453 ********** 2025-05-31 19:36:22.303547 | orchestrator | ok: [testbed-manager] 2025-05-31 19:36:22.335983 | orchestrator | ok: [testbed-node-3] 2025-05-31 19:36:22.353982 | orchestrator | ok: [testbed-node-4] 2025-05-31 19:36:22.444485 | orchestrator | ok: [testbed-node-5] 2025-05-31 19:36:22.445962 | orchestrator | ok: [testbed-node-0] 2025-05-31 19:36:22.447697 | orchestrator | ok: [testbed-node-1] 2025-05-31 19:36:22.449332 | orchestrator | ok: [testbed-node-2] 2025-05-31 19:36:22.450637 | orchestrator | 2025-05-31 19:36:22.452055 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-05-31 19:36:22.453145 | orchestrator | Saturday 31 May 2025 19:36:22 +0000 (0:00:00.229) 0:00:19.683 ********** 2025-05-31 19:36:22.514283 | orchestrator | ok: 
[testbed-manager] 2025-05-31 19:36:22.558654 | orchestrator | ok: [testbed-node-3] 2025-05-31 19:36:22.588817 | orchestrator | ok: [testbed-node-4] 2025-05-31 19:36:22.654265 | orchestrator | ok: [testbed-node-5] 2025-05-31 19:36:22.654354 | orchestrator | ok: [testbed-node-0] 2025-05-31 19:36:22.654640 | orchestrator | ok: [testbed-node-1] 2025-05-31 19:36:22.655572 | orchestrator | ok: [testbed-node-2] 2025-05-31 19:36:22.655991 | orchestrator | 2025-05-31 19:36:22.656618 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-05-31 19:36:22.656910 | orchestrator | Saturday 31 May 2025 19:36:22 +0000 (0:00:00.210) 0:00:19.893 ********** 2025-05-31 19:36:22.736894 | orchestrator | ok: [testbed-manager] 2025-05-31 19:36:22.760989 | orchestrator | ok: [testbed-node-3] 2025-05-31 19:36:22.784717 | orchestrator | ok: [testbed-node-4] 2025-05-31 19:36:22.812818 | orchestrator | ok: [testbed-node-5] 2025-05-31 19:36:22.874182 | orchestrator | ok: [testbed-node-0] 2025-05-31 19:36:22.875110 | orchestrator | ok: [testbed-node-1] 2025-05-31 19:36:22.876147 | orchestrator | ok: [testbed-node-2] 2025-05-31 19:36:22.877459 | orchestrator | 2025-05-31 19:36:22.878785 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-05-31 19:36:22.881099 | orchestrator | Saturday 31 May 2025 19:36:22 +0000 (0:00:00.220) 0:00:20.113 ********** 2025-05-31 19:36:23.155921 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 19:36:23.156307 | orchestrator | 2025-05-31 19:36:23.157674 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-05-31 19:36:23.158198 | orchestrator | Saturday 31 May 2025 19:36:23 +0000 (0:00:00.278) 0:00:20.392 ********** 2025-05-31 19:36:23.669205 | orchestrator | ok: [testbed-manager] 2025-05-31 19:36:23.669310 | orchestrator | ok: [testbed-node-3] 2025-05-31 19:36:23.669711 | orchestrator | ok: [testbed-node-4] 2025-05-31 19:36:23.670498 | orchestrator | ok: [testbed-node-5] 2025-05-31 19:36:23.673033 | orchestrator | ok: [testbed-node-0] 2025-05-31 19:36:23.673470 | orchestrator | ok: [testbed-node-1] 2025-05-31 19:36:23.674568 | orchestrator | ok: [testbed-node-2] 2025-05-31 19:36:23.675850 | orchestrator | 2025-05-31 19:36:23.677154 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-05-31 19:36:23.678576 | orchestrator | Saturday 31 May 2025 19:36:23 +0000 (0:00:00.513) 0:00:20.905 ********** 2025-05-31 19:36:23.742217 | orchestrator | skipping: [testbed-manager] 2025-05-31 19:36:23.766975 | orchestrator | skipping: [testbed-node-3] 2025-05-31 19:36:23.789613 | orchestrator | skipping: [testbed-node-4] 2025-05-31 19:36:23.814822 | orchestrator | skipping: [testbed-node-5] 2025-05-31 19:36:23.881461 | orchestrator | skipping: [testbed-node-0] 2025-05-31 19:36:23.881548 | orchestrator | skipping: [testbed-node-1] 2025-05-31 19:36:23.883687 | orchestrator | skipping: [testbed-node-2] 2025-05-31 19:36:23.883864 | orchestrator | 2025-05-31 19:36:23.884668 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-05-31 19:36:23.885466 | orchestrator | Saturday 31 May 2025 19:36:23 +0000 (0:00:00.214) 0:00:21.119 ********** 2025-05-31 19:36:24.918846 | 
orchestrator | ok: [testbed-manager] 2025-05-31 19:36:24.919523 | orchestrator | ok: [testbed-node-4] 2025-05-31 19:36:24.920340 | orchestrator | ok: [testbed-node-3] 2025-05-31 19:36:24.921260 | orchestrator | ok: [testbed-node-5] 2025-05-31 19:36:24.922190 | orchestrator | changed: [testbed-node-1] 2025-05-31 19:36:24.922927 | orchestrator | changed: [testbed-node-2] 2025-05-31 19:36:24.923447 | orchestrator | changed: [testbed-node-0] 2025-05-31 19:36:24.924392 | orchestrator | 2025-05-31 19:36:24.924905 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-05-31 19:36:24.925497 | orchestrator | Saturday 31 May 2025 19:36:24 +0000 (0:00:01.035) 0:00:22.155 ********** 2025-05-31 19:36:25.438613 | orchestrator | ok: [testbed-manager] 2025-05-31 19:36:25.438781 | orchestrator | ok: [testbed-node-3] 2025-05-31 19:36:25.438968 | orchestrator | ok: [testbed-node-4] 2025-05-31 19:36:25.440968 | orchestrator | ok: [testbed-node-5] 2025-05-31 19:36:25.442665 | orchestrator | ok: [testbed-node-0] 2025-05-31 19:36:25.442686 | orchestrator | ok: [testbed-node-1] 2025-05-31 19:36:25.442697 | orchestrator | ok: [testbed-node-2] 2025-05-31 19:36:25.442709 | orchestrator | 2025-05-31 19:36:25.443661 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-05-31 19:36:25.444103 | orchestrator | Saturday 31 May 2025 19:36:25 +0000 (0:00:00.521) 0:00:22.677 ********** 2025-05-31 19:36:26.533379 | orchestrator | ok: [testbed-manager] 2025-05-31 19:36:26.533601 | orchestrator | ok: [testbed-node-3] 2025-05-31 19:36:26.533645 | orchestrator | ok: [testbed-node-4] 2025-05-31 19:36:26.534306 | orchestrator | ok: [testbed-node-5] 2025-05-31 19:36:26.537108 | orchestrator | changed: [testbed-node-0] 2025-05-31 19:36:26.539521 | orchestrator | changed: [testbed-node-1] 2025-05-31 19:36:26.539937 | orchestrator | changed: [testbed-node-2] 2025-05-31 19:36:26.540800 | orchestrator | 2025-05-31 19:36:26.541821 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-05-31 19:36:26.542209 | orchestrator | Saturday 31 May 2025 19:36:26 +0000 (0:00:01.093) 0:00:23.770 ********** 2025-05-31 19:36:40.918626 | orchestrator | ok: [testbed-node-3] 2025-05-31 19:36:40.918745 | orchestrator | ok: [testbed-node-4] 2025-05-31 19:36:40.918760 | orchestrator | ok: [testbed-node-5] 2025-05-31 19:36:40.918833 | orchestrator | changed: [testbed-manager] 2025-05-31 19:36:40.919238 | orchestrator | changed: [testbed-node-2] 2025-05-31 19:36:40.919919 | orchestrator | changed: [testbed-node-0] 2025-05-31 19:36:40.920681 | orchestrator | changed: [testbed-node-1] 2025-05-31 19:36:40.920913 | orchestrator | 2025-05-31 19:36:40.921666 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2025-05-31 19:36:40.922166 | orchestrator | Saturday 31 May 2025 19:36:40 +0000 (0:00:14.353) 0:00:38.125 ********** 2025-05-31 19:36:40.986399 | orchestrator | ok: [testbed-manager] 2025-05-31 19:36:41.013706 | orchestrator | ok: [testbed-node-3] 2025-05-31 19:36:41.038925 | orchestrator | ok: [testbed-node-4] 2025-05-31 19:36:41.127415 | orchestrator | ok: [testbed-node-5] 2025-05-31 19:36:41.183559 | orchestrator | ok: [testbed-node-0] 2025-05-31 19:36:41.184148 | orchestrator | ok: [testbed-node-1] 2025-05-31 19:36:41.184849 | orchestrator | ok: [testbed-node-2] 2025-05-31 19:36:41.186561 | orchestrator | 2025-05-31 19:36:41.187282 | orchestrator | TASK 
[osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2025-05-31 19:36:41.188225 | orchestrator | Saturday 31 May 2025 19:36:41 +0000 (0:00:00.297) 0:00:38.423 ********** 2025-05-31 19:36:41.287119 | orchestrator | ok: [testbed-manager] 2025-05-31 19:36:41.312320 | orchestrator | ok: [testbed-node-3] 2025-05-31 19:36:41.341601 | orchestrator | ok: [testbed-node-4] 2025-05-31 19:36:41.360009 | orchestrator | ok: [testbed-node-5] 2025-05-31 19:36:41.410005 | orchestrator | ok: [testbed-node-0] 2025-05-31 19:36:41.410954 | orchestrator | ok: [testbed-node-1] 2025-05-31 19:36:41.411578 | orchestrator | ok: [testbed-node-2] 2025-05-31 19:36:41.412122 | orchestrator | 2025-05-31 19:36:41.412842 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2025-05-31 19:36:41.413545 | orchestrator | Saturday 31 May 2025 19:36:41 +0000 (0:00:00.226) 0:00:38.650 ********** 2025-05-31 19:36:41.483207 | orchestrator | ok: [testbed-manager] 2025-05-31 19:36:41.506621 | orchestrator | ok: [testbed-node-3] 2025-05-31 19:36:41.528160 | orchestrator | ok: [testbed-node-4] 2025-05-31 19:36:41.550208 | orchestrator | ok: [testbed-node-5] 2025-05-31 19:36:41.612328 | orchestrator | ok: [testbed-node-0] 2025-05-31 19:36:41.614074 | orchestrator | ok: [testbed-node-1] 2025-05-31 19:36:41.616007 | orchestrator | ok: [testbed-node-2] 2025-05-31 19:36:41.617083 | orchestrator | 2025-05-31 19:36:41.618266 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2025-05-31 19:36:41.619412 | orchestrator | Saturday 31 May 2025 19:36:41 +0000 (0:00:00.200) 0:00:38.850 ********** 2025-05-31 19:36:41.857022 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 19:36:41.858263 | orchestrator | 2025-05-31 19:36:41.859697 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2025-05-31 19:36:41.860641 | orchestrator | Saturday 31 May 2025 19:36:41 +0000 (0:00:00.244) 0:00:39.094 ********** 2025-05-31 19:36:43.374128 | orchestrator | ok: [testbed-manager] 2025-05-31 19:36:43.375071 | orchestrator | ok: [testbed-node-3] 2025-05-31 19:36:43.376013 | orchestrator | ok: [testbed-node-4] 2025-05-31 19:36:43.377506 | orchestrator | ok: [testbed-node-1] 2025-05-31 19:36:43.378897 | orchestrator | ok: [testbed-node-2] 2025-05-31 19:36:43.379731 | orchestrator | ok: [testbed-node-5] 2025-05-31 19:36:43.380451 | orchestrator | ok: [testbed-node-0] 2025-05-31 19:36:43.381052 | orchestrator | 2025-05-31 19:36:43.382167 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2025-05-31 19:36:43.382211 | orchestrator | Saturday 31 May 2025 19:36:43 +0000 (0:00:01.517) 0:00:40.611 ********** 2025-05-31 19:36:44.521822 | orchestrator | changed: [testbed-manager] 2025-05-31 19:36:44.521950 | orchestrator | changed: [testbed-node-3] 2025-05-31 19:36:44.522640 | orchestrator | changed: [testbed-node-4] 2025-05-31 19:36:44.522724 | orchestrator | changed: [testbed-node-5] 2025-05-31 19:36:44.523010 | orchestrator | changed: [testbed-node-0] 2025-05-31 19:36:44.524315 | orchestrator | changed: [testbed-node-2] 2025-05-31 19:36:44.524344 | orchestrator | changed: [testbed-node-1] 2025-05-31 19:36:44.524358 | orchestrator | 2025-05-31 19:36:44.524982 | 
orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2025-05-31 19:36:44.525826 | orchestrator | Saturday 31 May 2025 19:36:44 +0000 (0:00:01.146) 0:00:41.758 ********** 2025-05-31 19:36:45.337830 | orchestrator | ok: [testbed-manager] 2025-05-31 19:36:45.338907 | orchestrator | ok: [testbed-node-3] 2025-05-31 19:36:45.342211 | orchestrator | ok: [testbed-node-4] 2025-05-31 19:36:45.343039 | orchestrator | ok: [testbed-node-5] 2025-05-31 19:36:45.343980 | orchestrator | ok: [testbed-node-0] 2025-05-31 19:36:45.344500 | orchestrator | ok: [testbed-node-1] 2025-05-31 19:36:45.345376 | orchestrator | ok: [testbed-node-2] 2025-05-31 19:36:45.345828 | orchestrator | 2025-05-31 19:36:45.347419 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2025-05-31 19:36:45.348546 | orchestrator | Saturday 31 May 2025 19:36:45 +0000 (0:00:00.818) 0:00:42.576 ********** 2025-05-31 19:36:45.607858 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 19:36:45.608918 | orchestrator | 2025-05-31 19:36:45.611858 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2025-05-31 19:36:45.612357 | orchestrator | Saturday 31 May 2025 19:36:45 +0000 (0:00:00.268) 0:00:42.845 ********** 2025-05-31 19:36:46.616371 | orchestrator | changed: [testbed-manager] 2025-05-31 19:36:46.616709 | orchestrator | changed: [testbed-node-3] 2025-05-31 19:36:46.616734 | orchestrator | changed: [testbed-node-5] 2025-05-31 19:36:46.617033 | orchestrator | changed: [testbed-node-4] 2025-05-31 19:36:46.617833 | orchestrator | changed: [testbed-node-1] 2025-05-31 19:36:46.618680 | orchestrator | changed: [testbed-node-2] 2025-05-31 19:36:46.618901 | orchestrator | changed: [testbed-node-0] 2025-05-31 19:36:46.619605 | orchestrator | 2025-05-31 19:36:46.620725 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************ 2025-05-31 19:36:46.621124 | orchestrator | Saturday 31 May 2025 19:36:46 +0000 (0:00:01.009) 0:00:43.854 ********** 2025-05-31 19:36:46.723870 | orchestrator | skipping: [testbed-manager] 2025-05-31 19:36:46.750623 | orchestrator | skipping: [testbed-node-3] 2025-05-31 19:36:46.772377 | orchestrator | skipping: [testbed-node-4] 2025-05-31 19:36:46.901930 | orchestrator | skipping: [testbed-node-5] 2025-05-31 19:36:46.902116 | orchestrator | skipping: [testbed-node-0] 2025-05-31 19:36:46.902877 | orchestrator | skipping: [testbed-node-1] 2025-05-31 19:36:46.904390 | orchestrator | skipping: [testbed-node-2] 2025-05-31 19:36:46.905793 | orchestrator | 2025-05-31 19:36:46.906737 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2025-05-31 19:36:46.908061 | orchestrator | Saturday 31 May 2025 19:36:46 +0000 (0:00:00.285) 0:00:44.140 ********** 2025-05-31 19:36:58.128951 | orchestrator | changed: [testbed-node-3] 2025-05-31 19:36:58.129073 | orchestrator | changed: [testbed-node-1] 2025-05-31 19:36:58.129618 | orchestrator | changed: [testbed-node-2] 2025-05-31 19:36:58.130170 | orchestrator | changed: [testbed-node-4] 2025-05-31 19:36:58.132653 | orchestrator | changed: [testbed-node-5] 2025-05-31 19:36:58.133202 | orchestrator | changed: [testbed-node-0] 2025-05-31 19:36:58.134347 | orchestrator | changed: 
[testbed-manager] 2025-05-31 19:36:58.135064 | orchestrator | 2025-05-31 19:36:58.135717 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2025-05-31 19:36:58.135895 | orchestrator | Saturday 31 May 2025 19:36:58 +0000 (0:00:11.223) 0:00:55.363 ********** 2025-05-31 19:36:59.678014 | orchestrator | ok: [testbed-node-0] 2025-05-31 19:36:59.682245 | orchestrator | ok: [testbed-node-4] 2025-05-31 19:36:59.682360 | orchestrator | ok: [testbed-node-3] 2025-05-31 19:36:59.682375 | orchestrator | ok: [testbed-node-1] 2025-05-31 19:36:59.682486 | orchestrator | ok: [testbed-node-5] 2025-05-31 19:36:59.683307 | orchestrator | ok: [testbed-manager] 2025-05-31 19:36:59.684511 | orchestrator | ok: [testbed-node-2] 2025-05-31 19:36:59.685337 | orchestrator | 2025-05-31 19:36:59.685483 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2025-05-31 19:36:59.686410 | orchestrator | Saturday 31 May 2025 19:36:59 +0000 (0:00:01.550) 0:00:56.914 ********** 2025-05-31 19:37:00.582361 | orchestrator | ok: [testbed-manager] 2025-05-31 19:37:00.584670 | orchestrator | ok: [testbed-node-3] 2025-05-31 19:37:00.584707 | orchestrator | ok: [testbed-node-4] 2025-05-31 19:37:00.584719 | orchestrator | ok: [testbed-node-5] 2025-05-31 19:37:00.584730 | orchestrator | ok: [testbed-node-0] 2025-05-31 19:37:00.585328 | orchestrator | ok: [testbed-node-1] 2025-05-31 19:37:00.586232 | orchestrator | ok: [testbed-node-2] 2025-05-31 19:37:00.586702 | orchestrator | 2025-05-31 19:37:00.587245 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2025-05-31 19:37:00.587688 | orchestrator | Saturday 31 May 2025 19:37:00 +0000 (0:00:00.901) 0:00:57.816 ********** 2025-05-31 19:37:00.663941 | orchestrator | ok: [testbed-manager] 2025-05-31 19:37:00.697257 | orchestrator | ok: [testbed-node-3] 2025-05-31 19:37:00.722092 | orchestrator | ok: [testbed-node-4] 2025-05-31 19:37:00.750547 | orchestrator | ok: [testbed-node-5] 2025-05-31 19:37:00.808517 | orchestrator | ok: [testbed-node-0] 2025-05-31 19:37:00.809391 | orchestrator | ok: [testbed-node-1] 2025-05-31 19:37:00.811089 | orchestrator | ok: [testbed-node-2] 2025-05-31 19:37:00.812186 | orchestrator | 2025-05-31 19:37:00.813490 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2025-05-31 19:37:00.815276 | orchestrator | Saturday 31 May 2025 19:37:00 +0000 (0:00:00.230) 0:00:58.046 ********** 2025-05-31 19:37:00.890907 | orchestrator | ok: [testbed-manager] 2025-05-31 19:37:00.916684 | orchestrator | ok: [testbed-node-3] 2025-05-31 19:37:00.947665 | orchestrator | ok: [testbed-node-4] 2025-05-31 19:37:00.972379 | orchestrator | ok: [testbed-node-5] 2025-05-31 19:37:01.036931 | orchestrator | ok: [testbed-node-0] 2025-05-31 19:37:01.037670 | orchestrator | ok: [testbed-node-1] 2025-05-31 19:37:01.038715 | orchestrator | ok: [testbed-node-2] 2025-05-31 19:37:01.041114 | orchestrator | 2025-05-31 19:37:01.041510 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2025-05-31 19:37:01.042600 | orchestrator | Saturday 31 May 2025 19:37:01 +0000 (0:00:00.227) 0:00:58.274 ********** 2025-05-31 19:37:01.322635 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, 
testbed-node-2 2025-05-31 19:37:01.324071 | orchestrator | 2025-05-31 19:37:01.325604 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2025-05-31 19:37:01.326628 | orchestrator | Saturday 31 May 2025 19:37:01 +0000 (0:00:00.283) 0:00:58.558 ********** 2025-05-31 19:37:02.752548 | orchestrator | ok: [testbed-node-3] 2025-05-31 19:37:02.752904 | orchestrator | ok: [testbed-node-4] 2025-05-31 19:37:02.754165 | orchestrator | ok: [testbed-node-5] 2025-05-31 19:37:02.755242 | orchestrator | ok: [testbed-node-1] 2025-05-31 19:37:02.756522 | orchestrator | ok: [testbed-node-2] 2025-05-31 19:37:02.757815 | orchestrator | ok: [testbed-node-0] 2025-05-31 19:37:02.758600 | orchestrator | ok: [testbed-manager] 2025-05-31 19:37:02.759589 | orchestrator | 2025-05-31 19:37:02.759979 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 2025-05-31 19:37:02.761222 | orchestrator | Saturday 31 May 2025 19:37:02 +0000 (0:00:01.429) 0:00:59.988 ********** 2025-05-31 19:37:03.303219 | orchestrator | changed: [testbed-manager] 2025-05-31 19:37:03.303314 | orchestrator | changed: [testbed-node-1] 2025-05-31 19:37:03.304732 | orchestrator | changed: [testbed-node-3] 2025-05-31 19:37:03.304756 | orchestrator | changed: [testbed-node-5] 2025-05-31 19:37:03.305402 | orchestrator | changed: [testbed-node-0] 2025-05-31 19:37:03.306891 | orchestrator | changed: [testbed-node-4] 2025-05-31 19:37:03.307841 | orchestrator | changed: [testbed-node-2] 2025-05-31 19:37:03.308406 | orchestrator | 2025-05-31 19:37:03.309091 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2025-05-31 19:37:03.309544 | orchestrator | Saturday 31 May 2025 19:37:03 +0000 (0:00:00.552) 0:01:00.540 ********** 2025-05-31 19:37:03.378429 | orchestrator | ok: [testbed-manager] 2025-05-31 19:37:03.407734 | orchestrator | ok: [testbed-node-3] 2025-05-31 19:37:03.428325 | orchestrator | ok: [testbed-node-4] 2025-05-31 19:37:03.507683 | orchestrator | ok: [testbed-node-5] 2025-05-31 19:37:03.508331 | orchestrator | ok: [testbed-node-0] 2025-05-31 19:37:03.508951 | orchestrator | ok: [testbed-node-1] 2025-05-31 19:37:03.509999 | orchestrator | ok: [testbed-node-2] 2025-05-31 19:37:03.510534 | orchestrator | 2025-05-31 19:37:03.511078 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2025-05-31 19:37:03.511605 | orchestrator | Saturday 31 May 2025 19:37:03 +0000 (0:00:00.206) 0:01:00.747 ********** 2025-05-31 19:37:04.468805 | orchestrator | ok: [testbed-node-3] 2025-05-31 19:37:04.468893 | orchestrator | ok: [testbed-node-4] 2025-05-31 19:37:04.469124 | orchestrator | ok: [testbed-node-5] 2025-05-31 19:37:04.470059 | orchestrator | ok: [testbed-node-1] 2025-05-31 19:37:04.470712 | orchestrator | ok: [testbed-node-2] 2025-05-31 19:37:04.471340 | orchestrator | ok: [testbed-node-0] 2025-05-31 19:37:04.472059 | orchestrator | ok: [testbed-manager] 2025-05-31 19:37:04.473513 | orchestrator | 2025-05-31 19:37:04.473544 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2025-05-31 19:37:04.474160 | orchestrator | Saturday 31 May 2025 19:37:04 +0000 (0:00:00.958) 0:01:01.705 ********** 2025-05-31 19:37:06.008948 | orchestrator | changed: [testbed-node-4] 2025-05-31 19:37:06.009705 | orchestrator | changed: [testbed-node-5] 2025-05-31 19:37:06.010179 | orchestrator | ok: [testbed-node-3] 2025-05-31 19:37:06.012277 | 
orchestrator | changed: [testbed-node-2] 2025-05-31 19:37:06.012853 | orchestrator | changed: [testbed-manager] 2025-05-31 19:37:06.013666 | orchestrator | ok: [testbed-node-0] 2025-05-31 19:37:06.014867 | orchestrator | ok: [testbed-node-1] 2025-05-31 19:37:06.015803 | orchestrator | 2025-05-31 19:37:06.015840 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2025-05-31 19:37:06.017816 | orchestrator | Saturday 31 May 2025 19:37:05 +0000 (0:00:01.539) 0:01:03.245 ********** 2025-05-31 19:37:17.349700 | orchestrator | ok: [testbed-node-4] 2025-05-31 19:37:17.349820 | orchestrator | ok: [testbed-node-2] 2025-05-31 19:37:17.351283 | orchestrator | ok: [testbed-node-5] 2025-05-31 19:37:17.352040 | orchestrator | ok: [testbed-manager] 2025-05-31 19:37:17.353114 | orchestrator | changed: [testbed-node-3] 2025-05-31 19:37:17.354942 | orchestrator | changed: [testbed-node-1] 2025-05-31 19:37:17.355917 | orchestrator | changed: [testbed-node-0] 2025-05-31 19:37:17.356556 | orchestrator | 2025-05-31 19:37:17.357498 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2025-05-31 19:37:17.358180 | orchestrator | Saturday 31 May 2025 19:37:17 +0000 (0:00:11.339) 0:01:14.584 ********** 2025-05-31 19:37:53.240899 | orchestrator | ok: [testbed-manager] 2025-05-31 19:37:53.240971 | orchestrator | ok: [testbed-node-1] 2025-05-31 19:37:53.240981 | orchestrator | ok: [testbed-node-5] 2025-05-31 19:37:53.240990 | orchestrator | ok: [testbed-node-4] 2025-05-31 19:37:53.242095 | orchestrator | ok: [testbed-node-2] 2025-05-31 19:37:53.242570 | orchestrator | ok: [testbed-node-3] 2025-05-31 19:37:53.244006 | orchestrator | ok: [testbed-node-0] 2025-05-31 19:37:53.244910 | orchestrator | 2025-05-31 19:37:53.245355 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2025-05-31 19:37:53.246581 | orchestrator | Saturday 31 May 2025 19:37:53 +0000 (0:00:35.889) 0:01:50.474 ********** 2025-05-31 19:39:11.474820 | orchestrator | changed: [testbed-manager] 2025-05-31 19:39:11.474933 | orchestrator | changed: [testbed-node-4] 2025-05-31 19:39:11.474944 | orchestrator | changed: [testbed-node-3] 2025-05-31 19:39:11.474953 | orchestrator | changed: [testbed-node-2] 2025-05-31 19:39:11.474962 | orchestrator | changed: [testbed-node-1] 2025-05-31 19:39:11.475067 | orchestrator | changed: [testbed-node-5] 2025-05-31 19:39:11.475081 | orchestrator | changed: [testbed-node-0] 2025-05-31 19:39:11.475088 | orchestrator | 2025-05-31 19:39:11.475095 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2025-05-31 19:39:11.475512 | orchestrator | Saturday 31 May 2025 19:39:11 +0000 (0:01:18.230) 0:03:08.705 ********** 2025-05-31 19:39:13.216691 | orchestrator | ok: [testbed-manager] 2025-05-31 19:39:13.216798 | orchestrator | ok: [testbed-node-4] 2025-05-31 19:39:13.216921 | orchestrator | ok: [testbed-node-3] 2025-05-31 19:39:13.218785 | orchestrator | ok: [testbed-node-5] 2025-05-31 19:39:13.219362 | orchestrator | ok: [testbed-node-0] 2025-05-31 19:39:13.221036 | orchestrator | ok: [testbed-node-1] 2025-05-31 19:39:13.221657 | orchestrator | ok: [testbed-node-2] 2025-05-31 19:39:13.222579 | orchestrator | 2025-05-31 19:39:13.223075 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2025-05-31 19:39:13.223799 | orchestrator | Saturday 31 May 2025 19:39:13 +0000 (0:00:01.747) 0:03:10.453 ********** 
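
The osism.commons.packages tasks above follow the standard apt lifecycle on every node: refresh the cache, pre-download and apply pending upgrades, then download and install the required package set before cleaning the cache and pruning orphaned dependencies. A minimal sketch of equivalent tasks with ansible.builtin.apt; the required_packages variable and the exact module options are illustrative assumptions, not the role's actual code:

- name: Update package cache
  ansible.builtin.apt:
    update_cache: true

- name: Download upgrade packages
  ansible.builtin.apt:
    upgrade: dist
    download_only: true  # pre-fetch only; the next task applies them

- name: Upgrade packages
  ansible.builtin.apt:
    upgrade: dist

- name: Install required packages
  ansible.builtin.apt:
    name: "{{ required_packages }}"  # hypothetical list variable
    state: present

- name: Remove useless packages from the cache
  ansible.builtin.apt:
    autoclean: true

- name: Remove dependencies that are no longer required
  ansible.builtin.apt:
    autoremove: true

The 0:01:18 runtime on "Install required packages" is the dominant cost in this play so far, since all seven hosts run their apt transactions inside that one task.
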
2025-05-31 19:39:24.837852 | orchestrator | ok: [testbed-node-3]
2025-05-31 19:39:24.935839 | orchestrator | ok: [testbed-node-4]
2025-05-31 19:39:24.935948 | orchestrator | ok: [testbed-node-5]
2025-05-31 19:39:24.935961 | orchestrator | ok: [testbed-node-1]
2025-05-31 19:39:24.935972 | orchestrator | ok: [testbed-node-2]
2025-05-31 19:39:24.935982 | orchestrator | ok: [testbed-node-0]
2025-05-31 19:39:24.935993 | orchestrator | changed: [testbed-manager]
2025-05-31 19:39:24.936004 | orchestrator |
2025-05-31 19:39:24.936015 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] *****************************
2025-05-31 19:39:24.936027 | orchestrator | Saturday 31 May 2025 19:39:24 +0000 (0:00:11.617) 0:03:22.071 **********
2025-05-31 19:39:25.237011 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
2025-05-31 19:39:25.237145 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]})
2025-05-31 19:39:25.237993 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
2025-05-31 19:39:25.239575 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2025-05-31 19:39:25.239687 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})
2025-05-31 19:39:25.240959 | orchestrator |
2025-05-31 19:39:25.241010 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] ***********
2025-05-31 19:39:25.242004 | orchestrator | Saturday 31 May 2025 19:39:25 +0000 (0:00:00.403) 0:03:22.474 **********
2025-05-31 19:39:25.296149 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-05-31 19:39:25.296364 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-05-31
19:39:25.324348 | orchestrator | skipping: [testbed-manager] 2025-05-31 19:39:25.364944 | orchestrator | skipping: [testbed-node-3] 2025-05-31 19:39:25.365438 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-05-31 19:39:25.366591 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-05-31 19:39:25.391673 | orchestrator | skipping: [testbed-node-4] 2025-05-31 19:39:25.416148 | orchestrator | skipping: [testbed-node-5] 2025-05-31 19:39:26.958295 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-31 19:39:26.958850 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-31 19:39:26.960865 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-05-31 19:39:26.962091 | orchestrator | 2025-05-31 19:39:26.962736 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2025-05-31 19:39:26.963862 | orchestrator | Saturday 31 May 2025 19:39:26 +0000 (0:00:01.719) 0:03:24.194 ********** 2025-05-31 19:39:27.018010 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-05-31 19:39:27.019756 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-05-31 19:39:27.019864 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-05-31 19:39:27.020833 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-05-31 19:39:27.023876 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-05-31 19:39:27.064385 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-05-31 19:39:27.064640 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-05-31 19:39:27.064706 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-05-31 19:39:27.065737 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-05-31 19:39:27.066696 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-05-31 19:39:27.067337 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-05-31 19:39:27.068734 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-05-31 19:39:27.070405 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-05-31 19:39:27.071063 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-05-31 19:39:27.072411 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-05-31 19:39:27.072827 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-05-31 19:39:27.107751 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-05-31 19:39:27.107977 | orchestrator | skipping: [testbed-manager] 2025-05-31 
19:39:27.109509 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-05-31 19:39:27.109805 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-05-31 19:39:27.110279 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-05-31 19:39:27.110486 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-05-31 19:39:27.110630 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-05-31 19:39:27.110891 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-05-31 19:39:27.111230 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-05-31 19:39:27.111507 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-05-31 19:39:27.112986 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-05-31 19:39:27.113063 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-05-31 19:39:27.113392 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-05-31 19:39:27.113810 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-05-31 19:39:27.113924 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-05-31 19:39:27.114172 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-05-31 19:39:27.114384 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-05-31 19:39:27.146169 | orchestrator | skipping: [testbed-node-3] 2025-05-31 19:39:27.146265 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-05-31 19:39:27.146360 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-05-31 19:39:27.146370 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-05-31 19:39:27.146678 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-05-31 19:39:27.146795 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-05-31 19:39:27.147053 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-05-31 19:39:27.148583 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-05-31 19:39:27.148607 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-05-31 19:39:27.173156 | orchestrator | skipping: [testbed-node-4] 2025-05-31 19:39:31.936428 | orchestrator | skipping: [testbed-node-5] 2025-05-31 19:39:31.937144 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-05-31 19:39:31.938584 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-05-31 19:39:31.940287 | orchestrator | changed: 
[testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-05-31 19:39:31.941385 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-05-31 19:39:31.942911 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-05-31 19:39:31.943785 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-05-31 19:39:31.945082 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-05-31 19:39:31.945991 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-05-31 19:39:31.947294 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-05-31 19:39:31.948277 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-05-31 19:39:31.949026 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-05-31 19:39:31.950110 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-05-31 19:39:31.951915 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-05-31 19:39:31.952908 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-05-31 19:39:31.953352 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-05-31 19:39:31.954145 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-05-31 19:39:31.954914 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-05-31 19:39:31.955404 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-05-31 19:39:31.956400 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-05-31 19:39:31.957020 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-05-31 19:39:31.957656 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-05-31 19:39:31.958113 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-05-31 19:39:31.958818 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-05-31 19:39:31.959232 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-05-31 19:39:31.962621 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-05-31 19:39:31.962687 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-05-31 19:39:31.962702 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-05-31 19:39:31.962714 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-05-31 19:39:31.962904 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-05-31 19:39:31.963239 | orchestrator | changed: [testbed-node-1] => (item={'name': 
'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-05-31 19:39:31.963993 | orchestrator | 2025-05-31 19:39:31.964568 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2025-05-31 19:39:31.965017 | orchestrator | Saturday 31 May 2025 19:39:31 +0000 (0:00:04.979) 0:03:29.173 ********** 2025-05-31 19:39:33.514335 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-31 19:39:33.517426 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-31 19:39:33.517475 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-31 19:39:33.517487 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-31 19:39:33.517991 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-31 19:39:33.519231 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-31 19:39:33.519971 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-05-31 19:39:33.521188 | orchestrator | 2025-05-31 19:39:33.521883 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2025-05-31 19:39:33.522634 | orchestrator | Saturday 31 May 2025 19:39:33 +0000 (0:00:01.578) 0:03:30.752 ********** 2025-05-31 19:39:33.567038 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-05-31 19:39:33.597484 | orchestrator | skipping: [testbed-manager] 2025-05-31 19:39:33.680731 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-05-31 19:39:33.680891 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-05-31 19:39:34.018908 | orchestrator | skipping: [testbed-node-0] 2025-05-31 19:39:34.019472 | orchestrator | skipping: [testbed-node-1] 2025-05-31 19:39:34.023674 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-05-31 19:39:34.026471 | orchestrator | skipping: [testbed-node-2] 2025-05-31 19:39:34.026507 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-05-31 19:39:34.026516 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-05-31 19:39:34.026524 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-05-31 19:39:34.026846 | orchestrator | 2025-05-31 19:39:34.027673 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2025-05-31 19:39:34.028775 | orchestrator | Saturday 31 May 2025 19:39:34 +0000 (0:00:00.505) 0:03:31.257 ********** 2025-05-31 19:39:34.071566 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-05-31 19:39:34.095061 | orchestrator | skipping: [testbed-manager] 2025-05-31 19:39:34.171998 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-05-31 19:39:34.172077 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-05-31 19:39:34.589365 | orchestrator | 
skipping: [testbed-node-0] 2025-05-31 19:39:34.590599 | orchestrator | skipping: [testbed-node-1] 2025-05-31 19:39:34.591877 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-05-31 19:39:34.592863 | orchestrator | skipping: [testbed-node-2] 2025-05-31 19:39:34.592965 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-05-31 19:39:34.594189 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-05-31 19:39:34.594880 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-05-31 19:39:34.596432 | orchestrator | 2025-05-31 19:39:34.596679 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2025-05-31 19:39:34.597032 | orchestrator | Saturday 31 May 2025 19:39:34 +0000 (0:00:00.570) 0:03:31.828 ********** 2025-05-31 19:39:34.666760 | orchestrator | skipping: [testbed-manager] 2025-05-31 19:39:34.692940 | orchestrator | skipping: [testbed-node-3] 2025-05-31 19:39:34.721948 | orchestrator | skipping: [testbed-node-4] 2025-05-31 19:39:34.743200 | orchestrator | skipping: [testbed-node-5] 2025-05-31 19:39:34.850146 | orchestrator | skipping: [testbed-node-0] 2025-05-31 19:39:34.851021 | orchestrator | skipping: [testbed-node-1] 2025-05-31 19:39:34.851823 | orchestrator | skipping: [testbed-node-2] 2025-05-31 19:39:34.853177 | orchestrator | 2025-05-31 19:39:34.854857 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2025-05-31 19:39:34.856246 | orchestrator | Saturday 31 May 2025 19:39:34 +0000 (0:00:00.260) 0:03:32.089 ********** 2025-05-31 19:39:40.337520 | orchestrator | ok: [testbed-node-3] 2025-05-31 19:39:40.344867 | orchestrator | ok: [testbed-node-5] 2025-05-31 19:39:40.344938 | orchestrator | ok: [testbed-manager] 2025-05-31 19:39:40.344945 | orchestrator | ok: [testbed-node-4] 2025-05-31 19:39:40.346805 | orchestrator | ok: [testbed-node-1] 2025-05-31 19:39:40.347587 | orchestrator | ok: [testbed-node-2] 2025-05-31 19:39:40.348211 | orchestrator | ok: [testbed-node-0] 2025-05-31 19:39:40.348564 | orchestrator | 2025-05-31 19:39:40.349005 | orchestrator | TASK [osism.commons.services : Check services] ********************************* 2025-05-31 19:39:40.349469 | orchestrator | Saturday 31 May 2025 19:39:40 +0000 (0:00:05.484) 0:03:37.573 ********** 2025-05-31 19:39:40.394390 | orchestrator | skipping: [testbed-manager] => (item=nscd)  2025-05-31 19:39:40.428032 | orchestrator | skipping: [testbed-manager] 2025-05-31 19:39:40.428202 | orchestrator | skipping: [testbed-node-3] => (item=nscd)  2025-05-31 19:39:40.463233 | orchestrator | skipping: [testbed-node-3] 2025-05-31 19:39:40.463729 | orchestrator | skipping: [testbed-node-4] => (item=nscd)  2025-05-31 19:39:40.500229 | orchestrator | skipping: [testbed-node-4] 2025-05-31 19:39:40.539159 | orchestrator | skipping: [testbed-node-5] => (item=nscd)  2025-05-31 19:39:40.539906 | orchestrator | skipping: [testbed-node-0] => (item=nscd)  2025-05-31 19:39:40.567678 | orchestrator | skipping: [testbed-node-5] 2025-05-31 19:39:40.637414 | orchestrator | skipping: [testbed-node-1] => (item=nscd)  2025-05-31 19:39:40.637665 | orchestrator | skipping: [testbed-node-0] 2025-05-31 19:39:40.638912 | orchestrator | skipping: [testbed-node-1] 2025-05-31 19:39:40.639610 | orchestrator | skipping: [testbed-node-2] => 
(item=nscd)  2025-05-31 19:39:40.640222 | orchestrator | skipping: [testbed-node-2] 2025-05-31 19:39:40.640993 | orchestrator | 2025-05-31 19:39:40.641758 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2025-05-31 19:39:40.642579 | orchestrator | Saturday 31 May 2025 19:39:40 +0000 (0:00:00.301) 0:03:37.875 ********** 2025-05-31 19:39:41.705361 | orchestrator | ok: [testbed-manager] => (item=cron) 2025-05-31 19:39:41.706922 | orchestrator | ok: [testbed-node-4] => (item=cron) 2025-05-31 19:39:41.707515 | orchestrator | ok: [testbed-node-3] => (item=cron) 2025-05-31 19:39:41.708635 | orchestrator | ok: [testbed-node-5] => (item=cron) 2025-05-31 19:39:41.708979 | orchestrator | ok: [testbed-node-1] => (item=cron) 2025-05-31 19:39:41.709846 | orchestrator | ok: [testbed-node-0] => (item=cron) 2025-05-31 19:39:41.710353 | orchestrator | ok: [testbed-node-2] => (item=cron) 2025-05-31 19:39:41.711048 | orchestrator | 2025-05-31 19:39:41.711700 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2025-05-31 19:39:41.712034 | orchestrator | Saturday 31 May 2025 19:39:41 +0000 (0:00:01.067) 0:03:38.942 ********** 2025-05-31 19:39:42.116981 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 19:39:42.118183 | orchestrator | 2025-05-31 19:39:42.118957 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2025-05-31 19:39:42.119278 | orchestrator | Saturday 31 May 2025 19:39:42 +0000 (0:00:00.412) 0:03:39.355 ********** 2025-05-31 19:39:43.479617 | orchestrator | ok: [testbed-manager] 2025-05-31 19:39:43.480645 | orchestrator | ok: [testbed-node-4] 2025-05-31 19:39:43.481488 | orchestrator | ok: [testbed-node-3] 2025-05-31 19:39:43.482412 | orchestrator | ok: [testbed-node-5] 2025-05-31 19:39:43.483834 | orchestrator | ok: [testbed-node-1] 2025-05-31 19:39:43.483937 | orchestrator | ok: [testbed-node-2] 2025-05-31 19:39:43.484616 | orchestrator | ok: [testbed-node-0] 2025-05-31 19:39:43.484920 | orchestrator | 2025-05-31 19:39:43.485631 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2025-05-31 19:39:43.486222 | orchestrator | Saturday 31 May 2025 19:39:43 +0000 (0:00:01.360) 0:03:40.716 ********** 2025-05-31 19:39:44.099116 | orchestrator | ok: [testbed-manager] 2025-05-31 19:39:44.101860 | orchestrator | ok: [testbed-node-3] 2025-05-31 19:39:44.101913 | orchestrator | ok: [testbed-node-4] 2025-05-31 19:39:44.101931 | orchestrator | ok: [testbed-node-5] 2025-05-31 19:39:44.101949 | orchestrator | ok: [testbed-node-1] 2025-05-31 19:39:44.102689 | orchestrator | ok: [testbed-node-0] 2025-05-31 19:39:44.103797 | orchestrator | ok: [testbed-node-2] 2025-05-31 19:39:44.104309 | orchestrator | 2025-05-31 19:39:44.104931 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2025-05-31 19:39:44.105525 | orchestrator | Saturday 31 May 2025 19:39:44 +0000 (0:00:00.620) 0:03:41.336 ********** 2025-05-31 19:39:44.698457 | orchestrator | changed: [testbed-manager] 2025-05-31 19:39:44.700669 | orchestrator | changed: [testbed-node-3] 2025-05-31 19:39:44.700746 | orchestrator | changed: [testbed-node-4] 2025-05-31 19:39:44.700762 | orchestrator | changed: [testbed-node-5] 
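
The osism.commons.sysctl tasks earlier in this play apply each parameter group only where it is relevant: testbed-manager and testbed-node-0/1/2 report skipping for the compute and k3s_node items while testbed-node-3/4/5 apply them, and the reverse holds for the elasticsearch and rabbitmq items. A minimal sketch of one such task using ansible.posix.sysctl, with the parameter taken from the log; the loop shape, group-based condition, and sysctl file path are illustrative assumptions, not the role's actual code:

- name: Set sysctl parameters on compute
  ansible.posix.sysctl:
    name: "{{ item.name }}"
    value: "{{ item.value }}"
    sysctl_file: /etc/sysctl.d/99-osism.conf  # assumed target file
    state: present
    reload: true  # apply the value immediately via sysctl -p
  loop:
    - { name: net.netfilter.nf_conntrack_max, value: 1048576 }
  when: "'compute' in group_names"  # assumed group name for illustration
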
2025-05-31 19:39:44.702818 | orchestrator | changed: [testbed-node-0] 2025-05-31 19:39:44.702869 | orchestrator | changed: [testbed-node-1] 2025-05-31 19:39:44.703356 | orchestrator | changed: [testbed-node-2] 2025-05-31 19:39:44.703889 | orchestrator | 2025-05-31 19:39:44.704645 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2025-05-31 19:39:44.705065 | orchestrator | Saturday 31 May 2025 19:39:44 +0000 (0:00:00.598) 0:03:41.934 ********** 2025-05-31 19:39:45.304250 | orchestrator | ok: [testbed-manager] 2025-05-31 19:39:45.307176 | orchestrator | ok: [testbed-node-5] 2025-05-31 19:39:45.307348 | orchestrator | ok: [testbed-node-3] 2025-05-31 19:39:45.308876 | orchestrator | ok: [testbed-node-0] 2025-05-31 19:39:45.310201 | orchestrator | ok: [testbed-node-4] 2025-05-31 19:39:45.311108 | orchestrator | ok: [testbed-node-1] 2025-05-31 19:39:45.312034 | orchestrator | ok: [testbed-node-2] 2025-05-31 19:39:45.312695 | orchestrator | 2025-05-31 19:39:45.313404 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2025-05-31 19:39:45.314000 | orchestrator | Saturday 31 May 2025 19:39:45 +0000 (0:00:00.606) 0:03:42.541 ********** 2025-05-31 19:39:46.259064 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748719075.1884277, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-31 19:39:46.260061 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748719123.8290076, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-31 19:39:46.261006 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748719119.151067, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-31 19:39:46.262095 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748719116.2741115, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-31 19:39:46.263524 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748719154.7788167, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-31 19:39:46.264451 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748719140.9431973, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-31 19:39:46.265503 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748719137.4184074, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-31 19:39:46.266092 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748719097.518652, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-31 19:39:46.266957 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748719044.036211, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-31 19:39:46.267962 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748719032.447467, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-31 19:39:46.268801 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748719039.7704458, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-31 19:39:46.269442 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748719034.013621, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-31 19:39:46.270318 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748719054.2819138, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-31 19:39:46.270725 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748719037.0919948, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-31 19:39:46.272375 | orchestrator | 2025-05-31 19:39:46.273168 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2025-05-31 19:39:46.273852 | orchestrator | Saturday 31 May 2025 19:39:46 +0000 (0:00:00.955) 0:03:43.497 ********** 2025-05-31 19:39:47.367613 | orchestrator | changed: [testbed-manager] 2025-05-31 19:39:47.367801 | orchestrator | changed: [testbed-node-3] 2025-05-31 19:39:47.368382 | orchestrator | changed: [testbed-node-4] 2025-05-31 19:39:47.370164 | orchestrator | changed: [testbed-node-5] 2025-05-31 19:39:47.370605 | orchestrator | changed: [testbed-node-0] 2025-05-31 19:39:47.371774 | orchestrator | changed: [testbed-node-1] 2025-05-31 19:39:47.372920 | orchestrator | changed: [testbed-node-2] 2025-05-31 19:39:47.373839 | orchestrator | 2025-05-31 19:39:47.374293 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2025-05-31 19:39:47.375327 | orchestrator | Saturday 31 May 2025 19:39:47 +0000 (0:00:01.108) 0:03:44.605 ********** 2025-05-31 19:39:48.534488 | orchestrator | changed: 
[testbed-manager] 2025-05-31 19:39:48.535348 | orchestrator | changed: [testbed-node-3] 2025-05-31 19:39:48.538281 | orchestrator | changed: [testbed-node-4] 2025-05-31 19:39:48.539228 | orchestrator | changed: [testbed-node-5] 2025-05-31 19:39:48.540668 | orchestrator | changed: [testbed-node-1] 2025-05-31 19:39:48.541138 | orchestrator | changed: [testbed-node-2] 2025-05-31 19:39:48.542196 | orchestrator | changed: [testbed-node-0] 2025-05-31 19:39:48.543109 | orchestrator | 2025-05-31 19:39:48.543602 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2025-05-31 19:39:48.544063 | orchestrator | Saturday 31 May 2025 19:39:48 +0000 (0:00:01.165) 0:03:45.771 ********** 2025-05-31 19:39:49.649739 | orchestrator | changed: [testbed-manager] 2025-05-31 19:39:49.649906 | orchestrator | changed: [testbed-node-3] 2025-05-31 19:39:49.650534 | orchestrator | changed: [testbed-node-4] 2025-05-31 19:39:49.651718 | orchestrator | changed: [testbed-node-5] 2025-05-31 19:39:49.653068 | orchestrator | changed: [testbed-node-1] 2025-05-31 19:39:49.655906 | orchestrator | changed: [testbed-node-0] 2025-05-31 19:39:49.657241 | orchestrator | changed: [testbed-node-2] 2025-05-31 19:39:49.657538 | orchestrator | 2025-05-31 19:39:49.658921 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2025-05-31 19:39:49.659397 | orchestrator | Saturday 31 May 2025 19:39:49 +0000 (0:00:01.110) 0:03:46.882 ********** 2025-05-31 19:39:49.709665 | orchestrator | skipping: [testbed-manager] 2025-05-31 19:39:49.740638 | orchestrator | skipping: [testbed-node-3] 2025-05-31 19:39:49.782419 | orchestrator | skipping: [testbed-node-4] 2025-05-31 19:39:49.826258 | orchestrator | skipping: [testbed-node-5] 2025-05-31 19:39:49.859486 | orchestrator | skipping: [testbed-node-0] 2025-05-31 19:39:49.918003 | orchestrator | skipping: [testbed-node-1] 2025-05-31 19:39:49.924931 | orchestrator | skipping: [testbed-node-2] 2025-05-31 19:39:49.924990 | orchestrator | 2025-05-31 19:39:49.925885 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2025-05-31 19:39:49.925921 | orchestrator | Saturday 31 May 2025 19:39:49 +0000 (0:00:00.274) 0:03:47.156 ********** 2025-05-31 19:39:50.627082 | orchestrator | ok: [testbed-manager] 2025-05-31 19:39:50.629131 | orchestrator | ok: [testbed-node-3] 2025-05-31 19:39:50.629395 | orchestrator | ok: [testbed-node-4] 2025-05-31 19:39:50.630487 | orchestrator | ok: [testbed-node-5] 2025-05-31 19:39:50.631647 | orchestrator | ok: [testbed-node-0] 2025-05-31 19:39:50.632721 | orchestrator | ok: [testbed-node-1] 2025-05-31 19:39:50.633649 | orchestrator | ok: [testbed-node-2] 2025-05-31 19:39:50.634645 | orchestrator | 2025-05-31 19:39:50.635569 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2025-05-31 19:39:50.636698 | orchestrator | Saturday 31 May 2025 19:39:50 +0000 (0:00:00.708) 0:03:47.864 ********** 2025-05-31 19:39:50.992340 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 19:39:50.992751 | orchestrator | 2025-05-31 19:39:50.993739 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2025-05-31 19:39:50.995072 | orchestrator | Saturday 31 May 2025 19:39:50 +0000 
2025-05-31 19:39:50.635569 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ********
2025-05-31 19:39:50.636698 | orchestrator | Saturday 31 May 2025 19:39:50 +0000 (0:00:00.708) 0:03:47.864 **********
2025-05-31 19:39:50.992340 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-31 19:39:50.992751 | orchestrator |
2025-05-31 19:39:50.993739 | orchestrator | TASK [osism.services.rng : Install rng package] ********************************
2025-05-31 19:39:50.995072 | orchestrator | Saturday 31 May 2025 19:39:50 +0000 (0:00:00.364) 0:03:48.229 **********
2025-05-31 19:39:58.938090 | orchestrator | ok: [testbed-manager]
2025-05-31 19:39:58.939394 | orchestrator | changed: [testbed-node-4]
2025-05-31 19:39:58.940697 | orchestrator | changed: [testbed-node-5]
2025-05-31 19:39:58.941582 | orchestrator | changed: [testbed-node-3]
2025-05-31 19:39:58.943055 | orchestrator | changed: [testbed-node-1]
2025-05-31 19:39:58.943935 | orchestrator | changed: [testbed-node-2]
2025-05-31 19:39:58.944524 | orchestrator | changed: [testbed-node-0]
2025-05-31 19:39:58.945976 | orchestrator |
2025-05-31 19:39:58.946838 | orchestrator | TASK [osism.services.rng : Remove haveged package] *****************************
2025-05-31 19:39:58.948048 | orchestrator | Saturday 31 May 2025 19:39:58 +0000 (0:00:07.945) 0:03:56.174 **********
2025-05-31 19:40:00.227628 | orchestrator | ok: [testbed-manager]
2025-05-31 19:40:00.228822 | orchestrator | ok: [testbed-node-3]
2025-05-31 19:40:00.229321 | orchestrator | ok: [testbed-node-4]
2025-05-31 19:40:00.230679 | orchestrator | ok: [testbed-node-5]
2025-05-31 19:40:00.230996 | orchestrator | ok: [testbed-node-1]
2025-05-31 19:40:00.231790 | orchestrator | ok: [testbed-node-2]
2025-05-31 19:40:00.232462 | orchestrator | ok: [testbed-node-0]
2025-05-31 19:40:00.233627 | orchestrator |
2025-05-31 19:40:00.233945 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2025-05-31 19:40:00.235142 | orchestrator | Saturday 31 May 2025 19:40:00 +0000 (0:00:01.291) 0:03:57.465 **********
2025-05-31 19:40:01.260250 | orchestrator | ok: [testbed-manager]
2025-05-31 19:40:01.263442 | orchestrator | ok: [testbed-node-3]
2025-05-31 19:40:01.263861 | orchestrator | ok: [testbed-node-4]
2025-05-31 19:40:01.264733 | orchestrator | ok: [testbed-node-5]
2025-05-31 19:40:01.265679 | orchestrator | ok: [testbed-node-0]
2025-05-31 19:40:01.266623 | orchestrator | ok: [testbed-node-1]
2025-05-31 19:40:01.268611 | orchestrator | ok: [testbed-node-2]
2025-05-31 19:40:01.269476 | orchestrator |
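The rng role replaces the haveged entropy daemon with a hardware RNG feeder and ensures the service is running. A sketch under the assumption that the package is rng-tools and the unit is rngd (the real role resolves distribution-specific names via install-Debian-family.yml):

    - name: Install rng package
      ansible.builtin.apt:
        name: rng-tools        # package name assumed
        state: present

    - name: Remove haveged package
      ansible.builtin.apt:
        name: haveged
        state: absent

    - name: Manage rng service
      ansible.builtin.service:
        name: rngd             # service name assumed
        state: started
        enabled: true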
2025-05-31 19:40:01.270088 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2025-05-31 19:40:01.271060 | orchestrator | Saturday 31 May 2025 19:40:01 +0000 (0:00:01.031) 0:03:58.496 **********
2025-05-31 19:40:01.724437 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-31 19:40:01.724663 | orchestrator |
2025-05-31 19:40:01.725470 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2025-05-31 19:40:01.725996 | orchestrator | Saturday 31 May 2025 19:40:01 +0000 (0:00:00.465) 0:03:58.962 **********
2025-05-31 19:40:10.237013 | orchestrator | changed: [testbed-manager]
2025-05-31 19:40:10.237575 | orchestrator | changed: [testbed-node-3]
2025-05-31 19:40:10.238800 | orchestrator | changed: [testbed-node-1]
2025-05-31 19:40:10.243173 | orchestrator | changed: [testbed-node-5]
2025-05-31 19:40:10.243836 | orchestrator | changed: [testbed-node-4]
2025-05-31 19:40:10.244036 | orchestrator | changed: [testbed-node-2]
2025-05-31 19:40:10.244713 | orchestrator | changed: [testbed-node-0]
2025-05-31 19:40:10.245128 | orchestrator |
2025-05-31 19:40:10.245705 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2025-05-31 19:40:10.246164 | orchestrator | Saturday 31 May 2025 19:40:10 +0000 (0:00:08.511) 0:04:07.473 **********
2025-05-31 19:40:10.951950 | orchestrator | changed: [testbed-manager]
2025-05-31 19:40:10.952035 | orchestrator | changed: [testbed-node-3]
2025-05-31 19:40:10.952409 | orchestrator | changed: [testbed-node-4]
2025-05-31 19:40:10.953705 | orchestrator | changed: [testbed-node-5]
2025-05-31 19:40:10.954306 | orchestrator | changed: [testbed-node-0]
2025-05-31 19:40:10.955000 | orchestrator | changed: [testbed-node-1]
2025-05-31 19:40:10.955982 | orchestrator | changed: [testbed-node-2]
2025-05-31 19:40:10.956376 | orchestrator |
2025-05-31 19:40:10.956913 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2025-05-31 19:40:10.957393 | orchestrator | Saturday 31 May 2025 19:40:10 +0000 (0:00:00.715) 0:04:08.189 **********
2025-05-31 19:40:12.104719 | orchestrator | changed: [testbed-manager]
2025-05-31 19:40:12.105675 | orchestrator | changed: [testbed-node-3]
2025-05-31 19:40:12.106910 | orchestrator | changed: [testbed-node-4]
2025-05-31 19:40:12.107745 | orchestrator | changed: [testbed-node-5]
2025-05-31 19:40:12.108700 | orchestrator | changed: [testbed-node-0]
2025-05-31 19:40:12.109687 | orchestrator | changed: [testbed-node-1]
2025-05-31 19:40:12.110863 | orchestrator | changed: [testbed-node-2]
2025-05-31 19:40:12.111805 | orchestrator |
2025-05-31 19:40:12.112389 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2025-05-31 19:40:12.113936 | orchestrator | Saturday 31 May 2025 19:40:12 +0000 (0:00:01.152) 0:04:09.341 **********
2025-05-31 19:40:13.192876 | orchestrator | changed: [testbed-manager]
2025-05-31 19:40:13.193144 | orchestrator | changed: [testbed-node-3]
2025-05-31 19:40:13.197871 | orchestrator | changed: [testbed-node-4]
2025-05-31 19:40:13.197927 | orchestrator | changed: [testbed-node-5]
2025-05-31 19:40:13.197938 | orchestrator | changed: [testbed-node-0]
2025-05-31 19:40:13.197949 | orchestrator | changed: [testbed-node-1]
2025-05-31 19:40:13.197959 | orchestrator | changed: [testbed-node-2]
2025-05-31 19:40:13.198516 | orchestrator |
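The smartd sequence is a classic install/configure/enable pattern: package, log directory, configuration file, then the service. A hedged sketch (template and handler names are illustrative, not copied from the osism.services.smartd role); the notified restart handler fires later in the run, in the RUNNING HANDLER entries below, because handlers are deferred until flushed:

    - name: Install smartmontools package
      ansible.builtin.apt:
        name: smartmontools
        state: present

    - name: Create /var/log/smartd directory
      ansible.builtin.file:
        path: /var/log/smartd
        state: directory
        mode: "0755"

    - name: Copy smartmontools configuration file
      ansible.builtin.template:
        src: smartd.conf.j2    # template name assumed
        dest: /etc/smartd.conf
        mode: "0644"
      notify: Restart smartd service

    - name: Manage smartd service
      ansible.builtin.service:
        name: smartd
        state: started
        enabled: true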
2025-05-31 19:40:13.198572 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ******
2025-05-31 19:40:13.199599 | orchestrator | Saturday 31 May 2025 19:40:13 +0000 (0:00:01.087) 0:04:10.429 **********
2025-05-31 19:40:13.299655 | orchestrator | ok: [testbed-manager]
2025-05-31 19:40:13.335223 | orchestrator | ok: [testbed-node-3]
2025-05-31 19:40:13.369685 | orchestrator | ok: [testbed-node-4]
2025-05-31 19:40:13.402622 | orchestrator | ok: [testbed-node-5]
2025-05-31 19:40:13.457281 | orchestrator | ok: [testbed-node-0]
2025-05-31 19:40:13.457876 | orchestrator | ok: [testbed-node-1]
2025-05-31 19:40:13.458396 | orchestrator | ok: [testbed-node-2]
2025-05-31 19:40:13.459605 | orchestrator |
2025-05-31 19:40:13.459627 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2025-05-31 19:40:13.460672 | orchestrator | Saturday 31 May 2025 19:40:13 +0000 (0:00:00.268) 0:04:10.697 **********
2025-05-31 19:40:13.568478 | orchestrator | ok: [testbed-manager]
2025-05-31 19:40:13.601678 | orchestrator | ok: [testbed-node-3]
2025-05-31 19:40:13.638247 | orchestrator | ok: [testbed-node-4]
2025-05-31 19:40:13.676076 | orchestrator | ok: [testbed-node-5]
2025-05-31 19:40:13.739927 | orchestrator | ok: [testbed-node-0]
2025-05-31 19:40:13.740113 | orchestrator | ok: [testbed-node-1]
2025-05-31 19:40:13.740238 | orchestrator | ok: [testbed-node-2]
2025-05-31 19:40:13.741335 | orchestrator |
2025-05-31 19:40:13.742493 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2025-05-31 19:40:13.742563 | orchestrator | Saturday 31 May 2025 19:40:13 +0000 (0:00:00.281) 0:04:10.978 **********
2025-05-31 19:40:13.841028 | orchestrator | ok: [testbed-manager]
2025-05-31 19:40:13.874144 | orchestrator | ok: [testbed-node-3]
2025-05-31 19:40:13.907746 | orchestrator | ok: [testbed-node-4]
2025-05-31 19:40:13.936746 | orchestrator | ok: [testbed-node-5]
2025-05-31 19:40:14.008125 | orchestrator | ok: [testbed-node-0]
2025-05-31 19:40:14.008520 | orchestrator | ok: [testbed-node-1]
2025-05-31 19:40:14.009506 | orchestrator | ok: [testbed-node-2]
2025-05-31 19:40:14.009973 | orchestrator |
2025-05-31 19:40:14.011519 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2025-05-31 19:40:14.012095 | orchestrator | Saturday 31 May 2025 19:40:13 +0000 (0:00:00.269) 0:04:11.248 **********
2025-05-31 19:40:19.731891 | orchestrator | ok: [testbed-manager]
2025-05-31 19:40:19.732123 | orchestrator | ok: [testbed-node-3]
2025-05-31 19:40:19.733291 | orchestrator | ok: [testbed-node-4]
2025-05-31 19:40:19.740863 | orchestrator | ok: [testbed-node-2]
2025-05-31 19:40:19.741558 | orchestrator | ok: [testbed-node-5]
2025-05-31 19:40:19.742941 | orchestrator | ok: [testbed-node-0]
2025-05-31 19:40:19.744586 | orchestrator | ok: [testbed-node-1]
2025-05-31 19:40:19.744836 | orchestrator |
2025-05-31 19:40:19.745645 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2025-05-31 19:40:19.746232 | orchestrator | Saturday 31 May 2025 19:40:19 +0000 (0:00:05.723) 0:04:16.971 **********
2025-05-31 19:40:20.103330 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-31 19:40:20.104592 | orchestrator |
2025-05-31 19:40:20.105834 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2025-05-31 19:40:20.106800 | orchestrator | Saturday 31 May 2025 19:40:20 +0000 (0:00:00.370) 0:04:17.342 **********
2025-05-31 19:40:20.186696 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2025-05-31 19:40:20.186843 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2025-05-31 19:40:20.186928 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2025-05-31 19:40:20.240683 | orchestrator | skipping: [testbed-manager]
2025-05-31 19:40:20.241266 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2025-05-31 19:40:20.241906 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2025-05-31 19:40:20.243233 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2025-05-31 19:40:20.273483 | orchestrator | skipping: [testbed-node-3]
2025-05-31 19:40:20.338154 | orchestrator | skipping: [testbed-node-4]
2025-05-31 19:40:20.338735 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2025-05-31 19:40:20.338986 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2025-05-31 19:40:20.340191 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2025-05-31 19:40:20.340516 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2025-05-31 19:40:20.370003 | orchestrator | skipping: [testbed-node-5]
2025-05-31 19:40:20.451074 | orchestrator | skipping: [testbed-node-0]
2025-05-31 19:40:20.451270 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2025-05-31 19:40:20.451293 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2025-05-31 19:40:20.451629 | orchestrator | skipping: [testbed-node-1]
2025-05-31 19:40:20.452005 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2025-05-31 19:40:20.452573 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2025-05-31 19:40:20.453147 | orchestrator | skipping: [testbed-node-2]
2025-05-31 19:40:20.453562 | orchestrator |
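Disabling the apt-daily timers is reported as skipping on every host and item, meaning the guarding condition evaluated to false in this run. The task shape is a simple loop over systemd timer units; a sketch with an assumed guard variable:

    - name: Disable apt-daily timers
      ansible.builtin.systemd:
        name: "{{ item }}.timer"
        state: stopped
        enabled: false
      loop:
        - apt-daily-upgrade
        - apt-daily
      when: cleanup_disable_apt_timers | default(false)   # variable name assumed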
2025-05-31 19:40:20.454226 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2025-05-31 19:40:20.454423 | orchestrator | Saturday 31 May 2025 19:40:20 +0000 (0:00:00.348) 0:04:17.690 **********
2025-05-31 19:40:20.823715 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-31 19:40:20.824112 | orchestrator |
2025-05-31 19:40:20.824915 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2025-05-31 19:40:20.826089 | orchestrator | Saturday 31 May 2025 19:40:20 +0000 (0:00:00.371) 0:04:18.062 **********
2025-05-31 19:40:20.912320 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2025-05-31 19:40:20.912582 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2025-05-31 19:40:20.945723 | orchestrator | skipping: [testbed-manager]
2025-05-31 19:40:20.945864 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2025-05-31 19:40:20.981239 | orchestrator | skipping: [testbed-node-3]
2025-05-31 19:40:20.981500 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2025-05-31 19:40:21.015740 | orchestrator | skipping: [testbed-node-4]
2025-05-31 19:40:21.016331 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2025-05-31 19:40:21.050125 | orchestrator | skipping: [testbed-node-5]
2025-05-31 19:40:21.109675 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2025-05-31 19:40:21.109864 | orchestrator | skipping: [testbed-node-0]
2025-05-31 19:40:21.112197 | orchestrator | skipping: [testbed-node-1]
2025-05-31 19:40:21.112689 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2025-05-31 19:40:21.113871 | orchestrator | skipping: [testbed-node-2]
2025-05-31 19:40:21.114598 | orchestrator |
2025-05-31 19:40:21.115602 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2025-05-31 19:40:21.116323 | orchestrator | Saturday 31 May 2025 19:40:21 +0000 (0:00:00.285) 0:04:18.347 **********
2025-05-31 19:40:21.578497 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-31 19:40:21.578726 | orchestrator |
2025-05-31 19:40:21.578933 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2025-05-31 19:40:21.580022 | orchestrator | Saturday 31 May 2025 19:40:21 +0000 (0:00:00.468) 0:04:18.816 **********
2025-05-31 19:40:56.518647 | orchestrator | changed: [testbed-manager]
2025-05-31 19:40:56.518751 | orchestrator | changed: [testbed-node-3]
2025-05-31 19:40:56.518762 | orchestrator | changed: [testbed-node-4]
2025-05-31 19:40:56.518770 | orchestrator | changed: [testbed-node-2]
2025-05-31 19:40:56.518776 | orchestrator | changed: [testbed-node-5]
2025-05-31 19:40:56.519734 | orchestrator | changed: [testbed-node-1]
2025-05-31 19:40:56.521722 | orchestrator | changed: [testbed-node-0]
2025-05-31 19:40:56.525003 | orchestrator |
2025-05-31 19:40:56.525096 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2025-05-31 19:40:56.525116 | orchestrator | Saturday 31 May 2025 19:40:56 +0000 (0:00:34.936) 0:04:53.753 **********
2025-05-31 19:41:05.085363 | orchestrator | changed: [testbed-manager]
2025-05-31 19:41:05.085485 | orchestrator | changed: [testbed-node-3]
2025-05-31 19:41:05.086346 | orchestrator | changed: [testbed-node-4]
2025-05-31 19:41:05.087315 | orchestrator | changed: [testbed-node-5]
2025-05-31 19:41:05.088069 | orchestrator | changed: [testbed-node-2]
2025-05-31 19:41:05.089237 | orchestrator | changed: [testbed-node-1]
2025-05-31 19:41:05.091277 | orchestrator | changed: [testbed-node-0]
2025-05-31 19:41:05.091722 | orchestrator |
2025-05-31 19:41:05.092732 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2025-05-31 19:41:05.093089 | orchestrator | Saturday 31 May 2025 19:41:05 +0000 (0:00:08.567) 0:05:02.320 **********
2025-05-31 19:41:13.000046 | orchestrator | changed: [testbed-manager]
2025-05-31 19:41:13.000288 | orchestrator | changed: [testbed-node-3]
2025-05-31 19:41:13.001196 | orchestrator | changed: [testbed-node-4]
2025-05-31 19:41:13.001356 | orchestrator | changed: [testbed-node-5]
2025-05-31 19:41:13.002121 | orchestrator | changed: [testbed-node-2]
2025-05-31 19:41:13.004122 | orchestrator | changed: [testbed-node-1]
2025-05-31 19:41:13.004937 | orchestrator | changed: [testbed-node-0]
2025-05-31 19:41:13.005737 | orchestrator |
2025-05-31 19:41:13.006953 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2025-05-31 19:41:13.007995 | orchestrator | Saturday 31 May 2025 19:41:12 +0000 (0:00:07.915) 0:05:10.236 **********
2025-05-31 19:41:14.872236 | orchestrator | ok: [testbed-manager]
2025-05-31 19:41:14.872715 | orchestrator | ok: [testbed-node-4]
2025-05-31 19:41:14.875246 | orchestrator | ok: [testbed-node-5]
2025-05-31 19:41:14.875311 | orchestrator | ok: [testbed-node-2]
2025-05-31 19:41:14.875892 | orchestrator | ok: [testbed-node-1]
2025-05-31 19:41:14.877028 | orchestrator | ok: [testbed-node-3]
2025-05-31 19:41:14.877882 | orchestrator | ok: [testbed-node-0]
2025-05-31 19:41:14.878458 | orchestrator |
2025-05-31 19:41:14.879727 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2025-05-31 19:41:14.880647 | orchestrator | Saturday 31 May 2025 19:41:14 +0000 (0:00:01.871) 0:05:12.108 **********
2025-05-31 19:41:21.429215 | orchestrator | changed: [testbed-manager]
2025-05-31 19:41:21.430417 | orchestrator | changed: [testbed-node-4]
2025-05-31 19:41:21.430466 | orchestrator | changed: [testbed-node-3]
2025-05-31 19:41:21.431029 | orchestrator | changed: [testbed-node-5]
2025-05-31 19:41:21.432111 | orchestrator | changed: [testbed-node-1]
2025-05-31 19:41:21.433583 | orchestrator | changed: [testbed-node-2]
2025-05-31 19:41:21.434071 | orchestrator | changed: [testbed-node-0]
2025-05-31 19:41:21.435407 | orchestrator |
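The package cleanup block removes unwanted packages, purges cloud-init and unattended-upgrades, and then lets apt tidy up; the cache and dependency steps map directly onto options of the apt module. A minimal sketch (the purge flag is an assumption). Note that 'Cleanup installed packages' alone took almost 35 seconds, the single most expensive step in this part of the run:

    - name: Remove cloudinit package
      ansible.builtin.apt:
        name: cloud-init
        state: absent
        purge: true          # purge behaviour assumed

    - name: Remove useless packages from the cache
      ansible.builtin.apt:
        autoclean: true

    - name: Remove dependencies that are no longer required
      ansible.builtin.apt:
        autoremove: true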
2025-05-31 19:41:21.435452 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2025-05-31 19:41:21.435889 | orchestrator | Saturday 31 May 2025 19:41:21 +0000 (0:00:06.556) 0:05:18.665 **********
2025-05-31 19:41:21.865526 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-31 19:41:21.865639 | orchestrator |
2025-05-31 19:41:21.865984 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2025-05-31 19:41:21.867038 | orchestrator | Saturday 31 May 2025 19:41:21 +0000 (0:00:00.438) 0:05:19.104 **********
2025-05-31 19:41:22.574453 | orchestrator | changed: [testbed-manager]
2025-05-31 19:41:22.574811 | orchestrator | changed: [testbed-node-4]
2025-05-31 19:41:22.578739 | orchestrator | changed: [testbed-node-3]
2025-05-31 19:41:22.579154 | orchestrator | changed: [testbed-node-5]
2025-05-31 19:41:22.580044 | orchestrator | changed: [testbed-node-1]
2025-05-31 19:41:22.580781 | orchestrator | changed: [testbed-node-0]
2025-05-31 19:41:22.581591 | orchestrator | changed: [testbed-node-2]
2025-05-31 19:41:22.581968 | orchestrator |
2025-05-31 19:41:22.582584 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2025-05-31 19:41:22.583084 | orchestrator | Saturday 31 May 2025 19:41:22 +0000 (0:00:00.707) 0:05:19.811 **********
2025-05-31 19:41:24.404265 | orchestrator | ok: [testbed-manager]
2025-05-31 19:41:24.404434 | orchestrator | ok: [testbed-node-5]
2025-05-31 19:41:24.405160 | orchestrator | ok: [testbed-node-3]
2025-05-31 19:41:24.405870 | orchestrator | ok: [testbed-node-1]
2025-05-31 19:41:24.406995 | orchestrator | ok: [testbed-node-2]
2025-05-31 19:41:24.407266 | orchestrator | ok: [testbed-node-4]
2025-05-31 19:41:24.407898 | orchestrator | ok: [testbed-node-0]
2025-05-31 19:41:24.408550 | orchestrator |
2025-05-31 19:41:24.408944 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2025-05-31 19:41:24.409594 | orchestrator | Saturday 31 May 2025 19:41:24 +0000 (0:00:01.828) 0:05:21.640 **********
2025-05-31 19:41:25.199815 | orchestrator | changed: [testbed-manager]
2025-05-31 19:41:25.200145 | orchestrator | changed: [testbed-node-3]
2025-05-31 19:41:25.200747 | orchestrator | changed: [testbed-node-5]
2025-05-31 19:41:25.201572 | orchestrator | changed: [testbed-node-4]
2025-05-31 19:41:25.202751 | orchestrator | changed: [testbed-node-1]
2025-05-31 19:41:25.203557 | orchestrator | changed: [testbed-node-0]
2025-05-31 19:41:25.204434 | orchestrator | changed: [testbed-node-2]
2025-05-31 19:41:25.205236 | orchestrator |
2025-05-31 19:41:25.205919 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2025-05-31 19:41:25.206669 | orchestrator | Saturday 31 May 2025 19:41:25 +0000 (0:00:00.794) 0:05:22.434 **********
2025-05-31 19:41:25.296134 | orchestrator | skipping: [testbed-manager]
2025-05-31 19:41:25.326891 | orchestrator | skipping: [testbed-node-3]
2025-05-31 19:41:25.376951 | orchestrator | skipping: [testbed-node-4]
2025-05-31 19:41:25.412968 | orchestrator | skipping: [testbed-node-5]
2025-05-31 19:41:25.481053 | orchestrator | skipping: [testbed-node-0]
2025-05-31 19:41:25.481148 | orchestrator | skipping: [testbed-node-1]
2025-05-31 19:41:25.481161 | orchestrator | skipping: [testbed-node-2]
2025-05-31 19:41:25.481960 | orchestrator |
2025-05-31 19:41:25.482147 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2025-05-31 19:41:25.485238 | orchestrator | Saturday 31 May 2025 19:41:25 +0000 (0:00:00.281) 0:05:22.715 **********
2025-05-31 19:41:25.559771 | orchestrator | skipping: [testbed-manager]
2025-05-31 19:41:25.593944 | orchestrator | skipping: [testbed-node-3]
2025-05-31 19:41:25.626886 | orchestrator | skipping: [testbed-node-4]
2025-05-31 19:41:25.659561 | orchestrator | skipping: [testbed-node-5]
2025-05-31 19:41:25.689884 | orchestrator | skipping: [testbed-node-0]
2025-05-31 19:41:25.896266 | orchestrator | skipping: [testbed-node-1]
2025-05-31 19:41:25.896679 | orchestrator | skipping: [testbed-node-2]
2025-05-31 19:41:25.897970 | orchestrator |
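Setting the clock zone is two tasks: make sure tzdata is present, then switch the zone; the /etc/adjtime tasks only apply when a hardware clock needs the UTC marker, so they are skipped on these hosts. A sketch using the community.general.timezone module (the actual role may use a different mechanism):

    - name: Install tzdata package
      ansible.builtin.apt:
        name: tzdata
        state: present

    - name: Set timezone to UTC
      community.general.timezone:
        name: Etc/UTC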
2025-05-31 19:41:25.901661 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2025-05-31 19:41:25.901711 | orchestrator | Saturday 31 May 2025 19:41:25 +0000 (0:00:00.418) 0:05:23.134 **********
2025-05-31 19:41:26.002484 | orchestrator | ok: [testbed-manager]
2025-05-31 19:41:26.038871 | orchestrator | ok: [testbed-node-3]
2025-05-31 19:41:26.069589 | orchestrator | ok: [testbed-node-4]
2025-05-31 19:41:26.105446 | orchestrator | ok: [testbed-node-5]
2025-05-31 19:41:26.172632 | orchestrator | ok: [testbed-node-0]
2025-05-31 19:41:26.172733 | orchestrator | ok: [testbed-node-1]
2025-05-31 19:41:26.172909 | orchestrator | ok: [testbed-node-2]
2025-05-31 19:41:26.173393 | orchestrator |
2025-05-31 19:41:26.173875 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2025-05-31 19:41:26.174488 | orchestrator | Saturday 31 May 2025 19:41:26 +0000 (0:00:00.277) 0:05:23.412 **********
2025-05-31 19:41:26.267560 | orchestrator | skipping: [testbed-manager]
2025-05-31 19:41:26.308977 | orchestrator | skipping: [testbed-node-3]
2025-05-31 19:41:26.344799 | orchestrator | skipping: [testbed-node-4]
2025-05-31 19:41:26.386180 | orchestrator | skipping: [testbed-node-5]
2025-05-31 19:41:26.486341 | orchestrator | skipping: [testbed-node-0]
2025-05-31 19:41:26.486604 | orchestrator | skipping: [testbed-node-1]
2025-05-31 19:41:26.487285 | orchestrator | skipping: [testbed-node-2]
2025-05-31 19:41:26.488763 | orchestrator |
2025-05-31 19:41:26.488788 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
2025-05-31 19:41:26.489875 | orchestrator | Saturday 31 May 2025 19:41:26 +0000 (0:00:00.313) 0:05:23.725 **********
2025-05-31 19:41:26.586444 | orchestrator | ok: [testbed-manager]
2025-05-31 19:41:26.622085 | orchestrator | ok: [testbed-node-3]
2025-05-31 19:41:26.658009 | orchestrator | ok: [testbed-node-4]
2025-05-31 19:41:26.711005 | orchestrator | ok: [testbed-node-5]
2025-05-31 19:41:26.789858 | orchestrator | ok: [testbed-node-0]
2025-05-31 19:41:26.791005 | orchestrator | ok: [testbed-node-1]
2025-05-31 19:41:26.791226 | orchestrator | ok: [testbed-node-2]
2025-05-31 19:41:26.792489 | orchestrator |
2025-05-31 19:41:26.793227 | orchestrator | TASK [osism.services.docker : Print used docker version] ***********************
2025-05-31 19:41:26.794118 | orchestrator | Saturday 31 May 2025 19:41:26 +0000 (0:00:00.302) 0:05:24.028 **********
2025-05-31 19:41:26.859011 | orchestrator | ok: [testbed-manager] =>
2025-05-31 19:41:26.861373 | orchestrator |   docker_version: 5:27.5.1
2025-05-31 19:41:26.890575 | orchestrator | ok: [testbed-node-3] =>
2025-05-31 19:41:26.891056 | orchestrator |   docker_version: 5:27.5.1
2025-05-31 19:41:26.926845 | orchestrator | ok: [testbed-node-4] =>
2025-05-31 19:41:26.927981 | orchestrator |   docker_version: 5:27.5.1
2025-05-31 19:41:26.997400 | orchestrator | ok: [testbed-node-5] =>
2025-05-31 19:41:26.998201 | orchestrator |   docker_version: 5:27.5.1
2025-05-31 19:41:27.083604 | orchestrator | ok: [testbed-node-0] =>
2025-05-31 19:41:27.084041 | orchestrator |   docker_version: 5:27.5.1
2025-05-31 19:41:27.084804 | orchestrator | ok: [testbed-node-1] =>
2025-05-31 19:41:27.087388 | orchestrator |   docker_version: 5:27.5.1
2025-05-31 19:41:27.087470 | orchestrator | ok: [testbed-node-2] =>
2025-05-31 19:41:27.088880 | orchestrator |   docker_version: 5:27.5.1
2025-05-31 19:41:27.089606 | orchestrator |
2025-05-31 19:41:27.090304 | orchestrator | TASK [osism.services.docker : Print used docker cli version] *******************
2025-05-31 19:41:27.090877 | orchestrator | Saturday 31 May 2025 19:41:27 +0000 (0:00:00.293) 0:05:24.322 **********
2025-05-31 19:41:27.182082 | orchestrator | ok: [testbed-manager] =>
2025-05-31 19:41:27.182421 | orchestrator |   docker_cli_version: 5:27.5.1
2025-05-31 19:41:27.229753 | orchestrator | ok: [testbed-node-3] =>
2025-05-31 19:41:27.230127 | orchestrator |   docker_cli_version: 5:27.5.1
2025-05-31 19:41:27.365855 | orchestrator | ok: [testbed-node-4] =>
2025-05-31 19:41:27.366666 | orchestrator |   docker_cli_version: 5:27.5.1
2025-05-31 19:41:27.405604 | orchestrator | ok: [testbed-node-5] =>
2025-05-31 19:41:27.406242 | orchestrator |   docker_cli_version: 5:27.5.1
2025-05-31 19:41:27.474702 | orchestrator | ok: [testbed-node-0] =>
2025-05-31 19:41:27.475778 | orchestrator |   docker_cli_version: 5:27.5.1
2025-05-31 19:41:27.476343 | orchestrator | ok: [testbed-node-1] =>
2025-05-31 19:41:27.477230 | orchestrator |   docker_cli_version: 5:27.5.1
2025-05-31 19:41:27.478119 | orchestrator | ok: [testbed-node-2] =>
2025-05-31 19:41:27.479302 | orchestrator |   docker_cli_version: 5:27.5.1
2025-05-31 19:41:27.480356 | orchestrator |
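The two print tasks simply echo the resolved version variables so the pinned Docker version (5:27.5.1 on every host here) is visible in the log; in a playbook this is a one-line debug task per variable:

    - name: Print used docker version
      ansible.builtin.debug:
        var: docker_version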
2025-05-31 19:41:27.481241 | orchestrator | TASK [osism.services.docker : Include block storage tasks] *********************
2025-05-31 19:41:27.483014 | orchestrator | Saturday 31 May 2025 19:41:27 +0000 (0:00:00.391) 0:05:24.713 **********
2025-05-31 19:41:27.542616 | orchestrator | skipping: [testbed-manager]
2025-05-31 19:41:27.574327 | orchestrator | skipping: [testbed-node-3]
2025-05-31 19:41:27.606719 | orchestrator | skipping: [testbed-node-4]
2025-05-31 19:41:27.637296 | orchestrator | skipping: [testbed-node-5]
2025-05-31 19:41:27.667693 | orchestrator | skipping: [testbed-node-0]
2025-05-31 19:41:27.726228 | orchestrator | skipping: [testbed-node-1]
2025-05-31 19:41:27.726389 | orchestrator | skipping: [testbed-node-2]
2025-05-31 19:41:27.727488 | orchestrator |
2025-05-31 19:41:27.727729 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] **********************
2025-05-31 19:41:27.728168 | orchestrator | Saturday 31 May 2025 19:41:27 +0000 (0:00:00.251) 0:05:24.965 **********
2025-05-31 19:41:27.792518 | orchestrator | skipping: [testbed-manager]
2025-05-31 19:41:27.838176 | orchestrator | skipping: [testbed-node-3]
2025-05-31 19:41:27.873377 | orchestrator | skipping: [testbed-node-4]
2025-05-31 19:41:27.933311 | orchestrator | skipping: [testbed-node-5]
2025-05-31 19:41:27.985017 | orchestrator | skipping: [testbed-node-0]
2025-05-31 19:41:27.985903 | orchestrator | skipping: [testbed-node-1]
2025-05-31 19:41:27.985997 | orchestrator | skipping: [testbed-node-2]
2025-05-31 19:41:27.986893 | orchestrator |
2025-05-31 19:41:27.987561 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ********************
2025-05-31 19:41:27.987960 | orchestrator | Saturday 31 May 2025 19:41:27 +0000 (0:00:00.258) 0:05:25.223 **********
2025-05-31 19:41:28.391896 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-31 19:41:28.392226 | orchestrator |
2025-05-31 19:41:28.393827 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] ****
2025-05-31 19:41:28.394573 | orchestrator | Saturday 31 May 2025 19:41:28 +0000 (0:00:00.406) 0:05:25.630 **********
2025-05-31 19:41:29.411911 | orchestrator | ok: [testbed-manager]
2025-05-31 19:41:29.412041 | orchestrator | ok: [testbed-node-3]
2025-05-31 19:41:29.413080 | orchestrator | ok: [testbed-node-1]
2025-05-31 19:41:29.414977 | orchestrator | ok: [testbed-node-5]
2025-05-31 19:41:29.416714 | orchestrator | ok: [testbed-node-4]
2025-05-31 19:41:29.418107 | orchestrator | ok: [testbed-node-2]
2025-05-31 19:41:29.419088 | orchestrator | ok: [testbed-node-0]
2025-05-31 19:41:29.420133 | orchestrator |
2025-05-31 19:41:29.421227 | orchestrator | TASK [osism.services.docker : Gather package facts] ****************************
2025-05-31 19:41:29.422076 | orchestrator | Saturday 31 May 2025 19:41:29 +0000 (0:00:01.017) 0:05:26.648 **********
2025-05-31 19:41:32.171226 | orchestrator | ok: [testbed-node-3]
2025-05-31 19:41:32.172239 | orchestrator | ok: [testbed-manager]
2025-05-31 19:41:32.172886 | orchestrator | ok: [testbed-node-4]
2025-05-31 19:41:32.173739 | orchestrator | ok: [testbed-node-5]
2025-05-31 19:41:32.176902 | orchestrator | ok: [testbed-node-1]
2025-05-31 19:41:32.176936 | orchestrator | ok: [testbed-node-2]
2025-05-31 19:41:32.176947 | orchestrator | ok: [testbed-node-0]
2025-05-31 19:41:32.176959 | orchestrator |
2025-05-31 19:41:32.176972 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] ***
2025-05-31 19:41:32.177440 | orchestrator | Saturday 31 May 2025 19:41:32 +0000 (0:00:02.761) 0:05:29.410 **********
2025-05-31 19:41:32.254997 | orchestrator | skipping: [testbed-manager] => (item=containerd)
2025-05-31 19:41:32.255453 | orchestrator | skipping: [testbed-manager] => (item=docker.io)
2025-05-31 19:41:32.258636 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)
2025-05-31 19:41:32.321203 | orchestrator | skipping: [testbed-manager]
2025-05-31 19:41:32.321926 | orchestrator | skipping: [testbed-node-3] => (item=containerd)
2025-05-31 19:41:32.325718 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)
2025-05-31 19:41:32.400466 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)
2025-05-31 19:41:32.402011 | orchestrator | skipping: [testbed-node-4] => (item=containerd)
2025-05-31 19:41:32.402841 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)
2025-05-31 19:41:32.403885 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)
2025-05-31 19:41:32.473772 | orchestrator | skipping: [testbed-node-3]
2025-05-31 19:41:32.474062 | orchestrator | skipping: [testbed-node-5] => (item=containerd)
2025-05-31 19:41:32.474882 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)
2025-05-31 19:41:32.476033 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)
2025-05-31 19:41:32.699541 | orchestrator | skipping: [testbed-node-4]
2025-05-31 19:41:32.700817 | orchestrator | skipping: [testbed-node-0] => (item=containerd)
2025-05-31 19:41:32.702344 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)
2025-05-31 19:41:32.702983 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)
2025-05-31 19:41:32.768210 | orchestrator | skipping: [testbed-node-5]
2025-05-31 19:41:32.768876 | orchestrator | skipping: [testbed-node-1] => (item=containerd)
2025-05-31 19:41:32.769154 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)
2025-05-31 19:41:32.769672 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)
2025-05-31 19:41:32.897839 | orchestrator | skipping: [testbed-node-0]
2025-05-31 19:41:32.898524 | orchestrator | skipping: [testbed-node-1]
2025-05-31 19:41:32.899292 | orchestrator | skipping: [testbed-node-2] => (item=containerd)
2025-05-31 19:41:32.900034 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)
2025-05-31 19:41:32.900587 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)
2025-05-31 19:41:32.901216 | orchestrator | skipping: [testbed-node-2]
2025-05-31 19:41:32.904245 | orchestrator |
2025-05-31 19:41:32.904841 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] *************
2025-05-31 19:41:32.905680 | orchestrator | Saturday 31 May 2025 19:41:32 +0000 (0:00:00.725) 0:05:30.135 **********
2025-05-31 19:41:43.841970 | orchestrator | ok: [testbed-manager]
2025-05-31 19:41:43.842167 | orchestrator | changed: [testbed-node-4]
2025-05-31 19:41:43.842261 | orchestrator | changed: [testbed-node-5]
2025-05-31 19:41:43.843103 | orchestrator | changed: [testbed-node-2]
2025-05-31 19:41:43.843393 | orchestrator | changed: [testbed-node-1]
2025-05-31 19:41:43.843786 | orchestrator | changed: [testbed-node-0]
2025-05-31 19:41:43.844561 | orchestrator | changed: [testbed-node-3]
2025-05-31 19:41:43.844949 | orchestrator |
2025-05-31 19:41:43.846426 | orchestrator | TASK [osism.services.docker : Add repository gpg key] **************************
2025-05-31 19:41:43.846654 | orchestrator | Saturday 31 May 2025 19:41:43 +0000 (0:00:10.943) 0:05:41.079 **********
2025-05-31 19:41:44.954382 | orchestrator | changed: [testbed-node-4]
2025-05-31 19:41:44.954669 | orchestrator | changed: [testbed-node-3]
2025-05-31 19:41:44.956095 | orchestrator | ok: [testbed-manager]
2025-05-31 19:41:44.957782 | orchestrator | changed: [testbed-node-5]
2025-05-31 19:41:44.959417 | orchestrator | changed: [testbed-node-0]
2025-05-31 19:41:44.960692 | orchestrator | changed: [testbed-node-1]
2025-05-31 19:41:44.961218 | orchestrator | changed: [testbed-node-2]
2025-05-31 19:41:44.962373 | orchestrator |
2025-05-31 19:41:44.963216 | orchestrator | TASK [osism.services.docker : Add repository] **********************************
2025-05-31 19:41:44.963807 | orchestrator | Saturday 31 May 2025 19:41:44 +0000 (0:00:01.111) 0:05:42.190 **********
2025-05-31 19:41:52.715293 | orchestrator | ok: [testbed-manager]
2025-05-31 19:41:52.715569 | orchestrator | changed: [testbed-node-3]
2025-05-31 19:41:52.716127 | orchestrator | changed: [testbed-node-4]
2025-05-31 19:41:52.717805 | orchestrator | changed: [testbed-node-5]
2025-05-31 19:41:52.719687 | orchestrator | changed: [testbed-node-1]
2025-05-31 19:41:52.720686 | orchestrator | changed: [testbed-node-2]
2025-05-31 19:41:52.721865 | orchestrator | changed: [testbed-node-0]
2025-05-31 19:41:52.722792 | orchestrator |
2025-05-31 19:41:52.723815 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2025-05-31 19:41:52.724627 | orchestrator | Saturday 31 May 2025 19:41:52 +0000 (0:00:07.759) 0:05:49.950 **********
2025-05-31 19:41:55.887311 | orchestrator | changed: [testbed-manager]
2025-05-31 19:41:55.887447 | orchestrator | changed: [testbed-node-3]
2025-05-31 19:41:55.888213 | orchestrator | changed: [testbed-node-5]
2025-05-31 19:41:55.889311 | orchestrator | changed: [testbed-node-4]
2025-05-31 19:41:55.890552 | orchestrator | changed: [testbed-node-1]
2025-05-31 19:41:55.891693 | orchestrator | changed: [testbed-node-2]
2025-05-31 19:41:55.893037 | orchestrator | changed: [testbed-node-0]
2025-05-31 19:41:55.893798 | orchestrator |
2025-05-31 19:41:55.895405 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2025-05-31 19:41:55.896274 | orchestrator | Saturday 31 May 2025 19:41:55 +0000 (0:00:03.173) 0:05:53.124 **********
2025-05-31 19:41:57.415050 | orchestrator | ok: [testbed-manager]
2025-05-31 19:41:57.415159 | orchestrator | changed: [testbed-node-3]
2025-05-31 19:41:57.416668 | orchestrator | changed: [testbed-node-4]
2025-05-31 19:41:57.417040 | orchestrator | changed: [testbed-node-5]
2025-05-31 19:41:57.419711 | orchestrator | changed: [testbed-node-0]
2025-05-31 19:41:57.421082 | orchestrator | changed: [testbed-node-1]
2025-05-31 19:41:57.422229 | orchestrator | changed: [testbed-node-2]
2025-05-31 19:41:57.423582 | orchestrator |
2025-05-31 19:41:57.424473 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2025-05-31 19:41:57.425540 | orchestrator | Saturday 31 May 2025 19:41:57 +0000 (0:00:01.526) 0:05:54.650 **********
2025-05-31 19:41:58.723033 | orchestrator | ok: [testbed-manager]
2025-05-31 19:41:58.723151 | orchestrator | changed: [testbed-node-3]
2025-05-31 19:41:58.725007 | orchestrator | changed: [testbed-node-4]
2025-05-31 19:41:58.726653 | orchestrator | changed: [testbed-node-5]
2025-05-31 19:41:58.726762 | orchestrator | changed: [testbed-node-0]
2025-05-31 19:41:58.726861 | orchestrator | changed: [testbed-node-1]
2025-05-31 19:41:58.727420 | orchestrator | changed: [testbed-node-2]
2025-05-31 19:41:58.728596 | orchestrator |
2025-05-31 19:41:58.728941 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2025-05-31 19:41:58.729516 | orchestrator | Saturday 31 May 2025 19:41:58 +0000 (0:00:01.308) 0:05:55.958 **********
2025-05-31 19:41:58.926435 | orchestrator | skipping: [testbed-node-3]
2025-05-31 19:41:58.997154 | orchestrator | skipping: [testbed-node-4]
2025-05-31 19:41:59.060600 | orchestrator | skipping: [testbed-node-5]
2025-05-31 19:41:59.124586 | orchestrator | skipping: [testbed-node-0]
2025-05-31 19:41:59.322474 | orchestrator | skipping: [testbed-node-1]
2025-05-31 19:41:59.322745 | orchestrator | skipping: [testbed-node-2]
2025-05-31 19:41:59.323512 | orchestrator | changed: [testbed-manager]
2025-05-31 19:41:59.324109 | orchestrator |
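Wiring up the Docker apt repository and pinning the package version follows the usual Debian pattern: import the signing key, add the source, refresh the cache, then drop a preferences file so apt cannot drift away from the wanted version. A sketch, with the repository URL and file paths assumed rather than copied from the role; with docker_version resolved to 5:27.5.1 as printed earlier, apt installs and keeps exactly that build:

    - name: Add repository gpg key
      ansible.builtin.get_url:
        url: https://download.docker.com/linux/ubuntu/gpg    # URL assumed
        dest: /etc/apt/trusted.gpg.d/docker.asc
        mode: "0644"

    - name: Add repository
      ansible.builtin.apt_repository:
        repo: "deb https://download.docker.com/linux/ubuntu {{ ansible_distribution_release }} stable"
        state: present

    - name: Pin docker package version
      ansible.builtin.copy:
        dest: /etc/apt/preferences.d/docker-ce    # path assumed
        content: |
          Package: docker-ce
          Pin: version {{ docker_version }}
          Pin-Priority: 1001
        mode: "0644"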
2025-05-31 19:41:59.324657 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2025-05-31 19:41:59.325307 | orchestrator | Saturday 31 May 2025 19:41:59 +0000 (0:00:00.601) 0:05:56.560 **********
2025-05-31 19:42:09.562119 | orchestrator | ok: [testbed-manager]
2025-05-31 19:42:09.562280 | orchestrator | changed: [testbed-node-4]
2025-05-31 19:42:09.562426 | orchestrator | changed: [testbed-node-5]
2025-05-31 19:42:09.564103 | orchestrator | changed: [testbed-node-2]
2025-05-31 19:42:09.564704 | orchestrator | changed: [testbed-node-3]
2025-05-31 19:42:09.565320 | orchestrator | changed: [testbed-node-1]
2025-05-31 19:42:09.566149 | orchestrator | changed: [testbed-node-0]
2025-05-31 19:42:09.566364 | orchestrator |
2025-05-31 19:42:09.566946 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2025-05-31 19:42:09.567731 | orchestrator | Saturday 31 May 2025 19:42:09 +0000 (0:00:10.234) 0:06:06.794 **********
2025-05-31 19:42:10.059602 | orchestrator | changed: [testbed-manager]
2025-05-31 19:42:10.641749 | orchestrator | changed: [testbed-node-3]
2025-05-31 19:42:10.642153 | orchestrator | changed: [testbed-node-4]
2025-05-31 19:42:10.644222 | orchestrator | changed: [testbed-node-5]
2025-05-31 19:42:10.644801 | orchestrator | changed: [testbed-node-0]
2025-05-31 19:42:10.648320 | orchestrator | changed: [testbed-node-1]
2025-05-31 19:42:10.648892 | orchestrator | changed: [testbed-node-2]
2025-05-31 19:42:10.649242 | orchestrator |
2025-05-31 19:42:10.649885 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2025-05-31 19:42:10.651080 | orchestrator | Saturday 31 May 2025 19:42:10 +0000 (0:00:01.081) 0:06:07.875 **********
2025-05-31 19:42:19.971300 | orchestrator | ok: [testbed-manager]
2025-05-31 19:42:19.971557 | orchestrator | changed: [testbed-node-3]
2025-05-31 19:42:19.972298 | orchestrator | changed: [testbed-node-4]
2025-05-31 19:42:19.973309 | orchestrator | changed: [testbed-node-5]
2025-05-31 19:42:19.975298 | orchestrator | changed: [testbed-node-2]
2025-05-31 19:42:19.975945 | orchestrator | changed: [testbed-node-1]
2025-05-31 19:42:19.977074 | orchestrator | changed: [testbed-node-0]
2025-05-31 19:42:19.977736 | orchestrator |
2025-05-31 19:42:19.978553 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2025-05-31 19:42:19.980064 | orchestrator | Saturday 31 May 2025 19:42:19 +0000 (0:00:09.333) 0:06:17.208 **********
2025-05-31 19:42:31.579421 | orchestrator | ok: [testbed-manager]
2025-05-31 19:42:31.579615 | orchestrator | changed: [testbed-node-4]
2025-05-31 19:42:31.579635 | orchestrator | changed: [testbed-node-5]
2025-05-31 19:42:31.581619 | orchestrator | changed: [testbed-node-3]
2025-05-31 19:42:31.582540 | orchestrator | changed: [testbed-node-2]
2025-05-31 19:42:31.584861 | orchestrator | changed: [testbed-node-1]
2025-05-31 19:42:31.585347 | orchestrator | changed: [testbed-node-0]
2025-05-31 19:42:31.586860 | orchestrator |
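The Unlock/Lock pair around the containerd installation is a hold dance: release the hold, install or upgrade the package, then hold it again so routine apt runs cannot move it. dpkg_selections expresses this directly (the exact package name is an assumption):

    - name: Unlock containerd package
      ansible.builtin.dpkg_selections:
        name: containerd.io        # package name assumed
        selection: install

    - name: Install containerd package
      ansible.builtin.apt:
        name: containerd.io
        state: present

    - name: Lock containerd package
      ansible.builtin.dpkg_selections:
        name: containerd.io
        selection: hold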
2025-05-31 19:42:31.588033 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2025-05-31 19:42:31.588673 | orchestrator | Saturday 31 May 2025 19:42:31 +0000 (0:00:11.603) 0:06:28.812 **********
2025-05-31 19:42:31.979347 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2025-05-31 19:42:32.886196 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2025-05-31 19:42:32.888823 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2025-05-31 19:42:32.888883 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2025-05-31 19:42:32.888897 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2025-05-31 19:42:32.888908 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2025-05-31 19:42:32.889831 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2025-05-31 19:42:32.890742 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2025-05-31 19:42:32.891682 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2025-05-31 19:42:32.892441 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2025-05-31 19:42:32.892999 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2025-05-31 19:42:32.893701 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2025-05-31 19:42:32.894156 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2025-05-31 19:42:32.894798 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2025-05-31 19:42:32.895440 | orchestrator |
2025-05-31 19:42:32.896074 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2025-05-31 19:42:32.896448 | orchestrator | Saturday 31 May 2025 19:42:32 +0000 (0:00:01.306) 0:06:30.118 **********
2025-05-31 19:42:33.030524 | orchestrator | skipping: [testbed-manager]
2025-05-31 19:42:33.092613 | orchestrator | skipping: [testbed-node-3]
2025-05-31 19:42:33.159532 | orchestrator | skipping: [testbed-node-4]
2025-05-31 19:42:33.218930 | orchestrator | skipping: [testbed-node-5]
2025-05-31 19:42:33.280807 | orchestrator | skipping: [testbed-node-0]
2025-05-31 19:42:33.396147 | orchestrator | skipping: [testbed-node-1]
2025-05-31 19:42:33.396402 | orchestrator | skipping: [testbed-node-2]
2025-05-31 19:42:33.396427 | orchestrator |
2025-05-31 19:42:33.396986 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2025-05-31 19:42:33.397564 | orchestrator | Saturday 31 May 2025 19:42:33 +0000 (0:00:00.515) 0:06:30.634 **********
2025-05-31 19:42:37.397955 | orchestrator | ok: [testbed-manager]
2025-05-31 19:42:37.398148 | orchestrator | changed: [testbed-node-5]
2025-05-31 19:42:37.398278 | orchestrator | changed: [testbed-node-3]
2025-05-31 19:42:37.399463 | orchestrator | changed: [testbed-node-1]
2025-05-31 19:42:37.400982 | orchestrator | changed: [testbed-node-4]
2025-05-31 19:42:37.401740 | orchestrator | changed: [testbed-node-2]
2025-05-31 19:42:37.402601 | orchestrator | changed: [testbed-node-0]
2025-05-31 19:42:37.403461 | orchestrator |
2025-05-31 19:42:37.404077 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2025-05-31 19:42:37.404400 | orchestrator | Saturday 31 May 2025 19:42:37 +0000 (0:00:03.996) 0:06:34.630 **********
2025-05-31 19:42:37.527936 | orchestrator | skipping: [testbed-manager]
2025-05-31 19:42:37.603640 | orchestrator | skipping: [testbed-node-3]
2025-05-31 19:42:37.666082 | orchestrator | skipping: [testbed-node-4]
2025-05-31 19:42:37.760876 | orchestrator | skipping: [testbed-node-5]
2025-05-31 19:42:37.830260 | orchestrator | skipping: [testbed-node-0]
2025-05-31 19:42:37.933188 | orchestrator | skipping: [testbed-node-1]
2025-05-31 19:42:37.933688 | orchestrator | skipping: [testbed-node-2]
2025-05-31 19:42:37.934754 | orchestrator |
2025-05-31 19:42:37.935514 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2025-05-31 19:42:37.936374 | orchestrator | Saturday 31 May 2025 19:42:37 +0000 (0:00:00.539) 0:06:35.170 **********
2025-05-31 19:42:38.019309 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2025-05-31 19:42:38.019576 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2025-05-31 19:42:38.096356 | orchestrator | skipping: [testbed-manager]
2025-05-31 19:42:38.096777 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2025-05-31 19:42:38.098134 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2025-05-31 19:42:38.172203 | orchestrator | skipping: [testbed-node-3]
2025-05-31 19:42:38.172799 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2025-05-31 19:42:38.173504 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2025-05-31 19:42:38.252256 | orchestrator | skipping: [testbed-node-4]
2025-05-31 19:42:38.252763 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2025-05-31 19:42:38.253373 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2025-05-31 19:42:38.320653 | orchestrator | skipping: [testbed-node-5]
2025-05-31 19:42:38.321660 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2025-05-31 19:42:38.322609 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2025-05-31 19:42:38.389106 | orchestrator | skipping: [testbed-node-0]
2025-05-31 19:42:38.390147 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2025-05-31 19:42:38.391263 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2025-05-31 19:42:38.510594 | orchestrator | skipping: [testbed-node-1]
2025-05-31 19:42:38.511843 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2025-05-31 19:42:38.515333 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2025-05-31 19:42:38.517344 | orchestrator | skipping: [testbed-node-2]
2025-05-31 19:42:38.519001 | orchestrator |
2025-05-31 19:42:38.519039 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2025-05-31 19:42:38.519674 | orchestrator | Saturday 31 May 2025 19:42:38 +0000 (0:00:00.576) 0:06:35.747 **********
2025-05-31 19:42:38.668996 | orchestrator | skipping: [testbed-manager]
2025-05-31 19:42:38.738196 | orchestrator | skipping: [testbed-node-3]
2025-05-31 19:42:38.803275 | orchestrator | skipping: [testbed-node-4]
2025-05-31 19:42:38.866206 | orchestrator | skipping: [testbed-node-5]
2025-05-31 19:42:38.937030 | orchestrator | skipping: [testbed-node-0]
2025-05-31 19:42:39.033717 | orchestrator | skipping: [testbed-node-1]
2025-05-31 19:42:39.035686 | orchestrator | skipping: [testbed-node-2]
2025-05-31 19:42:39.038782 | orchestrator |
2025-05-31 19:42:39.038841 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2025-05-31 19:42:39.038855 | orchestrator | Saturday 31 May 2025 19:42:39 +0000 (0:00:00.493) 0:06:36.269 **********
2025-05-31 19:42:39.173864 | orchestrator | skipping: [testbed-manager]
2025-05-31 19:42:39.237537 | orchestrator | skipping: [testbed-node-3]
2025-05-31 19:42:39.301015 | orchestrator | skipping: [testbed-node-4]
2025-05-31 19:42:39.375358 | orchestrator | skipping: [testbed-node-5]
2025-05-31 19:42:39.439285 | orchestrator | skipping: [testbed-node-0]
2025-05-31 19:42:39.526315 | orchestrator | skipping: [testbed-node-1]
2025-05-31 19:42:39.530082 | orchestrator | skipping: [testbed-node-2]
2025-05-31 19:42:39.530154 | orchestrator |
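All of the "install python bindings from pip" tasks are skipped because this deployment takes the Python Docker bindings from distribution packages instead. The Block/Unblock variants loop over the two package names seen in the log; one plausible shape for the blocking side, using an apt preferences pin and an assumed switch variable:

    - name: Block installation of python docker packages
      ansible.builtin.copy:
        dest: "/etc/apt/preferences.d/{{ item }}"   # path assumed
        content: |
          Package: {{ item }}
          Pin: release *
          Pin-Priority: -1
        mode: "0644"
      loop:
        - python3-docker
        - python-docker
      when: docker_install_bindings_from_pip | default(false)   # variable name assumed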
2025-05-31 19:42:39.531262 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2025-05-31 19:42:39.531896 | orchestrator | Saturday 31 May 2025 19:42:39 +0000 (0:00:00.776) 0:06:36.763 **********
2025-05-31 19:42:39.681928 | orchestrator | skipping: [testbed-manager]
2025-05-31 19:42:39.768002 | orchestrator | skipping: [testbed-node-3]
2025-05-31 19:42:39.843087 | orchestrator | skipping: [testbed-node-4]
2025-05-31 19:42:40.114255 | orchestrator | skipping: [testbed-node-5]
2025-05-31 19:42:40.179384 | orchestrator | skipping: [testbed-node-0]
2025-05-31 19:42:40.303687 | orchestrator | skipping: [testbed-node-1]
2025-05-31 19:42:40.303767 | orchestrator | skipping: [testbed-node-2]
2025-05-31 19:42:40.304711 | orchestrator |
2025-05-31 19:42:40.307663 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2025-05-31 19:42:40.307697 | orchestrator | Saturday 31 May 2025 19:42:40 +0000 (0:00:00.776) 0:06:37.540 **********
2025-05-31 19:42:42.184971 | orchestrator | ok: [testbed-manager]
2025-05-31 19:42:42.185782 | orchestrator | ok: [testbed-node-3]
2025-05-31 19:42:42.186859 | orchestrator | ok: [testbed-node-4]
2025-05-31 19:42:42.189432 | orchestrator | ok: [testbed-node-1]
2025-05-31 19:42:42.189531 | orchestrator | ok: [testbed-node-5]
2025-05-31 19:42:42.190156 | orchestrator | ok: [testbed-node-0]
2025-05-31 19:42:42.190946 | orchestrator | ok: [testbed-node-2]
2025-05-31 19:42:42.191542 | orchestrator |
2025-05-31 19:42:42.192160 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2025-05-31 19:42:42.192686 | orchestrator | Saturday 31 May 2025 19:42:42 +0000 (0:00:01.881) 0:06:39.421 **********
2025-05-31 19:42:43.079573 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-31 19:42:43.079974 | orchestrator |
2025-05-31 19:42:43.081181 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2025-05-31 19:42:43.082231 | orchestrator | Saturday 31 May 2025 19:42:43 +0000 (0:00:00.894) 0:06:40.315 **********
2025-05-31 19:42:43.518792 | orchestrator | ok: [testbed-manager]
2025-05-31 19:42:43.962909 | orchestrator | changed: [testbed-node-3]
2025-05-31 19:42:43.963090 | orchestrator | changed: [testbed-node-4]
2025-05-31 19:42:43.964091 | orchestrator | changed: [testbed-node-5]
2025-05-31 19:42:43.964819 | orchestrator | changed: [testbed-node-0]
2025-05-31 19:42:43.966199 | orchestrator | changed: [testbed-node-1]
2025-05-31 19:42:43.966811 | orchestrator | changed: [testbed-node-2]
2025-05-31 19:42:43.967906 | orchestrator |
2025-05-31 19:42:43.968278 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2025-05-31 19:42:43.969048 | orchestrator | Saturday 31 May 2025 19:42:43 +0000 (0:00:00.883) 0:06:41.199 **********
2025-05-31 19:42:44.386098 | orchestrator | ok: [testbed-manager]
2025-05-31 19:42:44.524167 | orchestrator | changed: [testbed-node-3]
2025-05-31 19:42:45.062073 | orchestrator | changed: [testbed-node-4]
2025-05-31 19:42:45.062147 | orchestrator | changed: [testbed-node-5]
2025-05-31 19:42:45.063072 | orchestrator | changed: [testbed-node-0]
2025-05-31 19:42:45.063414 | orchestrator | changed: [testbed-node-1]
2025-05-31 19:42:45.063997 | orchestrator | changed: [testbed-node-2]
2025-05-31 19:42:45.064402 | orchestrator |
2025-05-31 19:42:45.065045 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2025-05-31 19:42:45.065739 | orchestrator | Saturday 31 May 2025 19:42:45 +0000 (0:00:01.099) 0:06:42.298 **********
2025-05-31 19:42:46.490653 | orchestrator | ok: [testbed-manager]
2025-05-31 19:42:46.491431 | orchestrator | changed: [testbed-node-3]
2025-05-31 19:42:46.492496 | orchestrator | changed: [testbed-node-4]
2025-05-31 19:42:46.494064 | orchestrator | changed: [testbed-node-5]
2025-05-31 19:42:46.494993 | orchestrator | changed: [testbed-node-0]
2025-05-31 19:42:46.496439 | orchestrator | changed: [testbed-node-1]
2025-05-31 19:42:46.496698 | orchestrator | changed: [testbed-node-2]
2025-05-31 19:42:46.497896 | orchestrator |
2025-05-31 19:42:46.498562 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2025-05-31 19:42:46.499563 | orchestrator | Saturday 31 May 2025 19:42:46 +0000 (0:00:01.429) 0:06:43.728 **********
2025-05-31 19:42:46.630745 | orchestrator | skipping: [testbed-manager]
2025-05-31 19:42:48.007210 | orchestrator | ok: [testbed-node-3]
2025-05-31 19:42:48.008275 | orchestrator | ok: [testbed-node-4]
2025-05-31 19:42:48.009028 | orchestrator | ok: [testbed-node-5]
2025-05-31 19:42:48.010181 | orchestrator | ok: [testbed-node-0]
2025-05-31 19:42:48.010548 | orchestrator | ok: [testbed-node-1]
2025-05-31 19:42:48.011121 | orchestrator | ok: [testbed-node-2]
2025-05-31 19:42:48.011825 | orchestrator |
2025-05-31 19:42:48.012865 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2025-05-31 19:42:48.013278 | orchestrator | Saturday 31 May 2025 19:42:47 +0000 (0:00:01.511) 0:06:45.239 **********
2025-05-31 19:42:49.375577 | orchestrator | ok: [testbed-manager]
2025-05-31 19:42:49.375856 | orchestrator | changed: [testbed-node-3]
2025-05-31 19:42:49.376671 | orchestrator | changed: [testbed-node-4]
2025-05-31 19:42:49.377445 | orchestrator | changed: [testbed-node-5]
2025-05-31 19:42:49.377720 | orchestrator | changed: [testbed-node-1]
2025-05-31 19:42:49.379402 | orchestrator | changed: [testbed-node-0]
2025-05-31 19:42:49.379606 | orchestrator | changed: [testbed-node-2]
2025-05-31 19:42:49.380323 | orchestrator |
2025-05-31 19:42:49.381325 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
2025-05-31 19:42:49.382162 | orchestrator | Saturday 31 May 2025 19:42:49 +0000 (0:00:01.371) 0:06:46.610 **********
2025-05-31 19:42:50.775734 | orchestrator | changed: [testbed-manager]
2025-05-31 19:42:50.775840 | orchestrator | changed: [testbed-node-3]
2025-05-31 19:42:50.776120 | orchestrator | changed: [testbed-node-4]
2025-05-31 19:42:50.776932 | orchestrator | changed: [testbed-node-5]
2025-05-31 19:42:50.778159 | orchestrator | changed: [testbed-node-1]
2025-05-31 19:42:50.779796 | orchestrator | changed: [testbed-node-0]
2025-05-31 19:42:50.780380 | orchestrator | changed: [testbed-node-2]
2025-05-31 19:42:50.781004 | orchestrator |
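daemon.json is the Docker engine's main configuration file; deploying it from a template and notifying a restart handler is the standard idiom, and it explains the 'Restart docker service' handler that runs further down. A sketch (template name assumed; note that in this run the restart handler is later skipped on testbed-manager even though daemon.json changed there, so the role presumably guards the restart with an extra condition):

    - name: Copy daemon.json configuration file
      ansible.builtin.template:
        src: daemon.json.j2          # template name assumed
        dest: /etc/docker/daemon.json
        mode: "0644"
      notify: Restart docker service

    # Corresponding handler, executed once when handlers are flushed:
    - name: Restart docker service
      ansible.builtin.service:
        name: docker
        state: restarted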
2025-05-31 19:42:50.781619 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2025-05-31 19:42:50.782710 | orchestrator | Saturday 31 May 2025 19:42:50 +0000 (0:00:01.401) 0:06:48.012 **********
2025-05-31 19:42:51.813543 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-31 19:42:51.815155 | orchestrator |
2025-05-31 19:42:51.816413 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
2025-05-31 19:42:51.818445 | orchestrator | Saturday 31 May 2025 19:42:51 +0000 (0:00:01.040) 0:06:49.052 **********
2025-05-31 19:42:53.171043 | orchestrator | ok: [testbed-manager]
2025-05-31 19:42:53.171145 | orchestrator | ok: [testbed-node-3]
2025-05-31 19:42:53.172018 | orchestrator | ok: [testbed-node-4]
2025-05-31 19:42:53.172267 | orchestrator | ok: [testbed-node-5]
2025-05-31 19:42:53.173269 | orchestrator | ok: [testbed-node-1]
2025-05-31 19:42:53.174137 | orchestrator | ok: [testbed-node-0]
2025-05-31 19:42:53.174756 | orchestrator | ok: [testbed-node-2]
2025-05-31 19:42:53.175772 | orchestrator |
2025-05-31 19:42:53.175905 | orchestrator | TASK [osism.services.docker : Manage service] **********************************
2025-05-31 19:42:53.176766 | orchestrator | Saturday 31 May 2025 19:42:53 +0000 (0:00:01.354) 0:06:50.407 **********
2025-05-31 19:42:54.447018 | orchestrator | ok: [testbed-manager]
2025-05-31 19:42:54.447224 | orchestrator | ok: [testbed-node-3]
2025-05-31 19:42:54.448045 | orchestrator | ok: [testbed-node-4]
2025-05-31 19:42:54.449766 | orchestrator | ok: [testbed-node-5]
2025-05-31 19:42:54.450740 | orchestrator | ok: [testbed-node-0]
2025-05-31 19:42:54.450816 | orchestrator | ok: [testbed-node-1]
2025-05-31 19:42:54.451867 | orchestrator | ok: [testbed-node-2]
2025-05-31 19:42:54.452024 | orchestrator |
2025-05-31 19:42:54.453053 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ********************
2025-05-31 19:42:54.453153 | orchestrator | Saturday 31 May 2025 19:42:54 +0000 (0:00:01.274) 0:06:51.682 **********
2025-05-31 19:42:55.772290 | orchestrator | ok: [testbed-manager]
2025-05-31 19:42:55.772410 | orchestrator | ok: [testbed-node-3]
2025-05-31 19:42:55.772608 | orchestrator | ok: [testbed-node-4]
2025-05-31 19:42:55.773620 | orchestrator | ok: [testbed-node-5]
2025-05-31 19:42:55.774217 | orchestrator | ok: [testbed-node-0]
2025-05-31 19:42:55.775583 | orchestrator | ok: [testbed-node-1]
2025-05-31 19:42:55.776132 | orchestrator | ok: [testbed-node-2]
2025-05-31 19:42:55.776755 | orchestrator |
2025-05-31 19:42:55.777345 | orchestrator | TASK [osism.services.docker : Manage containerd service] ***********************
2025-05-31 19:42:55.777878 | orchestrator | Saturday 31 May 2025 19:42:55 +0000 (0:00:01.325) 0:06:53.007 **********
2025-05-31 19:42:56.929897 | orchestrator | ok: [testbed-manager]
2025-05-31 19:42:56.930163 | orchestrator | ok: [testbed-node-3]
2025-05-31 19:42:56.930945 | orchestrator | ok: [testbed-node-4]
2025-05-31 19:42:56.931832 | orchestrator | ok: [testbed-node-5]
2025-05-31 19:42:56.933318 | orchestrator | ok: [testbed-node-0]
2025-05-31 19:42:56.934308 | orchestrator | ok: [testbed-node-1]
2025-05-31 19:42:56.934954 | orchestrator | ok: [testbed-node-2]
2025-05-31 19:42:56.935872 | orchestrator |
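service.yml reloads systemd and then makes sure the docker, docker.socket, and containerd units are enabled and running; 'ok' on every host means they already were. A minimal sketch of the pattern:

    - name: Reload systemd daemon
      ansible.builtin.systemd:
        daemon_reload: true

    - name: Manage docker socket service
      ansible.builtin.systemd:
        name: docker.socket
        state: started
        enabled: true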
/usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 19:42:58.131582 | orchestrator | 2025-05-31 19:42:58.131633 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-05-31 19:42:58.131866 | orchestrator | Saturday 31 May 2025 19:42:57 +0000 (0:00:00.877) 0:06:55.042 ********** 2025-05-31 19:42:58.132519 | orchestrator | 2025-05-31 19:42:58.133057 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-05-31 19:42:58.133621 | orchestrator | Saturday 31 May 2025 19:42:57 +0000 (0:00:00.039) 0:06:55.081 ********** 2025-05-31 19:42:58.134056 | orchestrator | 2025-05-31 19:42:58.134714 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-05-31 19:42:58.135202 | orchestrator | Saturday 31 May 2025 19:42:57 +0000 (0:00:00.044) 0:06:55.126 ********** 2025-05-31 19:42:58.135718 | orchestrator | 2025-05-31 19:42:58.136145 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-05-31 19:42:58.136638 | orchestrator | Saturday 31 May 2025 19:42:57 +0000 (0:00:00.040) 0:06:55.166 ********** 2025-05-31 19:42:58.139766 | orchestrator | 2025-05-31 19:42:58.139817 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-05-31 19:42:58.139828 | orchestrator | Saturday 31 May 2025 19:42:57 +0000 (0:00:00.036) 0:06:55.203 ********** 2025-05-31 19:42:58.139835 | orchestrator | 2025-05-31 19:42:58.139843 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-05-31 19:42:58.139850 | orchestrator | Saturday 31 May 2025 19:42:58 +0000 (0:00:00.065) 0:06:55.269 ********** 2025-05-31 19:42:58.139856 | orchestrator | 2025-05-31 19:42:58.139863 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2025-05-31 19:42:58.140083 | orchestrator | Saturday 31 May 2025 19:42:58 +0000 (0:00:00.055) 0:06:55.324 ********** 2025-05-31 19:42:58.140366 | orchestrator | 2025-05-31 19:42:58.140790 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-05-31 19:42:58.141129 | orchestrator | Saturday 31 May 2025 19:42:58 +0000 (0:00:00.041) 0:06:55.366 ********** 2025-05-31 19:42:59.418100 | orchestrator | ok: [testbed-node-1] 2025-05-31 19:42:59.418254 | orchestrator | ok: [testbed-node-0] 2025-05-31 19:42:59.419988 | orchestrator | ok: [testbed-node-2] 2025-05-31 19:42:59.420932 | orchestrator | 2025-05-31 19:42:59.422004 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2025-05-31 19:42:59.423443 | orchestrator | Saturday 31 May 2025 19:42:59 +0000 (0:00:01.285) 0:06:56.652 ********** 2025-05-31 19:43:00.764267 | orchestrator | changed: [testbed-manager] 2025-05-31 19:43:00.765279 | orchestrator | changed: [testbed-node-3] 2025-05-31 19:43:00.766925 | orchestrator | changed: [testbed-node-4] 2025-05-31 19:43:00.767822 | orchestrator | changed: [testbed-node-0] 2025-05-31 19:43:00.768599 | orchestrator | changed: [testbed-node-1] 2025-05-31 19:43:00.769509 | orchestrator | changed: [testbed-node-2] 2025-05-31 19:43:00.770135 | orchestrator | changed: [testbed-node-5] 2025-05-31 19:43:00.770523 | orchestrator | 2025-05-31 19:43:00.770906 | 
orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] *************** 2025-05-31 19:43:00.771839 | orchestrator | Saturday 31 May 2025 19:43:00 +0000 (0:00:01.347) 0:06:57.999 ********** 2025-05-31 19:43:01.872591 | orchestrator | changed: [testbed-manager] 2025-05-31 19:43:01.873021 | orchestrator | changed: [testbed-node-3] 2025-05-31 19:43:01.873052 | orchestrator | changed: [testbed-node-4] 2025-05-31 19:43:01.873064 | orchestrator | changed: [testbed-node-5] 2025-05-31 19:43:01.873367 | orchestrator | changed: [testbed-node-0] 2025-05-31 19:43:01.873659 | orchestrator | changed: [testbed-node-1] 2025-05-31 19:43:01.874252 | orchestrator | changed: [testbed-node-2] 2025-05-31 19:43:01.874550 | orchestrator | 2025-05-31 19:43:01.875049 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2025-05-31 19:43:01.875544 | orchestrator | Saturday 31 May 2025 19:43:01 +0000 (0:00:01.105) 0:06:59.105 ********** 2025-05-31 19:43:02.016675 | orchestrator | skipping: [testbed-manager] 2025-05-31 19:43:04.354636 | orchestrator | changed: [testbed-node-5] 2025-05-31 19:43:04.355622 | orchestrator | changed: [testbed-node-3] 2025-05-31 19:43:04.359501 | orchestrator | changed: [testbed-node-4] 2025-05-31 19:43:04.359547 | orchestrator | changed: [testbed-node-2] 2025-05-31 19:43:04.359567 | orchestrator | changed: [testbed-node-0] 2025-05-31 19:43:04.360702 | orchestrator | changed: [testbed-node-1] 2025-05-31 19:43:04.361603 | orchestrator | 2025-05-31 19:43:04.362098 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2025-05-31 19:43:04.362884 | orchestrator | Saturday 31 May 2025 19:43:04 +0000 (0:00:02.483) 0:07:01.588 ********** 2025-05-31 19:43:04.456893 | orchestrator | skipping: [testbed-node-3] 2025-05-31 19:43:04.457049 | orchestrator | 2025-05-31 19:43:04.457587 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2025-05-31 19:43:04.458896 | orchestrator | Saturday 31 May 2025 19:43:04 +0000 (0:00:00.106) 0:07:01.695 ********** 2025-05-31 19:43:05.501317 | orchestrator | ok: [testbed-manager] 2025-05-31 19:43:05.501915 | orchestrator | changed: [testbed-node-4] 2025-05-31 19:43:05.503026 | orchestrator | changed: [testbed-node-3] 2025-05-31 19:43:05.504060 | orchestrator | changed: [testbed-node-5] 2025-05-31 19:43:05.504274 | orchestrator | changed: [testbed-node-0] 2025-05-31 19:43:05.504875 | orchestrator | changed: [testbed-node-1] 2025-05-31 19:43:05.506514 | orchestrator | changed: [testbed-node-2] 2025-05-31 19:43:05.506543 | orchestrator | 2025-05-31 19:43:05.506974 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2025-05-31 19:43:05.507197 | orchestrator | Saturday 31 May 2025 19:43:05 +0000 (0:00:01.041) 0:07:02.736 ********** 2025-05-31 19:43:05.903905 | orchestrator | skipping: [testbed-manager] 2025-05-31 19:43:05.989294 | orchestrator | skipping: [testbed-node-3] 2025-05-31 19:43:06.062172 | orchestrator | skipping: [testbed-node-4] 2025-05-31 19:43:06.142204 | orchestrator | skipping: [testbed-node-5] 2025-05-31 19:43:06.212068 | orchestrator | skipping: [testbed-node-0] 2025-05-31 19:43:06.333956 | orchestrator | skipping: [testbed-node-1] 2025-05-31 19:43:06.335054 | orchestrator | skipping: [testbed-node-2] 2025-05-31 19:43:06.336633 | orchestrator | 2025-05-31 19:43:06.337250 | orchestrator | TASK [osism.services.docker : Include facts tasks] 
***************************** 2025-05-31 19:43:06.340657 | orchestrator | Saturday 31 May 2025 19:43:06 +0000 (0:00:00.834) 0:07:03.570 ********** 2025-05-31 19:43:07.264997 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 19:43:07.265973 | orchestrator | 2025-05-31 19:43:07.266435 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2025-05-31 19:43:07.268120 | orchestrator | Saturday 31 May 2025 19:43:07 +0000 (0:00:00.933) 0:07:04.503 ********** 2025-05-31 19:43:07.720126 | orchestrator | ok: [testbed-manager] 2025-05-31 19:43:08.154747 | orchestrator | ok: [testbed-node-3] 2025-05-31 19:43:08.155179 | orchestrator | ok: [testbed-node-4] 2025-05-31 19:43:08.155311 | orchestrator | ok: [testbed-node-5] 2025-05-31 19:43:08.156553 | orchestrator | ok: [testbed-node-0] 2025-05-31 19:43:08.159834 | orchestrator | ok: [testbed-node-1] 2025-05-31 19:43:08.160392 | orchestrator | ok: [testbed-node-2] 2025-05-31 19:43:08.161177 | orchestrator | 2025-05-31 19:43:08.161902 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2025-05-31 19:43:08.162787 | orchestrator | Saturday 31 May 2025 19:43:08 +0000 (0:00:00.889) 0:07:05.393 ********** 2025-05-31 19:43:10.812854 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2025-05-31 19:43:10.813041 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2025-05-31 19:43:10.814376 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2025-05-31 19:43:10.818578 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2025-05-31 19:43:10.819639 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2025-05-31 19:43:10.820671 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2025-05-31 19:43:10.821273 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2025-05-31 19:43:10.822153 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2025-05-31 19:43:10.823370 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2025-05-31 19:43:10.824343 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2025-05-31 19:43:10.825162 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2025-05-31 19:43:10.825938 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2025-05-31 19:43:10.827169 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2025-05-31 19:43:10.828613 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2025-05-31 19:43:10.828636 | orchestrator | 2025-05-31 19:43:10.829076 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] ******* 2025-05-31 19:43:10.829931 | orchestrator | Saturday 31 May 2025 19:43:10 +0000 (0:00:02.654) 0:07:08.047 ********** 2025-05-31 19:43:10.974850 | orchestrator | skipping: [testbed-manager] 2025-05-31 19:43:11.054735 | orchestrator | skipping: [testbed-node-3] 2025-05-31 19:43:11.131709 | orchestrator | skipping: [testbed-node-4] 2025-05-31 19:43:11.198314 | orchestrator | skipping: [testbed-node-5] 2025-05-31 19:43:11.260594 | orchestrator | skipping: [testbed-node-0] 2025-05-31 19:43:11.358442 | orchestrator | skipping: [testbed-node-1] 2025-05-31 19:43:11.359857 | orchestrator | skipping: 
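The "Copy docker fact files" task above distributes local fact scripts for docker_containers and docker_images. Ansible picks up executable files under /etc/ansible/facts.d and exposes their JSON output as ansible_local facts on the next fact gathering. A minimal sketch of such a task, assuming templated fact scripts; the template names, destination path, and mode are illustrative, not taken from the osism.services.docker role:

- name: Copy docker fact files (sketch)
  ansible.builtin.template:
    src: "{{ item }}.fact.j2"                     # assumed template name
    dest: "/etc/ansible/facts.d/{{ item }}.fact"  # assumed facts directory
    mode: "0755"                                  # fact scripts must be executable
  loop:
    - docker_containers
    - docker_images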
2025-05-31 19:43:11.361189 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] ***
2025-05-31 19:43:11.361219 | orchestrator | Saturday 31 May 2025 19:43:11 +0000 (0:00:00.550) 0:07:08.597 **********
2025-05-31 19:43:12.182942 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-31 19:43:12.183621 | orchestrator |
2025-05-31 19:43:12.184839 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] ***
2025-05-31 19:43:12.188806 | orchestrator | Saturday 31 May 2025 19:43:12 +0000 (0:00:00.820) 0:07:09.418 **********
2025-05-31 19:43:12.861639 | orchestrator | ok: [testbed-manager]
2025-05-31 19:43:12.936817 | orchestrator | ok: [testbed-node-3]
2025-05-31 19:43:13.387203 | orchestrator | ok: [testbed-node-4]
2025-05-31 19:43:13.387310 | orchestrator | ok: [testbed-node-5]
2025-05-31 19:43:13.387403 | orchestrator | ok: [testbed-node-0]
2025-05-31 19:43:13.387420 | orchestrator | ok: [testbed-node-1]
2025-05-31 19:43:13.388239 | orchestrator | ok: [testbed-node-2]
2025-05-31 19:43:13.388976 | orchestrator |
2025-05-31 19:43:13.389565 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ******
2025-05-31 19:43:13.390139 | orchestrator | Saturday 31 May 2025 19:43:13 +0000 (0:00:01.203) 0:07:10.622 **********
2025-05-31 19:43:13.855018 | orchestrator | ok: [testbed-manager]
2025-05-31 19:43:14.262519 | orchestrator | ok: [testbed-node-3]
2025-05-31 19:43:14.264325 | orchestrator | ok: [testbed-node-4]
2025-05-31 19:43:14.265277 | orchestrator | ok: [testbed-node-5]
2025-05-31 19:43:14.265816 | orchestrator | ok: [testbed-node-0]
2025-05-31 19:43:14.266918 | orchestrator | ok: [testbed-node-1]
2025-05-31 19:43:14.267946 | orchestrator | ok: [testbed-node-2]
2025-05-31 19:43:14.268807 | orchestrator |
2025-05-31 19:43:14.269432 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] *************
2025-05-31 19:43:14.272359 | orchestrator | Saturday 31 May 2025 19:43:14 +0000 (0:00:00.874) 0:07:11.496 **********
2025-05-31 19:43:14.400943 | orchestrator | skipping: [testbed-manager]
2025-05-31 19:43:14.465572 | orchestrator | skipping: [testbed-node-3]
2025-05-31 19:43:14.526925 | orchestrator | skipping: [testbed-node-4]
2025-05-31 19:43:14.594645 | orchestrator | skipping: [testbed-node-5]
2025-05-31 19:43:14.657619 | orchestrator | skipping: [testbed-node-0]
2025-05-31 19:43:14.741646 | orchestrator | skipping: [testbed-node-1]
2025-05-31 19:43:14.742595 | orchestrator | skipping: [testbed-node-2]
2025-05-31 19:43:14.743657 | orchestrator |
2025-05-31 19:43:14.744907 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] *********
2025-05-31 19:43:14.745651 | orchestrator | Saturday 31 May 2025 19:43:14 +0000 (0:00:00.482) 0:07:11.979 **********
2025-05-31 19:43:16.305856 | orchestrator | ok: [testbed-node-3]
2025-05-31 19:43:16.309208 | orchestrator | ok: [testbed-manager]
2025-05-31 19:43:16.309940 | orchestrator | ok: [testbed-node-4]
2025-05-31 19:43:16.310757 | orchestrator | ok: [testbed-node-5]
2025-05-31 19:43:16.311428 | orchestrator | ok: [testbed-node-2]
2025-05-31 19:43:16.311969 | orchestrator | ok: [testbed-node-0]
2025-05-31 19:43:16.312798 | orchestrator | ok: [testbed-node-1]
2025-05-31 19:43:16.313245 | orchestrator |
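The docker_compose tasks above migrate hosts away from the legacy standalone docker-compose binary and distribution package, in favour of the docker-compose-plugin installed next. A sketch of the checksum-and-remove pattern, under the assumption that the legacy binary lives at /usr/local/bin/docker-compose; the real role may use different paths and conditions:

- name: Get checksum of docker-compose file (sketch)
  ansible.builtin.stat:
    path: /usr/local/bin/docker-compose  # assumed legacy location
    checksum_algorithm: sha256
  register: docker_compose_binary

- name: Remove docker-compose binary (sketch)
  ansible.builtin.file:
    path: /usr/local/bin/docker-compose
    state: absent
  when: docker_compose_binary.stat.exists  # simplified guard

- name: Uninstall docker-compose package (sketch)
  ansible.builtin.apt:
    name: docker-compose
    state: absent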
2025-05-31 19:43:16.315131 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2025-05-31 19:43:16.315388 | orchestrator | Saturday 31 May 2025 19:43:16 +0000 (0:00:01.558) 0:07:13.537 **********
2025-05-31 19:43:16.433269 | orchestrator | skipping: [testbed-manager]
2025-05-31 19:43:16.498258 | orchestrator | skipping: [testbed-node-3]
2025-05-31 19:43:16.559020 | orchestrator | skipping: [testbed-node-4]
2025-05-31 19:43:16.619642 | orchestrator | skipping: [testbed-node-5]
2025-05-31 19:43:16.691989 | orchestrator | skipping: [testbed-node-0]
2025-05-31 19:43:16.789881 | orchestrator | skipping: [testbed-node-1]
2025-05-31 19:43:16.790920 | orchestrator | skipping: [testbed-node-2]
2025-05-31 19:43:16.793866 | orchestrator |
2025-05-31 19:43:16.793890 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2025-05-31 19:43:16.793911 | orchestrator | Saturday 31 May 2025 19:43:16 +0000 (0:00:00.489) 0:07:14.027 **********
2025-05-31 19:43:25.099153 | orchestrator | ok: [testbed-manager]
2025-05-31 19:43:25.099352 | orchestrator | changed: [testbed-node-3]
2025-05-31 19:43:25.100756 | orchestrator | changed: [testbed-node-4]
2025-05-31 19:43:25.101839 | orchestrator | changed: [testbed-node-5]
2025-05-31 19:43:25.103997 | orchestrator | changed: [testbed-node-1]
2025-05-31 19:43:25.104286 | orchestrator | changed: [testbed-node-2]
2025-05-31 19:43:25.105499 | orchestrator | changed: [testbed-node-0]
2025-05-31 19:43:25.106102 | orchestrator |
2025-05-31 19:43:25.106709 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2025-05-31 19:43:25.107395 | orchestrator | Saturday 31 May 2025 19:43:25 +0000 (0:00:08.306) 0:07:22.334 **********
2025-05-31 19:43:26.488264 | orchestrator | ok: [testbed-manager]
2025-05-31 19:43:26.489817 | orchestrator | changed: [testbed-node-3]
2025-05-31 19:43:26.489903 | orchestrator | changed: [testbed-node-4]
2025-05-31 19:43:26.491137 | orchestrator | changed: [testbed-node-5]
2025-05-31 19:43:26.492815 | orchestrator | changed: [testbed-node-0]
2025-05-31 19:43:26.493059 | orchestrator | changed: [testbed-node-1]
2025-05-31 19:43:26.494119 | orchestrator | changed: [testbed-node-2]
2025-05-31 19:43:26.494349 | orchestrator |
2025-05-31 19:43:26.495199 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2025-05-31 19:43:26.495221 | orchestrator | Saturday 31 May 2025 19:43:26 +0000 (0:00:01.391) 0:07:23.725 **********
2025-05-31 19:43:28.250113 | orchestrator | ok: [testbed-manager]
2025-05-31 19:43:28.252614 | orchestrator | changed: [testbed-node-4]
2025-05-31 19:43:28.255823 | orchestrator | changed: [testbed-node-3]
2025-05-31 19:43:28.256945 | orchestrator | changed: [testbed-node-5]
2025-05-31 19:43:28.258135 | orchestrator | changed: [testbed-node-0]
2025-05-31 19:43:28.258884 | orchestrator | changed: [testbed-node-1]
2025-05-31 19:43:28.259788 | orchestrator | changed: [testbed-node-2]
2025-05-31 19:43:28.260504 | orchestrator |
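osism.target is a systemd target that gives the docker-compose based services a common anchor: once the target is enabled, individual compose units can declare WantedBy=osism.target and start as a group. A sketch of how such a unit could be shipped and enabled; the unit content is illustrative, not the collection's actual file:

- name: Copy osism.target systemd file (sketch)
  ansible.builtin.copy:
    dest: /etc/systemd/system/osism.target
    content: |
      [Unit]
      Description=OSISM services target (illustrative)
      After=docker.service

      [Install]
      WantedBy=multi-user.target

- name: Enable osism.target (sketch)
  ansible.builtin.systemd:
    name: osism.target
    enabled: true
    daemon_reload: true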
2025-05-31 19:43:28.261229 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2025-05-31 19:43:28.262104 | orchestrator | Saturday 31 May 2025 19:43:28 +0000 (0:00:01.759) 0:07:25.484 **********
2025-05-31 19:43:30.036866 | orchestrator | ok: [testbed-manager]
2025-05-31 19:43:30.037134 | orchestrator | changed: [testbed-node-3]
2025-05-31 19:43:30.037349 | orchestrator | changed: [testbed-node-4]
2025-05-31 19:43:30.038102 | orchestrator | changed: [testbed-node-5]
2025-05-31 19:43:30.038620 | orchestrator | changed: [testbed-node-1]
2025-05-31 19:43:30.039153 | orchestrator | changed: [testbed-node-2]
2025-05-31 19:43:30.040931 | orchestrator | changed: [testbed-node-0]
2025-05-31 19:43:30.041002 | orchestrator |
2025-05-31 19:43:30.042651 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-05-31 19:43:30.042698 | orchestrator | Saturday 31 May 2025 19:43:30 +0000 (0:00:01.786) 0:07:27.271 **********
2025-05-31 19:43:30.467993 | orchestrator | ok: [testbed-manager]
2025-05-31 19:43:31.208979 | orchestrator | ok: [testbed-node-3]
2025-05-31 19:43:31.210400 | orchestrator | ok: [testbed-node-4]
2025-05-31 19:43:31.211841 | orchestrator | ok: [testbed-node-5]
2025-05-31 19:43:31.212356 | orchestrator | ok: [testbed-node-0]
2025-05-31 19:43:31.213402 | orchestrator | ok: [testbed-node-1]
2025-05-31 19:43:31.213949 | orchestrator | ok: [testbed-node-2]
2025-05-31 19:43:31.214395 | orchestrator |
2025-05-31 19:43:31.215239 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-05-31 19:43:31.216204 | orchestrator | Saturday 31 May 2025 19:43:31 +0000 (0:00:01.176) 0:07:28.447 **********
2025-05-31 19:43:31.349224 | orchestrator | skipping: [testbed-manager]
2025-05-31 19:43:31.424268 | orchestrator | skipping: [testbed-node-3]
2025-05-31 19:43:31.490325 | orchestrator | skipping: [testbed-node-4]
2025-05-31 19:43:31.560349 | orchestrator | skipping: [testbed-node-5]
2025-05-31 19:43:31.638312 | orchestrator | skipping: [testbed-node-0]
2025-05-31 19:43:32.042686 | orchestrator | skipping: [testbed-node-1]
2025-05-31 19:43:32.043266 | orchestrator | skipping: [testbed-node-2]
2025-05-31 19:43:32.044782 | orchestrator |
2025-05-31 19:43:32.045344 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2025-05-31 19:43:32.046382 | orchestrator | Saturday 31 May 2025 19:43:32 +0000 (0:00:00.832) 0:07:29.279 **********
2025-05-31 19:43:32.192330 | orchestrator | skipping: [testbed-manager]
2025-05-31 19:43:32.262596 | orchestrator | skipping: [testbed-node-3]
2025-05-31 19:43:32.343281 | orchestrator | skipping: [testbed-node-4]
2025-05-31 19:43:32.406004 | orchestrator | skipping: [testbed-node-5]
2025-05-31 19:43:32.476928 | orchestrator | skipping: [testbed-node-0]
2025-05-31 19:43:32.590804 | orchestrator | skipping: [testbed-node-1]
2025-05-31 19:43:32.591135 | orchestrator | skipping: [testbed-node-2]
2025-05-31 19:43:32.592337 | orchestrator |
2025-05-31 19:43:32.593704 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2025-05-31 19:43:32.597442 | orchestrator | Saturday 31 May 2025 19:43:32 +0000 (0:00:00.549) 0:07:29.829 **********
2025-05-31 19:43:32.732274 | orchestrator | ok: [testbed-manager]
2025-05-31 19:43:32.807923 | orchestrator | ok: [testbed-node-3]
2025-05-31 19:43:32.876099 | orchestrator | ok: [testbed-node-4]
2025-05-31 19:43:32.947539 | orchestrator | ok: [testbed-node-5]
2025-05-31 19:43:33.292830 | orchestrator | ok: [testbed-node-0]
2025-05-31 19:43:33.414637 | orchestrator | ok: [testbed-node-1]
2025-05-31 19:43:33.414740 | orchestrator | ok: [testbed-node-2]
2025-05-31 19:43:33.415558 | orchestrator |
2025-05-31 19:43:33.415593 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2025-05-31 19:43:33.416792 | orchestrator | Saturday 31 May 2025 19:43:33 +0000 (0:00:00.821) 0:07:30.650 **********
2025-05-31 19:43:33.567782 | orchestrator | ok: [testbed-manager]
2025-05-31 19:43:33.640144 | orchestrator | ok: [testbed-node-3]
2025-05-31 19:43:33.711160 | orchestrator | ok: [testbed-node-4]
2025-05-31 19:43:33.786536 | orchestrator | ok: [testbed-node-5]
2025-05-31 19:43:33.857183 | orchestrator | ok: [testbed-node-0]
2025-05-31 19:43:33.962690 | orchestrator | ok: [testbed-node-1]
2025-05-31 19:43:33.963229 | orchestrator | ok: [testbed-node-2]
2025-05-31 19:43:33.963676 | orchestrator |
2025-05-31 19:43:33.964404 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2025-05-31 19:43:33.965036 | orchestrator | Saturday 31 May 2025 19:43:33 +0000 (0:00:00.549) 0:07:31.200 **********
2025-05-31 19:43:34.115734 | orchestrator | ok: [testbed-manager]
2025-05-31 19:43:34.191662 | orchestrator | ok: [testbed-node-3]
2025-05-31 19:43:34.270241 | orchestrator | ok: [testbed-node-4]
2025-05-31 19:43:34.346595 | orchestrator | ok: [testbed-node-5]
2025-05-31 19:43:34.412711 | orchestrator | ok: [testbed-node-0]
2025-05-31 19:43:34.530331 | orchestrator | ok: [testbed-node-1]
2025-05-31 19:43:34.531449 | orchestrator | ok: [testbed-node-2]
2025-05-31 19:43:34.532725 | orchestrator |
2025-05-31 19:43:34.533653 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2025-05-31 19:43:34.536650 | orchestrator | Saturday 31 May 2025 19:43:34 +0000 (0:00:00.568) 0:07:31.769 **********
2025-05-31 19:43:40.324291 | orchestrator | ok: [testbed-manager]
2025-05-31 19:43:40.325358 | orchestrator | ok: [testbed-node-4]
2025-05-31 19:43:40.326102 | orchestrator | ok: [testbed-node-5]
2025-05-31 19:43:40.326731 | orchestrator | ok: [testbed-node-3]
2025-05-31 19:43:40.327283 | orchestrator | ok: [testbed-node-2]
2025-05-31 19:43:40.327750 | orchestrator | ok: [testbed-node-1]
2025-05-31 19:43:40.328443 | orchestrator | ok: [testbed-node-0]
2025-05-31 19:43:40.328920 | orchestrator |
2025-05-31 19:43:40.329835 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2025-05-31 19:43:40.330015 | orchestrator | Saturday 31 May 2025 19:43:40 +0000 (0:00:05.791) 0:07:37.560 **********
2025-05-31 19:43:40.504059 | orchestrator | skipping: [testbed-manager]
2025-05-31 19:43:40.574539 | orchestrator | skipping: [testbed-node-3]
2025-05-31 19:43:40.648012 | orchestrator | skipping: [testbed-node-4]
2025-05-31 19:43:40.727326 | orchestrator | skipping: [testbed-node-5]
2025-05-31 19:43:40.798320 | orchestrator | skipping: [testbed-node-0]
2025-05-31 19:43:40.923749 | orchestrator | skipping: [testbed-node-1]
2025-05-31 19:43:40.924029 | orchestrator | skipping: [testbed-node-2]
2025-05-31 19:43:40.924553 | orchestrator |
2025-05-31 19:43:40.925760 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2025-05-31 19:43:40.926589 | orchestrator | Saturday 31 May 2025 19:43:40 +0000 (0:00:00.599) 0:07:38.159 **********
2025-05-31 19:43:42.071684 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-31 19:43:42.071843 | orchestrator |
2025-05-31 19:43:42.071935 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2025-05-31 19:43:42.072725 | orchestrator | Saturday 31 May 2025 19:43:42 +0000 (0:00:01.148) 0:07:39.308 **********
2025-05-31 19:43:43.934901 | orchestrator | ok: [testbed-manager]
2025-05-31 19:43:43.935050 | orchestrator | ok: [testbed-node-3]
2025-05-31 19:43:43.935559 | orchestrator | ok: [testbed-node-4]
2025-05-31 19:43:43.936295 | orchestrator | ok: [testbed-node-5]
2025-05-31 19:43:43.936773 | orchestrator | ok: [testbed-node-1]
2025-05-31 19:43:43.938246 | orchestrator | ok: [testbed-node-0]
2025-05-31 19:43:43.938848 | orchestrator | ok: [testbed-node-2]
2025-05-31 19:43:43.939599 | orchestrator |
2025-05-31 19:43:43.939692 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2025-05-31 19:43:43.941601 | orchestrator | Saturday 31 May 2025 19:43:43 +0000 (0:00:01.861) 0:07:41.169 **********
2025-05-31 19:43:45.131664 | orchestrator | ok: [testbed-manager]
2025-05-31 19:43:45.132530 | orchestrator | ok: [testbed-node-3]
2025-05-31 19:43:45.134994 | orchestrator | ok: [testbed-node-4]
2025-05-31 19:43:45.135569 | orchestrator | ok: [testbed-node-5]
2025-05-31 19:43:45.137019 | orchestrator | ok: [testbed-node-0]
2025-05-31 19:43:45.137602 | orchestrator | ok: [testbed-node-1]
2025-05-31 19:43:45.138572 | orchestrator | ok: [testbed-node-2]
2025-05-31 19:43:45.142240 | orchestrator |
2025-05-31 19:43:45.142317 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2025-05-31 19:43:45.142333 | orchestrator | Saturday 31 May 2025 19:43:45 +0000 (0:00:01.198) 0:07:42.368 **********
2025-05-31 19:43:45.860989 | orchestrator | ok: [testbed-manager]
2025-05-31 19:43:46.329577 | orchestrator | ok: [testbed-node-3]
2025-05-31 19:43:46.330227 | orchestrator | ok: [testbed-node-4]
2025-05-31 19:43:46.331165 | orchestrator | ok: [testbed-node-5]
2025-05-31 19:43:46.331682 | orchestrator | ok: [testbed-node-0]
2025-05-31 19:43:46.333110 | orchestrator | ok: [testbed-node-1]
2025-05-31 19:43:46.333603 | orchestrator | ok: [testbed-node-2]
2025-05-31 19:43:46.334541 | orchestrator |
2025-05-31 19:43:46.335279 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2025-05-31 19:43:46.336003 | orchestrator | Saturday 31 May 2025 19:43:46 +0000 (0:00:01.195) 0:07:43.564 **********
2025-05-31 19:43:48.113126 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-05-31 19:43:48.113513 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-05-31 19:43:48.113844 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-05-31 19:43:48.114205 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-05-31 19:43:48.115064 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-05-31 19:43:48.115829 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-05-31 19:43:48.116989 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-05-31 19:43:48.117395 | orchestrator |
2025-05-31 19:43:48.117730 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2025-05-31 19:43:48.118294 | orchestrator | Saturday 31 May 2025 19:43:48 +0000 (0:00:01.784) 0:07:45.348 **********
2025-05-31 19:43:49.027052 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-31 19:43:49.028102 | orchestrator |
2025-05-31 19:43:49.029719 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2025-05-31 19:43:49.030314 | orchestrator | Saturday 31 May 2025 19:43:49 +0000 (0:00:00.913) 0:07:46.262 **********
2025-05-31 19:43:58.512264 | orchestrator | changed: [testbed-manager]
2025-05-31 19:43:58.512755 | orchestrator | changed: [testbed-node-5]
2025-05-31 19:43:58.513390 | orchestrator | changed: [testbed-node-4]
2025-05-31 19:43:58.516544 | orchestrator | changed: [testbed-node-3]
2025-05-31 19:43:58.517718 | orchestrator | changed: [testbed-node-1]
2025-05-31 19:43:58.520083 | orchestrator | changed: [testbed-node-2]
2025-05-31 19:43:58.520913 | orchestrator | changed: [testbed-node-0]
2025-05-31 19:43:58.522640 | orchestrator |
2025-05-31 19:43:58.523921 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2025-05-31 19:43:58.525525 | orchestrator | Saturday 31 May 2025 19:43:58 +0000 (0:00:09.486) 0:07:55.748 **********
2025-05-31 19:44:00.558289 | orchestrator | ok: [testbed-manager]
2025-05-31 19:44:00.559484 | orchestrator | ok: [testbed-node-3]
2025-05-31 19:44:00.562986 | orchestrator | ok: [testbed-node-4]
2025-05-31 19:44:00.563012 | orchestrator | ok: [testbed-node-5]
2025-05-31 19:44:00.563024 | orchestrator | ok: [testbed-node-0]
2025-05-31 19:44:00.563035 | orchestrator | ok: [testbed-node-1]
2025-05-31 19:44:00.563046 | orchestrator | ok: [testbed-node-2]
2025-05-31 19:44:00.563938 | orchestrator |
2025-05-31 19:44:00.565525 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2025-05-31 19:44:00.566090 | orchestrator | Saturday 31 May 2025 19:44:00 +0000 (0:00:02.045) 0:07:57.794 **********
2025-05-31 19:44:01.853246 | orchestrator | ok: [testbed-node-3]
2025-05-31 19:44:01.853418 | orchestrator | ok: [testbed-node-4]
2025-05-31 19:44:01.854456 | orchestrator | ok: [testbed-node-5]
2025-05-31 19:44:01.855358 | orchestrator | ok: [testbed-node-0]
2025-05-31 19:44:01.856066 | orchestrator | ok: [testbed-node-1]
2025-05-31 19:44:01.857786 | orchestrator | ok: [testbed-node-2]
2025-05-31 19:44:01.858083 | orchestrator |
2025-05-31 19:44:01.859106 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2025-05-31 19:44:01.860142 | orchestrator | Saturday 31 May 2025 19:44:01 +0000 (0:00:01.293) 0:07:59.087 **********
2025-05-31 19:44:03.298945 | orchestrator | changed: [testbed-manager]
2025-05-31 19:44:03.299433 | orchestrator | changed: [testbed-node-3]
2025-05-31 19:44:03.300198 | orchestrator | changed: [testbed-node-4]
2025-05-31 19:44:03.301423 | orchestrator | changed: [testbed-node-5]
2025-05-31 19:44:03.302226 | orchestrator | changed: [testbed-node-0]
2025-05-31 19:44:03.302938 | orchestrator | changed: [testbed-node-1]
2025-05-31 19:44:03.305601 | orchestrator | changed: [testbed-node-2]
2025-05-31 19:44:03.306580 | orchestrator |
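The chrony role renders /etc/chrony/chrony.conf from the chrony.conf.j2 template shown in the item output, and a change notifies the "Restart chrony service" handler that fires at the end of the handler block above. A sketch of that wiring; the destination path matches the usual Debian-family layout, while the handler body is an assumption:

- name: Copy configuration file (sketch)
  ansible.builtin.template:
    src: chrony.conf.j2
    dest: /etc/chrony/chrony.conf  # assumed chrony_conf_file default
    mode: "0644"
  notify: Restart chrony service

# handler (sketch)
- name: Restart chrony service
  ansible.builtin.service:
    name: chrony
    state: restarted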
2025-05-31 19:44:03.306740 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2025-05-31 19:44:03.307559 | orchestrator |
2025-05-31 19:44:03.307988 | orchestrator | TASK [Include hardening role] **************************************************
2025-05-31 19:44:03.308731 | orchestrator | Saturday 31 May 2025 19:44:03 +0000 (0:00:01.449) 0:08:00.536 **********
2025-05-31 19:44:03.456311 | orchestrator | skipping: [testbed-manager]
2025-05-31 19:44:03.522819 | orchestrator | skipping: [testbed-node-3]
2025-05-31 19:44:03.586460 | orchestrator | skipping: [testbed-node-4]
2025-05-31 19:44:03.672036 | orchestrator | skipping: [testbed-node-5]
2025-05-31 19:44:03.735286 | orchestrator | skipping: [testbed-node-0]
2025-05-31 19:44:03.864077 | orchestrator | skipping: [testbed-node-1]
2025-05-31 19:44:03.864895 | orchestrator | skipping: [testbed-node-2]
2025-05-31 19:44:03.866376 | orchestrator |
2025-05-31 19:44:03.866774 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2025-05-31 19:44:03.870842 | orchestrator |
2025-05-31 19:44:03.870889 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2025-05-31 19:44:03.870900 | orchestrator | Saturday 31 May 2025 19:44:03 +0000 (0:00:00.564) 0:08:01.100 **********
2025-05-31 19:44:05.277953 | orchestrator | changed: [testbed-manager]
2025-05-31 19:44:05.278627 | orchestrator | changed: [testbed-node-3]
2025-05-31 19:44:05.280051 | orchestrator | changed: [testbed-node-4]
2025-05-31 19:44:05.281122 | orchestrator | changed: [testbed-node-5]
2025-05-31 19:44:05.282778 | orchestrator | changed: [testbed-node-1]
2025-05-31 19:44:05.283688 | orchestrator | changed: [testbed-node-0]
2025-05-31 19:44:05.284714 | orchestrator | changed: [testbed-node-2]
2025-05-31 19:44:05.285441 | orchestrator |
2025-05-31 19:44:05.286775 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2025-05-31 19:44:05.286904 | orchestrator | Saturday 31 May 2025 19:44:05 +0000 (0:00:01.415) 0:08:02.516 **********
2025-05-31 19:44:06.748119 | orchestrator | ok: [testbed-manager]
2025-05-31 19:44:06.749051 | orchestrator | ok: [testbed-node-3]
2025-05-31 19:44:06.750124 | orchestrator | ok: [testbed-node-4]
2025-05-31 19:44:06.755093 | orchestrator | ok: [testbed-node-5]
2025-05-31 19:44:06.756036 | orchestrator | ok: [testbed-node-0]
2025-05-31 19:44:06.758115 | orchestrator | ok: [testbed-node-1]
2025-05-31 19:44:06.759011 | orchestrator | ok: [testbed-node-2]
2025-05-31 19:44:06.759802 | orchestrator |
2025-05-31 19:44:06.760602 | orchestrator | TASK [Include auditd role] *****************************************************
2025-05-31 19:44:06.761209 | orchestrator | Saturday 31 May 2025 19:44:06 +0000 (0:00:01.467) 0:08:03.983 **********
2025-05-31 19:44:07.168542 | orchestrator | skipping: [testbed-manager]
2025-05-31 19:44:07.247177 | orchestrator | skipping: [testbed-node-3]
2025-05-31 19:44:07.322983 | orchestrator | skipping: [testbed-node-4]
2025-05-31 19:44:07.392809 | orchestrator | skipping: [testbed-node-5]
2025-05-31 19:44:07.462332 | orchestrator | skipping: [testbed-node-0]
2025-05-31 19:44:07.871581 | orchestrator | skipping: [testbed-node-1]
2025-05-31 19:44:07.871780 | orchestrator | skipping: [testbed-node-2]
2025-05-31 19:44:07.873252 | orchestrator |
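The journald role copies a configuration file and, because the copy reports changed, the "Restart journald service" handler below restarts systemd-journald. A sketch using a drop-in file; the keys are ordinary journald.conf settings, but the path and values are assumptions, not the role's defaults:

- name: Copy configuration file (sketch)
  ansible.builtin.copy:
    dest: /etc/systemd/journald.conf.d/osism.conf  # assumed drop-in path
    content: |
      [Journal]
      SystemMaxUse=1G         # illustrative limit
      MaxRetentionSec=1month  # illustrative retention
  notify: Restart journald service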
2025-05-31 19:44:07.874409 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2025-05-31 19:44:07.877079 | orchestrator | Saturday 31 May 2025 19:44:07 +0000 (0:00:01.125) 0:08:05.108 **********
2025-05-31 19:44:09.168136 | orchestrator | changed: [testbed-manager]
2025-05-31 19:44:09.169401 | orchestrator | changed: [testbed-node-3]
2025-05-31 19:44:09.171165 | orchestrator | changed: [testbed-node-4]
2025-05-31 19:44:09.172305 | orchestrator | changed: [testbed-node-5]
2025-05-31 19:44:09.173084 | orchestrator | changed: [testbed-node-0]
2025-05-31 19:44:09.173758 | orchestrator | changed: [testbed-node-1]
2025-05-31 19:44:09.174239 | orchestrator | changed: [testbed-node-2]
2025-05-31 19:44:09.174640 | orchestrator |
2025-05-31 19:44:09.175424 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2025-05-31 19:44:09.176013 | orchestrator |
2025-05-31 19:44:09.176520 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2025-05-31 19:44:09.176978 | orchestrator | Saturday 31 May 2025 19:44:09 +0000 (0:00:01.294) 0:08:06.403 **********
2025-05-31 19:44:10.197215 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-31 19:44:10.197306 | orchestrator |
2025-05-31 19:44:10.197638 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2025-05-31 19:44:10.200602 | orchestrator | Saturday 31 May 2025 19:44:10 +0000 (0:00:01.025) 0:08:07.429 **********
2025-05-31 19:44:10.607969 | orchestrator | ok: [testbed-manager]
2025-05-31 19:44:11.116183 | orchestrator | ok: [testbed-node-3]
2025-05-31 19:44:11.116633 | orchestrator | ok: [testbed-node-4]
2025-05-31 19:44:11.119225 | orchestrator | ok: [testbed-node-5]
2025-05-31 19:44:11.119306 | orchestrator | ok: [testbed-node-0]
2025-05-31 19:44:11.119313 | orchestrator | ok: [testbed-node-1]
2025-05-31 19:44:11.119318 | orchestrator | ok: [testbed-node-2]
2025-05-31 19:44:11.119322 | orchestrator |
2025-05-31 19:44:11.119356 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2025-05-31 19:44:11.119624 | orchestrator | Saturday 31 May 2025 19:44:11 +0000 (0:00:00.922) 0:08:08.351 **********
2025-05-31 19:44:12.350946 | orchestrator | changed: [testbed-manager]
2025-05-31 19:44:12.351582 | orchestrator | changed: [testbed-node-4]
2025-05-31 19:44:12.352577 | orchestrator | changed: [testbed-node-3]
2025-05-31 19:44:12.356336 | orchestrator | changed: [testbed-node-5]
2025-05-31 19:44:12.356364 | orchestrator | changed: [testbed-node-0]
2025-05-31 19:44:12.356376 | orchestrator | changed: [testbed-node-1]
2025-05-31 19:44:12.356387 | orchestrator | changed: [testbed-node-2]
2025-05-31 19:44:12.356399 | orchestrator |
2025-05-31 19:44:12.356411 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2025-05-31 19:44:12.357319 | orchestrator | Saturday 31 May 2025 19:44:12 +0000 (0:00:01.234) 0:08:09.585 **********
2025-05-31 19:44:13.345434 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-31 19:44:13.345686 | orchestrator |
2025-05-31 19:44:13.346460 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2025-05-31 19:44:13.347066 | orchestrator | Saturday 31 May 2025 19:44:13 +0000 (0:00:00.996) 0:08:10.581 **********
2025-05-31 19:44:13.754313 | orchestrator | ok: [testbed-manager]
2025-05-31 19:44:14.180909 | orchestrator | ok: [testbed-node-3]
2025-05-31 19:44:14.181924 | orchestrator | ok: [testbed-node-4]
2025-05-31 19:44:14.183012 | orchestrator | ok: [testbed-node-5]
2025-05-31 19:44:14.183960 | orchestrator | ok: [testbed-node-0]
2025-05-31 19:44:14.184460 | orchestrator | ok: [testbed-node-1]
2025-05-31 19:44:14.184954 | orchestrator | ok: [testbed-node-2]
2025-05-31 19:44:14.186233 | orchestrator |
2025-05-31 19:44:14.186428 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2025-05-31 19:44:14.186981 | orchestrator | Saturday 31 May 2025 19:44:14 +0000 (0:00:00.828) 0:08:11.409 **********
2025-05-31 19:44:14.607799 | orchestrator | changed: [testbed-manager]
2025-05-31 19:44:15.286816 | orchestrator | changed: [testbed-node-3]
2025-05-31 19:44:15.286932 | orchestrator | changed: [testbed-node-4]
2025-05-31 19:44:15.287267 | orchestrator | changed: [testbed-node-5]
2025-05-31 19:44:15.288653 | orchestrator | changed: [testbed-node-1]
2025-05-31 19:44:15.289869 | orchestrator | changed: [testbed-node-0]
2025-05-31 19:44:15.290784 | orchestrator | changed: [testbed-node-2]
2025-05-31 19:44:15.291651 | orchestrator |
2025-05-31 19:44:15.292747 | orchestrator | PLAY RECAP *********************************************************************
2025-05-31 19:44:15.293285 | orchestrator | 2025-05-31 19:44:15 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-31 19:44:15.293686 | orchestrator | 2025-05-31 19:44:15 | INFO  | Please wait and do not abort execution.
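The osism.commons.state role persists the bootstrap state as a custom local fact, which is how later runs can detect that a node has already been bootstrapped. A sketch of the idea, assuming an INI-style fact file under /etc/ansible/facts.d; the file name and keys are assumptions:

- name: Write state into file (sketch)
  community.general.ini_file:
    path: /etc/ansible/facts.d/osism.fact  # assumed fact file
    section: bootstrap
    option: status
    value: "True"

# On the next fact gathering this would be readable as
# ansible_local.osism.bootstrap.status.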
2025-05-31 19:44:15.294809 | orchestrator | testbed-manager : ok=162  changed=38  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0
2025-05-31 19:44:15.295593 | orchestrator | testbed-node-0 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-05-31 19:44:15.296385 | orchestrator | testbed-node-1 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-05-31 19:44:15.297393 | orchestrator | testbed-node-2 : ok=170  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-05-31 19:44:15.298310 | orchestrator | testbed-node-3 : ok=169  changed=63  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2025-05-31 19:44:15.298924 | orchestrator | testbed-node-4 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-05-31 19:44:15.299786 | orchestrator | testbed-node-5 : ok=169  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-05-31 19:44:15.300139 | orchestrator |
2025-05-31 19:44:15.300922 | orchestrator |
2025-05-31 19:44:15.301379 | orchestrator | TASKS RECAP ********************************************************************
2025-05-31 19:44:15.301972 | orchestrator | Saturday 31 May 2025 19:44:15 +0000 (0:00:01.114) 0:08:12.523 **********
2025-05-31 19:44:15.302280 | orchestrator | ===============================================================================
2025-05-31 19:44:15.303318 | orchestrator | osism.commons.packages : Install required packages --------------------- 78.23s
2025-05-31 19:44:15.303359 | orchestrator | osism.commons.packages : Download required packages -------------------- 35.89s
2025-05-31 19:44:15.303913 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 34.94s
2025-05-31 19:44:15.304207 | orchestrator | osism.commons.repository : Update package cache ------------------------ 14.35s
2025-05-31 19:44:15.304634 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 11.62s
2025-05-31 19:44:15.305114 | orchestrator | osism.services.docker : Install docker package ------------------------- 11.60s
2025-05-31 19:44:15.305492 | orchestrator | osism.commons.packages : Upgrade packages ------------------------------ 11.34s
2025-05-31 19:44:15.305935 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 11.22s
2025-05-31 19:44:15.306317 | orchestrator | osism.services.docker : Install apt-transport-https package ------------ 10.94s
2025-05-31 19:44:15.307525 | orchestrator | osism.services.docker : Install containerd package --------------------- 10.23s
2025-05-31 19:44:15.308523 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.49s
2025-05-31 19:44:15.309060 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.33s
2025-05-31 19:44:15.309662 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.57s
2025-05-31 19:44:15.310103 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.51s
2025-05-31 19:44:15.310877 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 8.31s
2025-05-31 19:44:15.311268 | orchestrator | osism.services.rng : Install rng package -------------------------------- 7.95s
2025-05-31 19:44:15.311883 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.92s
2025-05-31 19:44:15.311960 | orchestrator | osism.services.docker : Add repository ---------------------------------- 7.76s
2025-05-31 19:44:15.312828 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 6.56s
2025-05-31 19:44:15.313663 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.79s
2025-05-31 19:44:15.994971 | orchestrator | + [[ -e /etc/redhat-release ]]
2025-05-31 19:44:15.995061 | orchestrator | + osism apply network
2025-05-31 19:44:18.096093 | orchestrator | Registering Redlock._acquired_script
2025-05-31 19:44:18.096174 | orchestrator | Registering Redlock._extend_script
2025-05-31 19:44:18.096184 | orchestrator | Registering Redlock._release_script
2025-05-31 19:44:18.158942 | orchestrator | 2025-05-31 19:44:18 | INFO  | Task 0d8a44b9-2938-4c08-8de7-e79ddaad1f08 (network) was prepared for execution.
2025-05-31 19:44:18.159039 | orchestrator | 2025-05-31 19:44:18 | INFO  | It takes a moment until task 0d8a44b9-2938-4c08-8de7-e79ddaad1f08 (network) has been started and output is visible here.
2025-05-31 19:44:22.322837 | orchestrator |
2025-05-31 19:44:22.323134 | orchestrator | PLAY [Apply role network] ******************************************************
2025-05-31 19:44:22.323380 | orchestrator |
2025-05-31 19:44:22.324531 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2025-05-31 19:44:22.326213 | orchestrator | Saturday 31 May 2025 19:44:22 +0000 (0:00:00.293) 0:00:00.293 **********
2025-05-31 19:44:22.470374 | orchestrator | ok: [testbed-manager]
2025-05-31 19:44:22.546219 | orchestrator | ok: [testbed-node-0]
2025-05-31 19:44:22.621359 | orchestrator | ok: [testbed-node-1]
2025-05-31 19:44:22.695511 | orchestrator | ok: [testbed-node-2]
2025-05-31 19:44:22.875684 | orchestrator | ok: [testbed-node-3]
2025-05-31 19:44:22.996197 | orchestrator | ok: [testbed-node-4]
2025-05-31 19:44:22.996382 | orchestrator | ok: [testbed-node-5]
2025-05-31 19:44:22.997367 | orchestrator |
2025-05-31 19:44:22.997774 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2025-05-31 19:44:22.998375 | orchestrator | Saturday 31 May 2025 19:44:22 +0000 (0:00:00.672) 0:00:00.965 **********
2025-05-31 19:44:24.143345 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-31 19:44:24.143609 | orchestrator |
2025-05-31 19:44:24.144273 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2025-05-31 19:44:24.145093 | orchestrator | Saturday 31 May 2025 19:44:24 +0000 (0:00:01.146) 0:00:02.112 **********
2025-05-31 19:44:26.027007 | orchestrator | ok: [testbed-manager]
2025-05-31 19:44:26.027233 | orchestrator | ok: [testbed-node-1]
2025-05-31 19:44:26.031135 | orchestrator | ok: [testbed-node-2]
2025-05-31 19:44:26.032605 | orchestrator | ok: [testbed-node-0]
2025-05-31 19:44:26.033314 | orchestrator | ok: [testbed-node-3]
2025-05-31 19:44:26.034224 | orchestrator | ok: [testbed-node-4]
2025-05-31 19:44:26.036517 | orchestrator | ok: [testbed-node-5]
2025-05-31 19:44:26.036987 | orchestrator |
2025-05-31 19:44:26.037440 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2025-05-31 19:44:26.040541 | orchestrator | Saturday 31 May 2025 19:44:26 +0000 (0:00:01.885) 0:00:03.998 **********
2025-05-31 19:44:27.702948 | orchestrator | ok: [testbed-manager]
2025-05-31 19:44:27.703810 | orchestrator | ok: [testbed-node-0]
2025-05-31 19:44:27.708023 | orchestrator | ok: [testbed-node-1]
2025-05-31 19:44:27.708363 | orchestrator | ok: [testbed-node-2]
2025-05-31 19:44:27.709236 | orchestrator | ok: [testbed-node-3]
2025-05-31 19:44:27.710179 | orchestrator | ok: [testbed-node-4]
2025-05-31 19:44:27.711624 | orchestrator | ok: [testbed-node-5]
2025-05-31 19:44:27.711958 | orchestrator |
2025-05-31 19:44:27.712896 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2025-05-31 19:44:27.715559 | orchestrator | Saturday 31 May 2025 19:44:27 +0000 (0:00:01.672) 0:00:05.670 **********
2025-05-31 19:44:28.227800 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2025-05-31 19:44:28.228525 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2025-05-31 19:44:28.229159 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2025-05-31 19:44:28.657068 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2025-05-31 19:44:28.658187 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2025-05-31 19:44:28.659281 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2025-05-31 19:44:28.660353 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2025-05-31 19:44:28.661363 | orchestrator |
2025-05-31 19:44:28.662452 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2025-05-31 19:44:28.663624 | orchestrator | Saturday 31 May 2025 19:44:28 +0000 (0:00:00.958) 0:00:06.629 **********
2025-05-31 19:44:31.782444 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-05-31 19:44:31.783855 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-05-31 19:44:31.784876 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-05-31 19:44:31.786914 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-05-31 19:44:31.787698 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-05-31 19:44:31.788608 | orchestrator | ok: [testbed-manager -> localhost]
2025-05-31 19:44:31.789789 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-05-31 19:44:31.790339 | orchestrator |
2025-05-31 19:44:31.791000 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2025-05-31 19:44:31.791805 | orchestrator | Saturday 31 May 2025 19:44:31 +0000 (0:00:03.123) 0:00:09.752 **********
2025-05-31 19:44:33.198826 | orchestrator | changed: [testbed-manager]
2025-05-31 19:44:33.200193 | orchestrator | changed: [testbed-node-0]
2025-05-31 19:44:33.200661 | orchestrator | changed: [testbed-node-1]
2025-05-31 19:44:33.202174 | orchestrator | changed: [testbed-node-2]
2025-05-31 19:44:33.203009 | orchestrator | changed: [testbed-node-3]
2025-05-31 19:44:33.204128 | orchestrator | changed: [testbed-node-4]
2025-05-31 19:44:33.204967 | orchestrator | changed: [testbed-node-5]
2025-05-31 19:44:33.205520 | orchestrator |
2025-05-31 19:44:33.206540 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] ***********
2025-05-31 19:44:33.207236 | orchestrator | Saturday 31 May 2025 19:44:33 +0000 (0:00:01.417) 0:00:11.170 **********
2025-05-31 19:44:34.853231 | orchestrator | ok: [testbed-manager -> localhost]
2025-05-31 19:44:34.854757 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-05-31 19:44:34.856045 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-05-31 19:44:34.857330 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-05-31 19:44:34.857843 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-05-31 19:44:34.858854 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-05-31 19:44:34.859773 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-05-31 19:44:34.859973 | orchestrator |
2025-05-31 19:44:34.860864 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] *********
2025-05-31 19:44:34.861223 | orchestrator | Saturday 31 May 2025 19:44:34 +0000 (0:00:01.655) 0:00:12.826 **********
2025-05-31 19:44:35.238504 | orchestrator | ok: [testbed-manager]
2025-05-31 19:44:35.911150 | orchestrator | ok: [testbed-node-0]
2025-05-31 19:44:35.911258 | orchestrator | ok: [testbed-node-1]
2025-05-31 19:44:35.914109 | orchestrator | ok: [testbed-node-3]
2025-05-31 19:44:35.914146 | orchestrator | ok: [testbed-node-2]
2025-05-31 19:44:35.914272 | orchestrator | ok: [testbed-node-4]
2025-05-31 19:44:35.914375 | orchestrator | ok: [testbed-node-5]
2025-05-31 19:44:35.915154 | orchestrator |
2025-05-31 19:44:35.916270 | orchestrator | TASK [osism.commons.network : Copy interfaces file] ****************************
2025-05-31 19:44:35.916707 | orchestrator | Saturday 31 May 2025 19:44:35 +0000 (0:00:01.052) 0:00:13.878 **********
2025-05-31 19:44:36.076727 | orchestrator | skipping: [testbed-manager]
2025-05-31 19:44:36.160888 | orchestrator | skipping: [testbed-node-0]
2025-05-31 19:44:36.241144 | orchestrator | skipping: [testbed-node-1]
2025-05-31 19:44:36.325209 | orchestrator | skipping: [testbed-node-2]
2025-05-31 19:44:36.404103 | orchestrator | skipping: [testbed-node-3]
2025-05-31 19:44:36.542959 | orchestrator | skipping: [testbed-node-4]
2025-05-31 19:44:36.543959 | orchestrator | skipping: [testbed-node-5]
2025-05-31 19:44:36.544542 | orchestrator |
2025-05-31 19:44:36.545442 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] *************
2025-05-31 19:44:36.548557 | orchestrator | Saturday 31 May 2025 19:44:36 +0000 (0:00:00.633) 0:00:14.512 **********
2025-05-31 19:44:38.697991 | orchestrator | ok: [testbed-manager]
2025-05-31 19:44:38.698856 | orchestrator | ok: [testbed-node-1]
2025-05-31 19:44:38.699136 | orchestrator | ok: [testbed-node-2]
2025-05-31 19:44:38.704269 | orchestrator | ok: [testbed-node-0]
2025-05-31 19:44:38.704424 | orchestrator | ok: [testbed-node-3]
2025-05-31 19:44:38.704949 | orchestrator | ok: [testbed-node-4]
2025-05-31 19:44:38.706217 | orchestrator | ok: [testbed-node-5]
2025-05-31 19:44:38.707037 | orchestrator |
2025-05-31 19:44:38.707732 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] *************************
2025-05-31 19:44:38.708421 | orchestrator | Saturday 31 May 2025 19:44:38 +0000 (0:00:02.153) 0:00:16.665 **********
2025-05-31 19:44:38.953621 | orchestrator | skipping: [testbed-node-0]
2025-05-31 19:44:39.035818 | orchestrator | skipping: [testbed-node-1]
2025-05-31 19:44:39.133589 | orchestrator | skipping: [testbed-node-2]
2025-05-31 19:44:39.230611 | orchestrator | skipping: [testbed-node-3]
2025-05-31 19:44:39.593251 | orchestrator | skipping: [testbed-node-4]
2025-05-31 19:44:39.593423 | orchestrator | skipping: [testbed-node-5]
2025-05-31 19:44:39.594737 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'})
2025-05-31 19:44:39.596032 | orchestrator |
2025-05-31 19:44:39.597457 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] **************
2025-05-31 19:44:39.598749 | orchestrator | Saturday 31 May 2025 19:44:39 +0000 (0:00:00.895) 0:00:17.561 **********
2025-05-31 19:44:41.348737 | orchestrator | ok: [testbed-manager]
2025-05-31 19:44:41.349631 | orchestrator | changed: [testbed-node-0]
2025-05-31 19:44:41.349658 | orchestrator | changed: [testbed-node-2]
2025-05-31 19:44:41.349763 | orchestrator | changed: [testbed-node-1]
2025-05-31 19:44:41.350839 | orchestrator | changed: [testbed-node-3]
2025-05-31 19:44:41.351646 | orchestrator | changed: [testbed-node-4]
2025-05-31 19:44:41.352556 | orchestrator | changed: [testbed-node-5]
2025-05-31 19:44:41.354275 | orchestrator |
2025-05-31 19:44:41.355401 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] ***************************
2025-05-31 19:44:41.356339 | orchestrator | Saturday 31 May 2025 19:44:41 +0000 (0:00:01.752) 0:00:19.313 **********
2025-05-31 19:44:42.573710 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-31 19:44:42.573815 | orchestrator |
2025-05-31 19:44:42.573936 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2025-05-31 19:44:42.574430 | orchestrator | Saturday 31 May 2025 19:44:42 +0000 (0:00:01.223) 0:00:20.537 **********
2025-05-31 19:44:43.508013 | orchestrator | ok: [testbed-manager]
2025-05-31 19:44:43.510225 | orchestrator | ok: [testbed-node-0]
2025-05-31 19:44:43.512111 | orchestrator | ok: [testbed-node-1]
2025-05-31 19:44:43.513622 | orchestrator | ok: [testbed-node-2]
2025-05-31 19:44:43.515454 | orchestrator | ok: [testbed-node-3]
2025-05-31 19:44:43.516696 | orchestrator | ok: [testbed-node-4]
2025-05-31 19:44:43.517586 | orchestrator | ok: [testbed-node-5]
2025-05-31 19:44:43.519729 | orchestrator |
2025-05-31 19:44:43.520550 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] ***************
2025-05-31 19:44:43.521200 | orchestrator | Saturday 31 May 2025 19:44:43 +0000 (0:00:00.942) 0:00:21.480 **********
2025-05-31 19:44:43.845130 | orchestrator | ok: [testbed-manager]
2025-05-31 19:44:43.928210 | orchestrator | ok: [testbed-node-0]
2025-05-31 19:44:44.025148 | orchestrator | ok: [testbed-node-1]
2025-05-31 19:44:44.108405 | orchestrator | ok: [testbed-node-2]
2025-05-31 19:44:44.193912 | orchestrator | ok: [testbed-node-3]
2025-05-31 19:44:44.331935 | orchestrator | ok: [testbed-node-4]
2025-05-31 19:44:44.332922 | orchestrator | ok: [testbed-node-5]
2025-05-31 19:44:44.336569 | orchestrator |
2025-05-31 19:44:44.337840 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2025-05-31 19:44:44.338783 | orchestrator | Saturday 31 May 2025 19:44:44 +0000 (0:00:00.823) 0:00:22.303 **********
2025-05-31 19:44:44.761462 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml)
2025-05-31 19:44:44.761646 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)
2025-05-31 19:44:44.854115 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml)
2025-05-31 19:44:44.854652 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)
2025-05-31 19:44:44.857460 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml)
2025-05-31 19:44:45.535982 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)
2025-05-31 19:44:45.536145 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml)
2025-05-31 19:44:45.536935 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)
2025-05-31 19:44:45.541257 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml)
2025-05-31 19:44:45.541802 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)
2025-05-31 19:44:45.543920 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml)
2025-05-31 19:44:45.544810 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)
2025-05-31 19:44:45.545996 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml)
2025-05-31 19:44:45.546601 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)
2025-05-31 19:44:45.547587 | orchestrator |
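The cleanup above removes the cloud-init generated /etc/netplan/50-cloud-init.yaml and keeps the role-managed /etc/netplan/01-osism.yaml. The tasks that follow create systemd-networkd .netdev files for the VXLAN overlays; a sketch of what such a file could look like for the vxlan0 item visible below, with VNI, MTU, and local IP copied from the log item while the file name and layout are assumptions (the remote peers from 'dests' would be wired up separately, e.g. as FDB entries):

- name: Create systemd networkd netdev files (sketch)
  ansible.builtin.copy:
    dest: /etc/systemd/network/vxlan0.netdev  # assumed file name
    content: |
      [NetDev]
      Name=vxlan0
      Kind=vxlan
      MTUBytes=1350

      [VXLAN]
      VNI=42
      Local=192.168.16.5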
19:44:45.535982 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2025-05-31 19:44:45.536145 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-31 19:44:45.536935 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2025-05-31 19:44:45.541257 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-31 19:44:45.541802 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2025-05-31 19:44:45.543920 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-31 19:44:45.544810 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2025-05-31 19:44:45.545996 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-31 19:44:45.546601 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2025-05-31 19:44:45.547587 | orchestrator | 2025-05-31 19:44:45.548842 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2025-05-31 19:44:45.549314 | orchestrator | Saturday 31 May 2025 19:44:45 +0000 (0:00:01.199) 0:00:23.503 ********** 2025-05-31 19:44:45.747406 | orchestrator | skipping: [testbed-manager] 2025-05-31 19:44:45.838532 | orchestrator | skipping: [testbed-node-0] 2025-05-31 19:44:45.929824 | orchestrator | skipping: [testbed-node-1] 2025-05-31 19:44:46.019326 | orchestrator | skipping: [testbed-node-2] 2025-05-31 19:44:46.103389 | orchestrator | skipping: [testbed-node-3] 2025-05-31 19:44:46.223176 | orchestrator | skipping: [testbed-node-4] 2025-05-31 19:44:46.223712 | orchestrator | skipping: [testbed-node-5] 2025-05-31 19:44:46.224875 | orchestrator | 2025-05-31 19:44:46.227875 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2025-05-31 19:44:46.228035 | orchestrator | Saturday 31 May 2025 19:44:46 +0000 (0:00:00.692) 0:00:24.195 ********** 2025-05-31 19:44:49.830252 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-31 19:44:49.830848 | orchestrator | 2025-05-31 19:44:49.832241 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2025-05-31 19:44:49.832938 | orchestrator | Saturday 31 May 2025 19:44:49 +0000 (0:00:03.602) 0:00:27.798 ********** 2025-05-31 19:44:54.730366 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-05-31 19:44:54.733326 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-05-31 19:44:54.733369 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-05-31 19:44:54.733383 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-05-31 19:44:54.733613 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-05-31 19:44:54.734670 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-05-31 19:44:54.736162 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-05-31 19:44:54.736707 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-05-31 19:44:54.737163 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-05-31 19:44:54.737814 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-05-31 19:44:54.738224 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-05-31 19:44:54.738965 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-05-31 19:44:54.739138 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-05-31 19:44:54.739783 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-05-31 19:44:54.740244 | orchestrator | 2025-05-31 19:44:54.740677 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2025-05-31 19:44:54.741220 | orchestrator | Saturday 31 May 2025 19:44:54 +0000 (0:00:04.898) 
0:00:32.697 ********** 2025-05-31 19:44:59.411692 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-05-31 19:44:59.411948 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-05-31 19:44:59.413117 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-05-31 19:44:59.415666 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-05-31 19:44:59.417798 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-05-31 19:44:59.417838 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-05-31 19:44:59.419570 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-05-31 19:44:59.423886 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-05-31 19:44:59.423941 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-05-31 19:44:59.423983 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-05-31 19:44:59.423996 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-05-31 19:44:59.424008 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', 
'192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-05-31 19:44:59.424019 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-05-31 19:44:59.424031 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-05-31 19:44:59.424042 | orchestrator | 2025-05-31 19:44:59.424055 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2025-05-31 19:44:59.424068 | orchestrator | Saturday 31 May 2025 19:44:59 +0000 (0:00:04.683) 0:00:37.381 ********** 2025-05-31 19:45:00.651991 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-31 19:45:00.652093 | orchestrator | 2025-05-31 19:45:00.652841 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-05-31 19:45:00.657622 | orchestrator | Saturday 31 May 2025 19:45:00 +0000 (0:00:01.238) 0:00:38.620 ********** 2025-05-31 19:45:01.116185 | orchestrator | ok: [testbed-manager] 2025-05-31 19:45:01.856151 | orchestrator | ok: [testbed-node-0] 2025-05-31 19:45:01.856256 | orchestrator | ok: [testbed-node-1] 2025-05-31 19:45:01.857213 | orchestrator | ok: [testbed-node-2] 2025-05-31 19:45:01.860754 | orchestrator | ok: [testbed-node-3] 2025-05-31 19:45:01.861157 | orchestrator | ok: [testbed-node-4] 2025-05-31 19:45:01.861811 | orchestrator | ok: [testbed-node-5] 2025-05-31 19:45:01.862885 | orchestrator | 2025-05-31 19:45:01.863716 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-05-31 19:45:01.864341 | orchestrator | Saturday 31 May 2025 19:45:01 +0000 (0:00:01.206) 0:00:39.826 ********** 2025-05-31 19:45:01.948470 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2025-05-31 19:45:01.948732 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-05-31 19:45:01.948825 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2025-05-31 19:45:02.041317 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-05-31 19:45:02.041683 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2025-05-31 19:45:02.041960 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-05-31 19:45:02.042995 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2025-05-31 19:45:02.043023 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-05-31 19:45:02.144797 | orchestrator | skipping: [testbed-manager] 2025-05-31 19:45:02.144925 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2025-05-31 19:45:02.145065 | orchestrator | skipping: [testbed-node-1] => 
(item=/etc/systemd/network/30-vxlan0.netdev)  2025-05-31 19:45:02.145707 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2025-05-31 19:45:02.146557 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-05-31 19:45:02.236089 | orchestrator | skipping: [testbed-node-0] 2025-05-31 19:45:02.237240 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2025-05-31 19:45:02.237926 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-05-31 19:45:02.239048 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2025-05-31 19:45:02.240013 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-05-31 19:45:02.320904 | orchestrator | skipping: [testbed-node-1] 2025-05-31 19:45:02.321558 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2025-05-31 19:45:02.322387 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-05-31 19:45:02.325883 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2025-05-31 19:45:02.419722 | orchestrator | skipping: [testbed-node-2] 2025-05-31 19:45:02.420243 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-05-31 19:45:02.420803 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2025-05-31 19:45:02.424984 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-05-31 19:45:02.425017 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2025-05-31 19:45:02.425030 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-05-31 19:45:03.794349 | orchestrator | skipping: [testbed-node-3] 2025-05-31 19:45:03.794524 | orchestrator | skipping: [testbed-node-4] 2025-05-31 19:45:03.795539 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2025-05-31 19:45:03.796453 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-05-31 19:45:03.800782 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2025-05-31 19:45:03.801373 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-05-31 19:45:03.801792 | orchestrator | skipping: [testbed-node-5] 2025-05-31 19:45:03.802383 | orchestrator | 2025-05-31 19:45:03.803199 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2025-05-31 19:45:03.804271 | orchestrator | Saturday 31 May 2025 19:45:03 +0000 (0:00:01.936) 0:00:41.763 ********** 2025-05-31 19:45:03.957824 | orchestrator | skipping: [testbed-manager] 2025-05-31 19:45:04.037573 | orchestrator | skipping: [testbed-node-0] 2025-05-31 19:45:04.119779 | orchestrator | skipping: [testbed-node-1] 2025-05-31 19:45:04.202077 | orchestrator | skipping: [testbed-node-2] 2025-05-31 19:45:04.279349 | orchestrator | skipping: [testbed-node-3] 2025-05-31 19:45:04.387667 | orchestrator | skipping: [testbed-node-4] 2025-05-31 19:45:04.388554 | orchestrator | skipping: [testbed-node-5] 2025-05-31 19:45:04.388847 | orchestrator | 2025-05-31 19:45:04.389646 | orchestrator | RUNNING HANDLER [osism.commons.network : 
Netplan configuration changed] ******** 2025-05-31 19:45:04.390747 | orchestrator | Saturday 31 May 2025 19:45:04 +0000 (0:00:00.597) 0:00:42.360 ********** 2025-05-31 19:45:04.544758 | orchestrator | skipping: [testbed-manager] 2025-05-31 19:45:04.624361 | orchestrator | skipping: [testbed-node-0] 2025-05-31 19:45:04.865354 | orchestrator | skipping: [testbed-node-1] 2025-05-31 19:45:04.948539 | orchestrator | skipping: [testbed-node-2] 2025-05-31 19:45:05.031228 | orchestrator | skipping: [testbed-node-3] 2025-05-31 19:45:05.061572 | orchestrator | skipping: [testbed-node-4] 2025-05-31 19:45:05.061811 | orchestrator | skipping: [testbed-node-5] 2025-05-31 19:45:05.063078 | orchestrator | 2025-05-31 19:45:05.063100 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-31 19:45:05.063122 | orchestrator | 2025-05-31 19:45:05 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-31 19:45:05.063794 | orchestrator | 2025-05-31 19:45:05 | INFO  | Please wait and do not abort execution. 2025-05-31 19:45:05.064320 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-31 19:45:05.064914 | orchestrator | testbed-node-0 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-31 19:45:05.065548 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-31 19:45:05.066088 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-31 19:45:05.066423 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-31 19:45:05.066984 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-31 19:45:05.067377 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-31 19:45:05.067903 | orchestrator | 2025-05-31 19:45:05.069321 | orchestrator | 2025-05-31 19:45:05.069552 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-31 19:45:05.069568 | orchestrator | Saturday 31 May 2025 19:45:05 +0000 (0:00:00.674) 0:00:43.035 ********** 2025-05-31 19:45:05.069575 | orchestrator | =============================================================================== 2025-05-31 19:45:05.069972 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 4.90s 2025-05-31 19:45:05.070437 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 4.68s 2025-05-31 19:45:05.070881 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 3.60s 2025-05-31 19:45:05.071391 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.12s 2025-05-31 19:45:05.071985 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.15s 2025-05-31 19:45:05.072713 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.94s 2025-05-31 19:45:05.072922 | orchestrator | osism.commons.network : Install required packages ----------------------- 1.89s 2025-05-31 19:45:05.073443 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.75s 2025-05-31 19:45:05.073878 | orchestrator | osism.commons.network : Remove 
ifupdown package ------------------------- 1.67s 2025-05-31 19:45:05.074308 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.66s 2025-05-31 19:45:05.074703 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.42s 2025-05-31 19:45:05.074992 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.24s 2025-05-31 19:45:05.075417 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.22s 2025-05-31 19:45:05.075942 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.21s 2025-05-31 19:45:05.076257 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.20s 2025-05-31 19:45:05.076799 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.15s 2025-05-31 19:45:05.077063 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.05s 2025-05-31 19:45:05.077632 | orchestrator | osism.commons.network : Create required directories --------------------- 0.96s 2025-05-31 19:45:05.077907 | orchestrator | osism.commons.network : List existing configuration files --------------- 0.94s 2025-05-31 19:45:05.078297 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.90s 2025-05-31 19:45:05.655088 | orchestrator | + osism apply wireguard 2025-05-31 19:45:07.308836 | orchestrator | Registering Redlock._acquired_script 2025-05-31 19:45:07.308958 | orchestrator | Registering Redlock._extend_script 2025-05-31 19:45:07.308983 | orchestrator | Registering Redlock._release_script 2025-05-31 19:45:07.365806 | orchestrator | 2025-05-31 19:45:07 | INFO  | Task 1b53d4e5-56f7-4768-a06b-3385eeb37a53 (wireguard) was prepared for execution. 2025-05-31 19:45:07.365912 | orchestrator | 2025-05-31 19:45:07 | INFO  | It takes a moment until task 1b53d4e5-56f7-4768-a06b-3385eeb37a53 (wireguard) has been started and output is visible here. 
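(Note: the "Create systemd networkd netdev files" and "network files" tasks above render the /etc/systemd/network/30-vxlan0 and 30-vxlan1 unit pairs that the cleanup task then checks for. A minimal hand-written sketch of such a pair for vxlan0 on testbed-manager follows; VNI 42, MTU 1350, local endpoint 192.168.16.5 and address 192.168.112.5/20 are taken from the log, while the bridge fdb loop for the per-peer "dests" list is an assumption, since the role's actual templates are not visible here.)

cat > /etc/systemd/network/30-vxlan0.netdev <<'EOF'
[NetDev]
Name=vxlan0
Kind=vxlan
MTUBytes=1350

[VXLAN]
VNI=42
Local=192.168.16.5
EOF

cat > /etc/systemd/network/30-vxlan0.network <<'EOF'
[Match]
Name=vxlan0

[Network]
Address=192.168.112.5/20
EOF

# The uplink's .network would also need a VXLAN=vxlan0 entry to attach
# the tunnel; reloading matches the "Reload systemd-networkd" handler:
systemctl reload systemd-networkd

# Assumed unicast flooding setup for the "dests" list: one all-zeros FDB
# entry per remote VTEP.
for dst in 192.168.16.10 192.168.16.11 192.168.16.12 192.168.16.13 192.168.16.14 192.168.16.15; do
    bridge fdb append 00:00:00:00:00:00 dev vxlan0 dst "$dst"
done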
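(The wireguard play that starts next creates a server key pair plus a preshared key and renders /etc/wireguard/wg0.conf and per-client configuration files. A hand-rolled equivalent using standard wireguard-tools is sketched below; the tunnel subnet 192.168.48.0/24 and the default port 51820 are assumptions, as neither value is visible in this log.)

umask 077
wg genkey | tee /etc/wireguard/server.key | wg pubkey > /etc/wireguard/server.pub
wg genkey | tee /etc/wireguard/client.key | wg pubkey > /etc/wireguard/client.pub
wg genpsk > /etc/wireguard/psk

cat > /etc/wireguard/wg0.conf <<EOF
[Interface]
PrivateKey = $(cat /etc/wireguard/server.key)
Address = 192.168.48.1/24
ListenPort = 51820

[Peer]
PublicKey = $(cat /etc/wireguard/client.pub)
PresharedKey = $(cat /etc/wireguard/psk)
AllowedIPs = 192.168.48.2/32
EOF

# Matches the "Manage wg-quick@wg0.service service" task and the
# "Restart wg0 service" handler below:
systemctl enable --now wg-quick@wg0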
2025-05-31 19:45:11.386929 | orchestrator | 2025-05-31 19:45:11.387526 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2025-05-31 19:45:11.389220 | orchestrator | 2025-05-31 19:45:11.389452 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2025-05-31 19:45:11.390011 | orchestrator | Saturday 31 May 2025 19:45:11 +0000 (0:00:00.233) 0:00:00.233 ********** 2025-05-31 19:45:12.826280 | orchestrator | ok: [testbed-manager] 2025-05-31 19:45:12.827310 | orchestrator | 2025-05-31 19:45:12.828432 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2025-05-31 19:45:12.830848 | orchestrator | Saturday 31 May 2025 19:45:12 +0000 (0:00:01.439) 0:00:01.672 ********** 2025-05-31 19:45:18.912039 | orchestrator | changed: [testbed-manager] 2025-05-31 19:45:18.912156 | orchestrator | 2025-05-31 19:45:18.913093 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2025-05-31 19:45:18.913798 | orchestrator | Saturday 31 May 2025 19:45:18 +0000 (0:00:06.086) 0:00:07.758 ********** 2025-05-31 19:45:19.456011 | orchestrator | changed: [testbed-manager] 2025-05-31 19:45:19.457765 | orchestrator | 2025-05-31 19:45:19.458814 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2025-05-31 19:45:19.459686 | orchestrator | Saturday 31 May 2025 19:45:19 +0000 (0:00:00.543) 0:00:08.301 ********** 2025-05-31 19:45:19.917905 | orchestrator | changed: [testbed-manager] 2025-05-31 19:45:19.918249 | orchestrator | 2025-05-31 19:45:19.919158 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2025-05-31 19:45:19.919817 | orchestrator | Saturday 31 May 2025 19:45:19 +0000 (0:00:00.464) 0:00:08.765 ********** 2025-05-31 19:45:20.439690 | orchestrator | ok: [testbed-manager] 2025-05-31 19:45:20.440425 | orchestrator | 2025-05-31 19:45:20.441396 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2025-05-31 19:45:20.441856 | orchestrator | Saturday 31 May 2025 19:45:20 +0000 (0:00:00.520) 0:00:09.286 ********** 2025-05-31 19:45:20.968766 | orchestrator | ok: [testbed-manager] 2025-05-31 19:45:20.969058 | orchestrator | 2025-05-31 19:45:20.969520 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2025-05-31 19:45:20.969913 | orchestrator | Saturday 31 May 2025 19:45:20 +0000 (0:00:00.531) 0:00:09.817 ********** 2025-05-31 19:45:21.394392 | orchestrator | ok: [testbed-manager] 2025-05-31 19:45:21.394599 | orchestrator | 2025-05-31 19:45:21.395305 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2025-05-31 19:45:21.395985 | orchestrator | Saturday 31 May 2025 19:45:21 +0000 (0:00:00.423) 0:00:10.241 ********** 2025-05-31 19:45:22.532923 | orchestrator | changed: [testbed-manager] 2025-05-31 19:45:22.533465 | orchestrator | 2025-05-31 19:45:22.534076 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2025-05-31 19:45:22.536084 | orchestrator | Saturday 31 May 2025 19:45:22 +0000 (0:00:01.137) 0:00:11.378 ********** 2025-05-31 19:45:23.413146 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-31 19:45:23.413596 | orchestrator | changed: [testbed-manager] 2025-05-31 19:45:23.416566 | orchestrator | 2025-05-31 19:45:23.417150 | orchestrator | TASK 
[osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2025-05-31 19:45:23.417866 | orchestrator | Saturday 31 May 2025 19:45:23 +0000 (0:00:00.881) 0:00:12.259 ********** 2025-05-31 19:45:25.069878 | orchestrator | changed: [testbed-manager] 2025-05-31 19:45:25.071567 | orchestrator | 2025-05-31 19:45:25.071683 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2025-05-31 19:45:25.072053 | orchestrator | Saturday 31 May 2025 19:45:25 +0000 (0:00:01.657) 0:00:13.917 ********** 2025-05-31 19:45:26.971460 | orchestrator | changed: [testbed-manager] 2025-05-31 19:45:26.971765 | orchestrator | 2025-05-31 19:45:26.972048 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-31 19:45:26.972748 | orchestrator | 2025-05-31 19:45:26 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-31 19:45:26.972773 | orchestrator | 2025-05-31 19:45:26 | INFO  | Please wait and do not abort execution. 2025-05-31 19:45:26.973373 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-31 19:45:26.974473 | orchestrator | 2025-05-31 19:45:26.975957 | orchestrator | 2025-05-31 19:45:26.976336 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-31 19:45:26.977283 | orchestrator | Saturday 31 May 2025 19:45:26 +0000 (0:00:01.901) 0:00:15.819 ********** 2025-05-31 19:45:26.977807 | orchestrator | =============================================================================== 2025-05-31 19:45:26.978398 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.09s 2025-05-31 19:45:26.979180 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 1.90s 2025-05-31 19:45:26.979704 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.66s 2025-05-31 19:45:26.980904 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.44s 2025-05-31 19:45:26.981296 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.14s 2025-05-31 19:45:26.981827 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.88s 2025-05-31 19:45:26.982325 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.54s 2025-05-31 19:45:26.982789 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.53s 2025-05-31 19:45:26.983253 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.52s 2025-05-31 19:45:26.983808 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.46s 2025-05-31 19:45:26.984245 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.42s 2025-05-31 19:45:27.522914 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2025-05-31 19:45:27.560559 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2025-05-31 19:45:27.560624 | orchestrator | Dload Upload Total Spent Left Speed 2025-05-31 19:45:27.642893 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 14 100 14 0 0 169 0 --:--:-- --:--:-- --:--:-- 168 2025-05-31 19:45:27.655972 | orchestrator | + 
osism apply --environment custom workarounds 2025-05-31 19:45:29.344930 | orchestrator | 2025-05-31 19:45:29 | INFO  | Trying to run play workarounds in environment custom 2025-05-31 19:45:29.350158 | orchestrator | Registering Redlock._acquired_script 2025-05-31 19:45:29.350221 | orchestrator | Registering Redlock._extend_script 2025-05-31 19:45:29.350236 | orchestrator | Registering Redlock._release_script 2025-05-31 19:45:29.407530 | orchestrator | 2025-05-31 19:45:29 | INFO  | Task 9a092d36-b767-4cd4-96ae-a10daeb6235b (workarounds) was prepared for execution. 2025-05-31 19:45:29.407601 | orchestrator | 2025-05-31 19:45:29 | INFO  | It takes a moment until task 9a092d36-b767-4cd4-96ae-a10daeb6235b (workarounds) has been started and output is visible here. 2025-05-31 19:45:33.271288 | orchestrator | 2025-05-31 19:45:33.271635 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-31 19:45:33.272783 | orchestrator | 2025-05-31 19:45:33.275277 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2025-05-31 19:45:33.276094 | orchestrator | Saturday 31 May 2025 19:45:33 +0000 (0:00:00.142) 0:00:00.142 ********** 2025-05-31 19:45:33.433327 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2025-05-31 19:45:33.514803 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2025-05-31 19:45:33.598323 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2025-05-31 19:45:33.679167 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2025-05-31 19:45:33.841078 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2025-05-31 19:45:33.970250 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2025-05-31 19:45:33.970671 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2025-05-31 19:45:33.971745 | orchestrator | 2025-05-31 19:45:33.972324 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2025-05-31 19:45:33.973017 | orchestrator | 2025-05-31 19:45:33.973538 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-05-31 19:45:33.974091 | orchestrator | Saturday 31 May 2025 19:45:33 +0000 (0:00:00.703) 0:00:00.846 ********** 2025-05-31 19:45:36.141285 | orchestrator | ok: [testbed-manager] 2025-05-31 19:45:36.141949 | orchestrator | 2025-05-31 19:45:36.142331 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2025-05-31 19:45:36.143044 | orchestrator | 2025-05-31 19:45:36.145606 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-05-31 19:45:36.146134 | orchestrator | Saturday 31 May 2025 19:45:36 +0000 (0:00:02.166) 0:00:03.012 ********** 2025-05-31 19:45:37.892063 | orchestrator | ok: [testbed-node-3] 2025-05-31 19:45:37.892170 | orchestrator | ok: [testbed-node-4] 2025-05-31 19:45:37.892937 | orchestrator | ok: [testbed-node-5] 2025-05-31 19:45:37.897068 | orchestrator | ok: [testbed-node-1] 2025-05-31 19:45:37.897105 | orchestrator | ok: [testbed-node-0] 2025-05-31 19:45:37.897118 | orchestrator | ok: [testbed-node-2] 2025-05-31 19:45:37.897130 | orchestrator | 2025-05-31 19:45:37.897186 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2025-05-31 19:45:37.897926 | orchestrator | 
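(The play below copies the testbed CA onto the nodes and refreshes the system trust store. The manual Debian/Ubuntu equivalent is sketched here; the drop-in directory /usr/local/share/ca-certificates/ is the distribution default and an assumption, as the play's actual destination path is not shown in the log.)

install -m 0644 /opt/configuration/environments/kolla/certificates/ca/testbed.crt \
    /usr/local/share/ca-certificates/testbed.crt
update-ca-certificates

# RedHat-family equivalent, which is why the "Run update-ca-trust" task
# is skipped on these Ubuntu nodes:
# install -m 0644 testbed.crt /etc/pki/ca-trust/source/anchors/testbed.crt
# update-ca-trust extract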
2025-05-31 19:45:37.898701 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2025-05-31 19:45:37.899509 | orchestrator | Saturday 31 May 2025 19:45:37 +0000 (0:00:01.751) 0:00:04.763 ********** 2025-05-31 19:45:39.365387 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-31 19:45:39.365544 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-31 19:45:39.366113 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-31 19:45:39.367920 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-31 19:45:39.370137 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-31 19:45:39.370193 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-05-31 19:45:39.370791 | orchestrator | 2025-05-31 19:45:39.371681 | orchestrator | TASK [Run update-ca-certificates] ********************************************** 2025-05-31 19:45:39.372269 | orchestrator | Saturday 31 May 2025 19:45:39 +0000 (0:00:01.470) 0:00:06.234 ********** 2025-05-31 19:45:43.143596 | orchestrator | changed: [testbed-node-3] 2025-05-31 19:45:43.143709 | orchestrator | changed: [testbed-node-4] 2025-05-31 19:45:43.144812 | orchestrator | changed: [testbed-node-5] 2025-05-31 19:45:43.146574 | orchestrator | changed: [testbed-node-2] 2025-05-31 19:45:43.146998 | orchestrator | changed: [testbed-node-0] 2025-05-31 19:45:43.147774 | orchestrator | changed: [testbed-node-1] 2025-05-31 19:45:43.148739 | orchestrator | 2025-05-31 19:45:43.148978 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2025-05-31 19:45:43.149653 | orchestrator | Saturday 31 May 2025 19:45:43 +0000 (0:00:03.779) 0:00:10.014 ********** 2025-05-31 19:45:43.300830 | orchestrator | skipping: [testbed-node-3] 2025-05-31 19:45:43.379453 | orchestrator | skipping: [testbed-node-4] 2025-05-31 19:45:43.456851 | orchestrator | skipping: [testbed-node-5] 2025-05-31 19:45:43.531274 | orchestrator | skipping: [testbed-node-0] 2025-05-31 19:45:43.832944 | orchestrator | skipping: [testbed-node-1] 2025-05-31 19:45:43.833860 | orchestrator | skipping: [testbed-node-2] 2025-05-31 19:45:43.834155 | orchestrator | 2025-05-31 19:45:43.834864 | orchestrator | PLAY [Add a workaround service] ************************************************ 2025-05-31 19:45:43.838300 | orchestrator | 2025-05-31 19:45:43.838323 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2025-05-31 19:45:43.838336 | orchestrator | Saturday 31 May 2025 19:45:43 +0000 (0:00:00.691) 0:00:10.705 ********** 2025-05-31 19:45:45.478844 | orchestrator | changed: [testbed-manager] 2025-05-31 19:45:45.479316 | orchestrator | changed: [testbed-node-3] 2025-05-31 19:45:45.480407 | orchestrator | changed: [testbed-node-4] 2025-05-31 19:45:45.483386 | orchestrator | changed: [testbed-node-5] 2025-05-31 19:45:45.484203 | orchestrator | changed: [testbed-node-0] 2025-05-31 19:45:45.484803 | orchestrator | changed: [testbed-node-1] 2025-05-31 19:45:45.485214 | orchestrator | changed: [testbed-node-2] 2025-05-31 19:45:45.485888 | orchestrator | 2025-05-31 19:45:45.486375 | 
orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2025-05-31 19:45:45.487372 | orchestrator | Saturday 31 May 2025 19:45:45 +0000 (0:00:01.645) 0:00:12.351 ********** 2025-05-31 19:45:47.075715 | orchestrator | changed: [testbed-manager] 2025-05-31 19:45:47.076209 | orchestrator | changed: [testbed-node-3] 2025-05-31 19:45:47.077750 | orchestrator | changed: [testbed-node-4] 2025-05-31 19:45:47.078424 | orchestrator | changed: [testbed-node-5] 2025-05-31 19:45:47.079320 | orchestrator | changed: [testbed-node-0] 2025-05-31 19:45:47.080019 | orchestrator | changed: [testbed-node-1] 2025-05-31 19:45:47.081270 | orchestrator | changed: [testbed-node-2] 2025-05-31 19:45:47.081632 | orchestrator | 2025-05-31 19:45:47.082891 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2025-05-31 19:45:47.083400 | orchestrator | Saturday 31 May 2025 19:45:47 +0000 (0:00:01.593) 0:00:13.945 ********** 2025-05-31 19:45:48.531367 | orchestrator | ok: [testbed-manager] 2025-05-31 19:45:48.533052 | orchestrator | ok: [testbed-node-5] 2025-05-31 19:45:48.533822 | orchestrator | ok: [testbed-node-4] 2025-05-31 19:45:48.535534 | orchestrator | ok: [testbed-node-3] 2025-05-31 19:45:48.536639 | orchestrator | ok: [testbed-node-0] 2025-05-31 19:45:48.537735 | orchestrator | ok: [testbed-node-1] 2025-05-31 19:45:48.538424 | orchestrator | ok: [testbed-node-2] 2025-05-31 19:45:48.538975 | orchestrator | 2025-05-31 19:45:48.539821 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2025-05-31 19:45:48.540270 | orchestrator | Saturday 31 May 2025 19:45:48 +0000 (0:00:01.459) 0:00:15.405 ********** 2025-05-31 19:45:50.294454 | orchestrator | changed: [testbed-manager] 2025-05-31 19:45:50.295714 | orchestrator | changed: [testbed-node-3] 2025-05-31 19:45:50.298779 | orchestrator | changed: [testbed-node-4] 2025-05-31 19:45:50.299300 | orchestrator | changed: [testbed-node-5] 2025-05-31 19:45:50.301366 | orchestrator | changed: [testbed-node-0] 2025-05-31 19:45:50.301844 | orchestrator | changed: [testbed-node-1] 2025-05-31 19:45:50.303165 | orchestrator | changed: [testbed-node-2] 2025-05-31 19:45:50.304009 | orchestrator | 2025-05-31 19:45:50.305296 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2025-05-31 19:45:50.306091 | orchestrator | Saturday 31 May 2025 19:45:50 +0000 (0:00:01.757) 0:00:17.163 ********** 2025-05-31 19:45:50.454537 | orchestrator | skipping: [testbed-manager] 2025-05-31 19:45:50.532758 | orchestrator | skipping: [testbed-node-3] 2025-05-31 19:45:50.609867 | orchestrator | skipping: [testbed-node-4] 2025-05-31 19:45:50.682398 | orchestrator | skipping: [testbed-node-5] 2025-05-31 19:45:50.758632 | orchestrator | skipping: [testbed-node-0] 2025-05-31 19:45:50.881230 | orchestrator | skipping: [testbed-node-1] 2025-05-31 19:45:50.882118 | orchestrator | skipping: [testbed-node-2] 2025-05-31 19:45:50.886131 | orchestrator | 2025-05-31 19:45:50.886176 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2025-05-31 19:45:50.886191 | orchestrator | 2025-05-31 19:45:50.886203 | orchestrator | TASK [Install python3-docker] ************************************************** 2025-05-31 19:45:50.887197 | orchestrator | Saturday 31 May 2025 19:45:50 +0000 (0:00:00.590) 0:00:17.753 ********** 2025-05-31 19:45:54.078262 | orchestrator | ok: [testbed-manager] 2025-05-31 19:45:54.078557 
| orchestrator | ok: [testbed-node-4] 2025-05-31 19:45:54.079954 | orchestrator | ok: [testbed-node-3] 2025-05-31 19:45:54.080057 | orchestrator | ok: [testbed-node-5] 2025-05-31 19:45:54.080446 | orchestrator | ok: [testbed-node-1] 2025-05-31 19:45:54.081202 | orchestrator | ok: [testbed-node-0] 2025-05-31 19:45:54.083193 | orchestrator | ok: [testbed-node-2] 2025-05-31 19:45:54.083920 | orchestrator | 2025-05-31 19:45:54.084848 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-31 19:45:54.085448 | orchestrator | 2025-05-31 19:45:54 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-31 19:45:54.085772 | orchestrator | 2025-05-31 19:45:54 | INFO  | Please wait and do not abort execution. 2025-05-31 19:45:54.086962 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-31 19:45:54.088261 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-31 19:45:54.088867 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-31 19:45:54.089329 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-31 19:45:54.090394 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-31 19:45:54.090932 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-31 19:45:54.091716 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-31 19:45:54.091965 | orchestrator | 2025-05-31 19:45:54.092731 | orchestrator | 2025-05-31 19:45:54.093090 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-31 19:45:54.093801 | orchestrator | Saturday 31 May 2025 19:45:54 +0000 (0:00:03.197) 0:00:20.951 ********** 2025-05-31 19:45:54.094475 | orchestrator | =============================================================================== 2025-05-31 19:45:54.094992 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.78s 2025-05-31 19:45:54.095568 | orchestrator | Install python3-docker -------------------------------------------------- 3.20s 2025-05-31 19:45:54.095870 | orchestrator | Apply netplan configuration --------------------------------------------- 2.17s 2025-05-31 19:45:54.096458 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.76s 2025-05-31 19:45:54.097236 | orchestrator | Apply netplan configuration --------------------------------------------- 1.75s 2025-05-31 19:45:54.097550 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.65s 2025-05-31 19:45:54.097948 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.59s 2025-05-31 19:45:54.098664 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.47s 2025-05-31 19:45:54.099318 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.46s 2025-05-31 19:45:54.099642 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.70s 2025-05-31 19:45:54.100054 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.69s 
2025-05-31 19:45:54.100524 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.59s 2025-05-31 19:45:54.653947 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2025-05-31 19:45:56.365766 | orchestrator | Registering Redlock._acquired_script 2025-05-31 19:45:56.365853 | orchestrator | Registering Redlock._extend_script 2025-05-31 19:45:56.365866 | orchestrator | Registering Redlock._release_script 2025-05-31 19:45:56.424620 | orchestrator | 2025-05-31 19:45:56 | INFO  | Task 1118fb60-a6c7-425a-800e-36e4369be71e (reboot) was prepared for execution. 2025-05-31 19:45:56.424706 | orchestrator | 2025-05-31 19:45:56 | INFO  | It takes a moment until task 1118fb60-a6c7-425a-800e-36e4369be71e (reboot) has been started and output is visible here. 2025-05-31 19:46:00.484248 | orchestrator | 2025-05-31 19:46:00.485624 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-31 19:46:00.486795 | orchestrator | 2025-05-31 19:46:00.488303 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-05-31 19:46:00.489230 | orchestrator | Saturday 31 May 2025 19:46:00 +0000 (0:00:00.206) 0:00:00.206 ********** 2025-05-31 19:46:00.579081 | orchestrator | skipping: [testbed-node-0] 2025-05-31 19:46:00.579174 | orchestrator | 2025-05-31 19:46:00.579943 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-05-31 19:46:00.580209 | orchestrator | Saturday 31 May 2025 19:46:00 +0000 (0:00:00.094) 0:00:00.301 ********** 2025-05-31 19:46:01.527534 | orchestrator | changed: [testbed-node-0] 2025-05-31 19:46:01.527794 | orchestrator | 2025-05-31 19:46:01.528792 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-05-31 19:46:01.529343 | orchestrator | Saturday 31 May 2025 19:46:01 +0000 (0:00:00.949) 0:00:01.251 ********** 2025-05-31 19:46:01.661531 | orchestrator | skipping: [testbed-node-0] 2025-05-31 19:46:01.662001 | orchestrator | 2025-05-31 19:46:01.662796 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-31 19:46:01.664642 | orchestrator | 2025-05-31 19:46:01.664665 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-05-31 19:46:01.664964 | orchestrator | Saturday 31 May 2025 19:46:01 +0000 (0:00:00.135) 0:00:01.387 ********** 2025-05-31 19:46:01.762327 | orchestrator | skipping: [testbed-node-1] 2025-05-31 19:46:01.763229 | orchestrator | 2025-05-31 19:46:01.763679 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-05-31 19:46:01.764327 | orchestrator | Saturday 31 May 2025 19:46:01 +0000 (0:00:00.100) 0:00:01.488 ********** 2025-05-31 19:46:02.423597 | orchestrator | changed: [testbed-node-1] 2025-05-31 19:46:02.424347 | orchestrator | 2025-05-31 19:46:02.425132 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-05-31 19:46:02.425871 | orchestrator | Saturday 31 May 2025 19:46:02 +0000 (0:00:00.660) 0:00:02.148 ********** 2025-05-31 19:46:02.545941 | orchestrator | skipping: [testbed-node-1] 2025-05-31 19:46:02.547018 | orchestrator | 2025-05-31 19:46:02.547213 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-31 19:46:02.548805 | orchestrator | 2025-05-31 19:46:02.549911 | orchestrator | TASK 
[Exit playbook, if user did not mean to reboot systems] ******************* 2025-05-31 19:46:02.550858 | orchestrator | Saturday 31 May 2025 19:46:02 +0000 (0:00:00.121) 0:00:02.270 ********** 2025-05-31 19:46:02.746594 | orchestrator | skipping: [testbed-node-2] 2025-05-31 19:46:02.746770 | orchestrator | 2025-05-31 19:46:02.747874 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-05-31 19:46:02.748413 | orchestrator | Saturday 31 May 2025 19:46:02 +0000 (0:00:00.201) 0:00:02.471 ********** 2025-05-31 19:46:03.415807 | orchestrator | changed: [testbed-node-2] 2025-05-31 19:46:03.416269 | orchestrator | 2025-05-31 19:46:03.417231 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-05-31 19:46:03.418124 | orchestrator | Saturday 31 May 2025 19:46:03 +0000 (0:00:00.669) 0:00:03.141 ********** 2025-05-31 19:46:03.523686 | orchestrator | skipping: [testbed-node-2] 2025-05-31 19:46:03.524186 | orchestrator | 2025-05-31 19:46:03.525177 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-31 19:46:03.526710 | orchestrator | 2025-05-31 19:46:03.527190 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-05-31 19:46:03.527746 | orchestrator | Saturday 31 May 2025 19:46:03 +0000 (0:00:00.105) 0:00:03.247 ********** 2025-05-31 19:46:03.617689 | orchestrator | skipping: [testbed-node-3] 2025-05-31 19:46:03.617784 | orchestrator | 2025-05-31 19:46:03.618645 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-05-31 19:46:03.619059 | orchestrator | Saturday 31 May 2025 19:46:03 +0000 (0:00:00.096) 0:00:03.343 ********** 2025-05-31 19:46:04.276010 | orchestrator | changed: [testbed-node-3] 2025-05-31 19:46:04.276112 | orchestrator | 2025-05-31 19:46:04.277363 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-05-31 19:46:04.278072 | orchestrator | Saturday 31 May 2025 19:46:04 +0000 (0:00:00.657) 0:00:04.000 ********** 2025-05-31 19:46:04.418428 | orchestrator | skipping: [testbed-node-3] 2025-05-31 19:46:04.418565 | orchestrator | 2025-05-31 19:46:04.418645 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-05-31 19:46:04.419053 | orchestrator | 2025-05-31 19:46:04.420152 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-05-31 19:46:04.420687 | orchestrator | Saturday 31 May 2025 19:46:04 +0000 (0:00:00.142) 0:00:04.142 ********** 2025-05-31 19:46:04.510288 | orchestrator | skipping: [testbed-node-4] 2025-05-31 19:46:04.510437 | orchestrator | 2025-05-31 19:46:04.510456 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-05-31 19:46:04.510567 | orchestrator | Saturday 31 May 2025 19:46:04 +0000 (0:00:00.092) 0:00:04.235 ********** 2025-05-31 19:46:05.177008 | orchestrator | changed: [testbed-node-4] 2025-05-31 19:46:05.177177 | orchestrator | 2025-05-31 19:46:05.178579 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-05-31 19:46:05.180839 | orchestrator | Saturday 31 May 2025 19:46:05 +0000 (0:00:00.665) 0:00:04.900 ********** 2025-05-31 19:46:05.303702 | orchestrator | skipping: [testbed-node-4] 2025-05-31 19:46:05.304153 | orchestrator | 2025-05-31 19:46:05.305243 | orchestrator | PLAY 
[Reboot systems] ********************************************************** 2025-05-31 19:46:05.306623 | orchestrator | 2025-05-31 19:46:05.307240 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-05-31 19:46:05.309841 | orchestrator | Saturday 31 May 2025 19:46:05 +0000 (0:00:00.126) 0:00:05.027 ********** 2025-05-31 19:46:05.395667 | orchestrator | skipping: [testbed-node-5] 2025-05-31 19:46:05.396010 | orchestrator | 2025-05-31 19:46:05.397179 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-05-31 19:46:05.398009 | orchestrator | Saturday 31 May 2025 19:46:05 +0000 (0:00:00.094) 0:00:05.121 ********** 2025-05-31 19:46:06.053246 | orchestrator | changed: [testbed-node-5] 2025-05-31 19:46:06.053891 | orchestrator | 2025-05-31 19:46:06.054978 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-05-31 19:46:06.056191 | orchestrator | Saturday 31 May 2025 19:46:06 +0000 (0:00:00.656) 0:00:05.777 ********** 2025-05-31 19:46:06.085397 | orchestrator | skipping: [testbed-node-5] 2025-05-31 19:46:06.085634 | orchestrator | 2025-05-31 19:46:06.086095 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-31 19:46:06.086622 | orchestrator | 2025-05-31 19:46:06 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-31 19:46:06.086906 | orchestrator | 2025-05-31 19:46:06 | INFO  | Please wait and do not abort execution. 2025-05-31 19:46:06.088153 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-31 19:46:06.088888 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-31 19:46:06.089421 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-31 19:46:06.090014 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-31 19:46:06.090446 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-31 19:46:06.091011 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-31 19:46:06.091609 | orchestrator | 2025-05-31 19:46:06.091926 | orchestrator | 2025-05-31 19:46:06.092430 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-31 19:46:06.093618 | orchestrator | Saturday 31 May 2025 19:46:06 +0000 (0:00:00.035) 0:00:05.812 ********** 2025-05-31 19:46:06.094225 | orchestrator | =============================================================================== 2025-05-31 19:46:06.095146 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.26s 2025-05-31 19:46:06.095763 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.68s 2025-05-31 19:46:06.096470 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.67s 2025-05-31 19:46:06.609415 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2025-05-31 19:46:08.287318 | orchestrator | Registering Redlock._acquired_script 2025-05-31 19:46:08.287425 | orchestrator | Registering Redlock._extend_script 2025-05-31 19:46:08.287440 | orchestrator | Registering 
Redlock._release_script 2025-05-31 19:46:08.344802 | orchestrator | 2025-05-31 19:46:08 | INFO  | Task d8005a7a-7890-4ae0-b733-ffe79cfefb62 (wait-for-connection) was prepared for execution. 2025-05-31 19:46:08.344897 | orchestrator | 2025-05-31 19:46:08 | INFO  | It takes a moment until task d8005a7a-7890-4ae0-b733-ffe79cfefb62 (wait-for-connection) has been started and output is visible here. 2025-05-31 19:46:12.372598 | orchestrator | 2025-05-31 19:46:12.373516 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2025-05-31 19:46:12.376042 | orchestrator | 2025-05-31 19:46:12.376069 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2025-05-31 19:46:12.376261 | orchestrator | Saturday 31 May 2025 19:46:12 +0000 (0:00:00.226) 0:00:00.226 ********** 2025-05-31 19:46:24.164805 | orchestrator | ok: [testbed-node-1] 2025-05-31 19:46:24.164928 | orchestrator | ok: [testbed-node-2] 2025-05-31 19:46:24.164974 | orchestrator | ok: [testbed-node-3] 2025-05-31 19:46:24.164995 | orchestrator | ok: [testbed-node-0] 2025-05-31 19:46:24.165013 | orchestrator | ok: [testbed-node-4] 2025-05-31 19:46:24.165128 | orchestrator | ok: [testbed-node-5] 2025-05-31 19:46:24.165937 | orchestrator | 2025-05-31 19:46:24.166218 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-31 19:46:24.166258 | orchestrator | 2025-05-31 19:46:24 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-31 19:46:24.166270 | orchestrator | 2025-05-31 19:46:24 | INFO  | Please wait and do not abort execution. 2025-05-31 19:46:24.167847 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-31 19:46:24.168436 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-31 19:46:24.168816 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-31 19:46:24.169092 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-31 19:46:24.169568 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-31 19:46:24.170539 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-31 19:46:24.170712 | orchestrator | 2025-05-31 19:46:24.171109 | orchestrator | 2025-05-31 19:46:24.171737 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-31 19:46:24.171994 | orchestrator | Saturday 31 May 2025 19:46:24 +0000 (0:00:11.792) 0:00:12.018 ********** 2025-05-31 19:46:24.172316 | orchestrator | =============================================================================== 2025-05-31 19:46:24.172871 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.79s 2025-05-31 19:46:24.811648 | orchestrator | + osism apply hddtemp 2025-05-31 19:46:26.437705 | orchestrator | Registering Redlock._acquired_script 2025-05-31 19:46:26.437827 | orchestrator | Registering Redlock._extend_script 2025-05-31 19:46:26.437844 | orchestrator | Registering Redlock._release_script 2025-05-31 19:46:26.493665 | orchestrator | 2025-05-31 19:46:26 | INFO  | Task 9a871967-def6-4fa5-beb5-df2e14cdd497 (hddtemp) was prepared for execution. 
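(The hddtemp play whose output follows removes the legacy hddtemp package, a no-op here since the task reports ok rather than changed, and enables the in-kernel drivetemp hwmon driver instead. A manual equivalent is sketched below; the modules-load.d file is an assumption about how "Enable Kernel Module drivetemp" persists the module across reboots.)

apt-get remove -y hddtemp                             # legacy userspace daemon
modprobe drivetemp                                    # load the SATA temperature driver now
echo drivetemp > /etc/modules-load.d/drivetemp.conf   # assumed persistence: load at boot
# Drive temperatures then show up as a regular hwmon device, readable
# e.g. with "sensors" from lm-sensors.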
2025-05-31 19:46:26.493772 | orchestrator | 2025-05-31 19:46:26 | INFO  | It takes a moment until task 9a871967-def6-4fa5-beb5-df2e14cdd497 (hddtemp) has been started and output is visible here. 2025-05-31 19:46:30.445258 | orchestrator | 2025-05-31 19:46:30.445373 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2025-05-31 19:46:30.445654 | orchestrator | 2025-05-31 19:46:30.449214 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2025-05-31 19:46:30.449720 | orchestrator | Saturday 31 May 2025 19:46:30 +0000 (0:00:00.230) 0:00:00.230 ********** 2025-05-31 19:46:30.590965 | orchestrator | ok: [testbed-manager] 2025-05-31 19:46:30.660150 | orchestrator | ok: [testbed-node-0] 2025-05-31 19:46:30.730649 | orchestrator | ok: [testbed-node-1] 2025-05-31 19:46:30.797646 | orchestrator | ok: [testbed-node-2] 2025-05-31 19:46:30.932292 | orchestrator | ok: [testbed-node-3] 2025-05-31 19:46:31.041805 | orchestrator | ok: [testbed-node-4] 2025-05-31 19:46:31.042106 | orchestrator | ok: [testbed-node-5] 2025-05-31 19:46:31.045191 | orchestrator | 2025-05-31 19:46:31.046258 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2025-05-31 19:46:31.046948 | orchestrator | Saturday 31 May 2025 19:46:31 +0000 (0:00:00.595) 0:00:00.826 ********** 2025-05-31 19:46:32.048614 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-31 19:46:32.052177 | orchestrator | 2025-05-31 19:46:32.052407 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2025-05-31 19:46:32.053156 | orchestrator | Saturday 31 May 2025 19:46:32 +0000 (0:00:01.005) 0:00:01.832 ********** 2025-05-31 19:46:33.905043 | orchestrator | ok: [testbed-manager] 2025-05-31 19:46:33.905268 | orchestrator | ok: [testbed-node-1] 2025-05-31 19:46:33.906119 | orchestrator | ok: [testbed-node-2] 2025-05-31 19:46:33.906967 | orchestrator | ok: [testbed-node-3] 2025-05-31 19:46:33.908041 | orchestrator | ok: [testbed-node-4] 2025-05-31 19:46:33.909651 | orchestrator | ok: [testbed-node-0] 2025-05-31 19:46:33.910370 | orchestrator | ok: [testbed-node-5] 2025-05-31 19:46:33.911602 | orchestrator | 2025-05-31 19:46:33.912521 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2025-05-31 19:46:33.913117 | orchestrator | Saturday 31 May 2025 19:46:33 +0000 (0:00:01.857) 0:00:03.690 ********** 2025-05-31 19:46:34.398780 | orchestrator | changed: [testbed-manager] 2025-05-31 19:46:34.496056 | orchestrator | changed: [testbed-node-0] 2025-05-31 19:46:34.913936 | orchestrator | changed: [testbed-node-1] 2025-05-31 19:46:34.914287 | orchestrator | changed: [testbed-node-2] 2025-05-31 19:46:34.915350 | orchestrator | changed: [testbed-node-3] 2025-05-31 19:46:34.917284 | orchestrator | changed: [testbed-node-4] 2025-05-31 19:46:34.917351 | orchestrator | changed: [testbed-node-5] 2025-05-31 19:46:34.918365 | orchestrator | 2025-05-31 19:46:34.919112 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2025-05-31 19:46:34.919937 | orchestrator | Saturday 31 May 2025 19:46:34 +0000 (0:00:01.007) 0:00:04.697 ********** 2025-05-31 19:46:36.005720 | orchestrator | ok: [testbed-node-0] 2025-05-31 
19:46:36.006169 | orchestrator | ok: [testbed-node-1] 2025-05-31 19:46:36.007976 | orchestrator | ok: [testbed-node-2] 2025-05-31 19:46:36.008117 | orchestrator | ok: [testbed-node-3] 2025-05-31 19:46:36.009184 | orchestrator | ok: [testbed-node-4] 2025-05-31 19:46:36.009768 | orchestrator | ok: [testbed-manager] 2025-05-31 19:46:36.010845 | orchestrator | ok: [testbed-node-5] 2025-05-31 19:46:36.011145 | orchestrator | 2025-05-31 19:46:36.012216 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2025-05-31 19:46:36.012849 | orchestrator | Saturday 31 May 2025 19:46:35 +0000 (0:00:01.093) 0:00:05.790 ********** 2025-05-31 19:46:36.363961 | orchestrator | skipping: [testbed-node-0] 2025-05-31 19:46:36.441799 | orchestrator | skipping: [testbed-node-1] 2025-05-31 19:46:36.521214 | orchestrator | skipping: [testbed-node-2] 2025-05-31 19:46:36.591870 | orchestrator | changed: [testbed-manager] 2025-05-31 19:46:36.699845 | orchestrator | skipping: [testbed-node-3] 2025-05-31 19:46:36.702960 | orchestrator | skipping: [testbed-node-4] 2025-05-31 19:46:36.702997 | orchestrator | skipping: [testbed-node-5] 2025-05-31 19:46:36.703010 | orchestrator | 2025-05-31 19:46:36.703313 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2025-05-31 19:46:36.704075 | orchestrator | Saturday 31 May 2025 19:46:36 +0000 (0:00:00.693) 0:00:06.484 ********** 2025-05-31 19:46:49.247735 | orchestrator | changed: [testbed-manager] 2025-05-31 19:46:49.247853 | orchestrator | changed: [testbed-node-1] 2025-05-31 19:46:49.248379 | orchestrator | changed: [testbed-node-5] 2025-05-31 19:46:49.249270 | orchestrator | changed: [testbed-node-3] 2025-05-31 19:46:49.250564 | orchestrator | changed: [testbed-node-2] 2025-05-31 19:46:49.251402 | orchestrator | changed: [testbed-node-4] 2025-05-31 19:46:49.253234 | orchestrator | changed: [testbed-node-0] 2025-05-31 19:46:49.254105 | orchestrator | 2025-05-31 19:46:49.254454 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2025-05-31 19:46:49.255544 | orchestrator | Saturday 31 May 2025 19:46:49 +0000 (0:00:12.545) 0:00:19.029 ********** 2025-05-31 19:46:50.574285 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-31 19:46:50.575757 | orchestrator | 2025-05-31 19:46:50.576744 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2025-05-31 19:46:50.577864 | orchestrator | Saturday 31 May 2025 19:46:50 +0000 (0:00:01.326) 0:00:20.356 ********** 2025-05-31 19:46:52.455772 | orchestrator | changed: [testbed-manager] 2025-05-31 19:46:52.456823 | orchestrator | changed: [testbed-node-1] 2025-05-31 19:46:52.459013 | orchestrator | changed: [testbed-node-3] 2025-05-31 19:46:52.460177 | orchestrator | changed: [testbed-node-2] 2025-05-31 19:46:52.462359 | orchestrator | changed: [testbed-node-0] 2025-05-31 19:46:52.462423 | orchestrator | changed: [testbed-node-4] 2025-05-31 19:46:52.463011 | orchestrator | changed: [testbed-node-5] 2025-05-31 19:46:52.464021 | orchestrator | 2025-05-31 19:46:52.464987 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-31 19:46:52.465475 | orchestrator | 2025-05-31 19:46:52 | INFO  | Play has been completed. 
There may now be a delay until all logs have been written. 2025-05-31 19:46:52.467435 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-31 19:46:52.467546 | orchestrator | 2025-05-31 19:46:52 | INFO  | Please wait and do not abort execution. 2025-05-31 19:46:52.472836 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-31 19:46:52.475799 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-31 19:46:52.477029 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-31 19:46:52.477632 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-31 19:46:52.479297 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-31 19:46:52.480941 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-31 19:46:52.481930 | orchestrator | 2025-05-31 19:46:52.482644 | orchestrator | 2025-05-31 19:46:52.483851 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-31 19:46:52.484382 | orchestrator | Saturday 31 May 2025 19:46:52 +0000 (0:00:01.884) 0:00:22.241 ********** 2025-05-31 19:46:52.485761 | orchestrator | =============================================================================== 2025-05-31 19:46:52.487203 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 12.55s 2025-05-31 19:46:52.487805 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.88s 2025-05-31 19:46:52.488564 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.86s 2025-05-31 19:46:52.489198 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.33s 2025-05-31 19:46:52.489844 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.09s 2025-05-31 19:46:52.490500 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.01s 2025-05-31 19:46:52.491218 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.01s 2025-05-31 19:46:52.491929 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.69s 2025-05-31 19:46:52.492194 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.60s 2025-05-31 19:46:52.999946 | orchestrator | + sudo systemctl restart docker-compose@manager 2025-05-31 19:46:54.276161 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-05-31 19:46:54.276276 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-05-31 19:46:54.276291 | orchestrator | + local max_attempts=60 2025-05-31 19:46:54.276303 | orchestrator | + local name=ceph-ansible 2025-05-31 19:46:54.276314 | orchestrator | + local attempt_num=1 2025-05-31 19:46:54.276403 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-05-31 19:46:54.311773 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-05-31 19:46:54.311870 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-05-31 19:46:54.311885 | orchestrator | + local max_attempts=60 2025-05-31 19:46:54.311897 | orchestrator | + local 
name=kolla-ansible 2025-05-31 19:46:54.311908 | orchestrator | + local attempt_num=1 2025-05-31 19:46:54.311919 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-05-31 19:46:54.340769 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-05-31 19:46:54.340870 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-05-31 19:46:54.340885 | orchestrator | + local max_attempts=60 2025-05-31 19:46:54.340897 | orchestrator | + local name=osism-ansible 2025-05-31 19:46:54.340908 | orchestrator | + local attempt_num=1 2025-05-31 19:46:54.341058 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-05-31 19:46:54.369882 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-05-31 19:46:54.369975 | orchestrator | + [[ true == \t\r\u\e ]] 2025-05-31 19:46:54.369989 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-05-31 19:46:54.530469 | orchestrator | ARA in ceph-ansible already disabled. 2025-05-31 19:46:54.696065 | orchestrator | ARA in kolla-ansible already disabled. 2025-05-31 19:46:54.863885 | orchestrator | ARA in osism-ansible already disabled. 2025-05-31 19:46:55.016760 | orchestrator | ARA in osism-kubernetes already disabled. 2025-05-31 19:46:55.017057 | orchestrator | + osism apply gather-facts 2025-05-31 19:46:56.639133 | orchestrator | Registering Redlock._acquired_script 2025-05-31 19:46:56.639236 | orchestrator | Registering Redlock._extend_script 2025-05-31 19:46:56.639250 | orchestrator | Registering Redlock._release_script 2025-05-31 19:46:56.694477 | orchestrator | 2025-05-31 19:46:56 | INFO  | Task 9429ff40-b253-4fff-9592-5f1e17aec4e3 (gather-facts) was prepared for execution. 2025-05-31 19:46:56.694638 | orchestrator | 2025-05-31 19:46:56 | INFO  | It takes a moment until task 9429ff40-b253-4fff-9592-5f1e17aec4e3 (gather-facts) has been started and output is visible here. 
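The xtrace above expands a small polling helper three times (ceph-ansible, kolla-ansible, osism-ansible); every container was already healthy, so the loop body never ran. A reconstruction consistent with the traced variables, where only the variable names and the docker inspect call come from the trace and the sleep interval and failure message are assumptions:

    wait_for_container_healthy() {
        local max_attempts=$1
        local name=$2
        local attempt_num=1
        # Poll the Docker health status until the container reports "healthy".
        until [ "$(/usr/bin/docker inspect -f '{{.State.Health.Status}}' "$name")" = "healthy" ]; do
            if [ "$attempt_num" -ge "$max_attempts" ]; then
                echo "container $name did not become healthy" >&2
                return 1
            fi
            attempt_num=$((attempt_num + 1))
            sleep 5    # interval assumed; not visible in the trace
        done
    }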
2025-05-31 19:47:00.541012 | orchestrator | 2025-05-31 19:47:00.541187 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-05-31 19:47:00.541995 | orchestrator | 2025-05-31 19:47:00.543697 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-05-31 19:47:00.543723 | orchestrator | Saturday 31 May 2025 19:47:00 +0000 (0:00:00.215) 0:00:00.215 ********** 2025-05-31 19:47:05.707319 | orchestrator | ok: [testbed-node-2] 2025-05-31 19:47:05.707758 | orchestrator | ok: [testbed-node-1] 2025-05-31 19:47:05.708543 | orchestrator | ok: [testbed-node-0] 2025-05-31 19:47:05.709062 | orchestrator | ok: [testbed-manager] 2025-05-31 19:47:05.709810 | orchestrator | ok: [testbed-node-3] 2025-05-31 19:47:05.710648 | orchestrator | ok: [testbed-node-4] 2025-05-31 19:47:05.713392 | orchestrator | ok: [testbed-node-5] 2025-05-31 19:47:05.713475 | orchestrator | 2025-05-31 19:47:05.713516 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-05-31 19:47:05.713530 | orchestrator | 2025-05-31 19:47:05.713812 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-05-31 19:47:05.714240 | orchestrator | Saturday 31 May 2025 19:47:05 +0000 (0:00:05.169) 0:00:05.385 ********** 2025-05-31 19:47:05.858761 | orchestrator | skipping: [testbed-manager] 2025-05-31 19:47:05.942666 | orchestrator | skipping: [testbed-node-0] 2025-05-31 19:47:06.018396 | orchestrator | skipping: [testbed-node-1] 2025-05-31 19:47:06.098717 | orchestrator | skipping: [testbed-node-2] 2025-05-31 19:47:06.171152 | orchestrator | skipping: [testbed-node-3] 2025-05-31 19:47:06.204928 | orchestrator | skipping: [testbed-node-4] 2025-05-31 19:47:06.205386 | orchestrator | skipping: [testbed-node-5] 2025-05-31 19:47:06.206270 | orchestrator | 2025-05-31 19:47:06.207542 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-31 19:47:06.207822 | orchestrator | 2025-05-31 19:47:06 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-31 19:47:06.208046 | orchestrator | 2025-05-31 19:47:06 | INFO  | Please wait and do not abort execution. 
2025-05-31 19:47:06.209173 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-31 19:47:06.210291 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-31 19:47:06.210987 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-31 19:47:06.211576 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-31 19:47:06.212569 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-31 19:47:06.213000 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-31 19:47:06.213926 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-31 19:47:06.214379 | orchestrator | 2025-05-31 19:47:06.214895 | orchestrator | 2025-05-31 19:47:06.215225 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-31 19:47:06.215613 | orchestrator | Saturday 31 May 2025 19:47:06 +0000 (0:00:00.498) 0:00:05.884 ********** 2025-05-31 19:47:06.216048 | orchestrator | =============================================================================== 2025-05-31 19:47:06.216415 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.17s 2025-05-31 19:47:06.216781 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.50s 2025-05-31 19:47:06.776083 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2025-05-31 19:47:06.788556 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2025-05-31 19:47:06.803891 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2025-05-31 19:47:06.814191 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2025-05-31 19:47:06.828805 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2025-05-31 19:47:06.839624 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2025-05-31 19:47:06.849734 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2025-05-31 19:47:06.861722 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2025-05-31 19:47:06.872094 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2025-05-31 19:47:06.893272 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2025-05-31 19:47:06.906552 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2025-05-31 19:47:06.923790 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2025-05-31 19:47:06.939675 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh 
/usr/local/bin/upgrade-infrastructure 2025-05-31 19:47:06.956063 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2025-05-31 19:47:06.974214 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2025-05-31 19:47:06.993209 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2025-05-31 19:47:07.006784 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2025-05-31 19:47:07.026587 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2025-05-31 19:47:07.044135 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2025-05-31 19:47:07.057309 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2025-05-31 19:47:07.074837 | orchestrator | + [[ false == \t\r\u\e ]] 2025-05-31 19:47:07.465410 | orchestrator | ok: Runtime: 0:18:56.778109 2025-05-31 19:47:07.563777 | 2025-05-31 19:47:07.563914 | TASK [Deploy services] 2025-05-31 19:47:08.101628 | orchestrator | skipping: Conditional result was False 2025-05-31 19:47:08.119909 | 2025-05-31 19:47:08.120073 | TASK [Deploy in a nutshell] 2025-05-31 19:47:08.818865 | orchestrator | + set -e 2025-05-31 19:47:08.819053 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-05-31 19:47:08.819078 | orchestrator | ++ export INTERACTIVE=false 2025-05-31 19:47:08.819125 | orchestrator | ++ INTERACTIVE=false 2025-05-31 19:47:08.819141 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-05-31 19:47:08.819154 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-05-31 19:47:08.819167 | orchestrator | + source /opt/manager-vars.sh 2025-05-31 19:47:08.819228 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-05-31 19:47:08.819257 | orchestrator | ++ NUMBER_OF_NODES=6 2025-05-31 19:47:08.819275 | orchestrator | ++ export CEPH_VERSION=reef 2025-05-31 19:47:08.819299 | orchestrator | ++ CEPH_VERSION=reef 2025-05-31 19:47:08.819319 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-05-31 19:47:08.819346 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-05-31 19:47:08.819365 | orchestrator | ++ export MANAGER_VERSION=latest 2025-05-31 19:47:08.819390 | orchestrator | ++ MANAGER_VERSION=latest 2025-05-31 19:47:08.819401 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-05-31 19:47:08.819415 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-05-31 19:47:08.819426 | orchestrator | ++ export ARA=false 2025-05-31 19:47:08.819437 | orchestrator | ++ ARA=false 2025-05-31 19:47:08.819448 | orchestrator | ++ export DEPLOY_MODE=manager 2025-05-31 19:47:08.819460 | orchestrator | ++ DEPLOY_MODE=manager 2025-05-31 19:47:08.819471 | orchestrator | ++ export TEMPEST=false 2025-05-31 19:47:08.819481 | orchestrator | ++ TEMPEST=false 2025-05-31 19:47:08.819529 | orchestrator | ++ export IS_ZUUL=true 2025-05-31 19:47:08.819540 | orchestrator | ++ IS_ZUUL=true 2025-05-31 19:47:08.819551 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.24 2025-05-31 19:47:08.819563 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.24 2025-05-31 19:47:08.819574 | orchestrator | 2025-05-31 19:47:08.819585 | orchestrator | # PULL IMAGES 2025-05-31 19:47:08.819596 | orchestrator | 
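The xtrace around this point shows the nutshell script sourcing /opt/configuration/scripts/include.sh and /opt/manager-vars.sh before the image pull starts: every deployment knob is a plain exported shell variable. A hypothetical excerpt in that style, where the values are the ones visible in the trace but the ${VAR:-default} fallback idiom is an assumption:

    export NUMBER_OF_NODES=${NUMBER_OF_NODES:-6}
    export CEPH_VERSION=${CEPH_VERSION:-reef}
    export OPENSTACK_VERSION=${OPENSTACK_VERSION:-2024.2}
    export MANAGER_VERSION=${MANAGER_VERSION:-latest}
    export DEPLOY_MODE=${DEPLOY_MODE:-manager}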
2025-05-31 19:47:08.819607 | orchestrator | ++ export EXTERNAL_API=false 2025-05-31 19:47:08.819617 | orchestrator | ++ EXTERNAL_API=false 2025-05-31 19:47:08.819628 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-05-31 19:47:08.819639 | orchestrator | ++ IMAGE_USER=ubuntu 2025-05-31 19:47:08.819650 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-05-31 19:47:08.819661 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-05-31 19:47:08.819672 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-05-31 19:47:08.819690 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-05-31 19:47:08.819701 | orchestrator | + echo 2025-05-31 19:47:08.819712 | orchestrator | + echo '# PULL IMAGES' 2025-05-31 19:47:08.819723 | orchestrator | + echo 2025-05-31 19:47:08.820437 | orchestrator | ++ semver latest 7.0.0 2025-05-31 19:47:08.878715 | orchestrator | + [[ -1 -ge 0 ]] 2025-05-31 19:47:08.878798 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-05-31 19:47:08.878812 | orchestrator | + osism apply -r 2 -e custom pull-images 2025-05-31 19:47:10.531929 | orchestrator | 2025-05-31 19:47:10 | INFO  | Trying to run play pull-images in environment custom 2025-05-31 19:47:10.537109 | orchestrator | Registering Redlock._acquired_script 2025-05-31 19:47:10.537161 | orchestrator | Registering Redlock._extend_script 2025-05-31 19:47:10.537184 | orchestrator | Registering Redlock._release_script 2025-05-31 19:47:10.605192 | orchestrator | 2025-05-31 19:47:10 | INFO  | Task 05397407-63f6-439d-a652-cd4048318f3d (pull-images) was prepared for execution. 2025-05-31 19:47:10.605256 | orchestrator | 2025-05-31 19:47:10 | INFO  | It takes a moment until task 05397407-63f6-439d-a652-cd4048318f3d (pull-images) has been started and output is visible here. 2025-05-31 19:47:14.450727 | orchestrator | 2025-05-31 19:47:14.450850 | orchestrator | PLAY [Pull images] ************************************************************* 2025-05-31 19:47:14.451609 | orchestrator | 2025-05-31 19:47:14.452288 | orchestrator | TASK [Pull keystone image] ***************************************************** 2025-05-31 19:47:14.453175 | orchestrator | Saturday 31 May 2025 19:47:14 +0000 (0:00:00.141) 0:00:00.141 ********** 2025-05-31 19:48:24.271581 | orchestrator | changed: [testbed-manager] 2025-05-31 19:48:24.271678 | orchestrator | 2025-05-31 19:48:24.272117 | orchestrator | TASK [Pull other images] ******************************************************* 2025-05-31 19:48:24.273470 | orchestrator | Saturday 31 May 2025 19:48:24 +0000 (0:01:09.822) 0:01:09.963 ********** 2025-05-31 19:49:15.623884 | orchestrator | changed: [testbed-manager] => (item=aodh) 2025-05-31 19:49:15.624060 | orchestrator | changed: [testbed-manager] => (item=barbican) 2025-05-31 19:49:15.625525 | orchestrator | changed: [testbed-manager] => (item=ceilometer) 2025-05-31 19:49:15.627260 | orchestrator | changed: [testbed-manager] => (item=cinder) 2025-05-31 19:49:15.627749 | orchestrator | changed: [testbed-manager] => (item=common) 2025-05-31 19:49:15.630362 | orchestrator | changed: [testbed-manager] => (item=designate) 2025-05-31 19:49:15.631281 | orchestrator | changed: [testbed-manager] => (item=glance) 2025-05-31 19:49:15.631956 | orchestrator | changed: [testbed-manager] => (item=grafana) 2025-05-31 19:49:15.632714 | orchestrator | changed: [testbed-manager] => (item=horizon) 2025-05-31 19:49:15.633672 | orchestrator | changed: [testbed-manager] => (item=ironic) 2025-05-31 19:49:15.634381 | orchestrator | changed: [testbed-manager] => (item=loadbalancer) 
2025-05-31 19:49:15.635093 | orchestrator | changed: [testbed-manager] => (item=magnum) 2025-05-31 19:49:15.635844 | orchestrator | changed: [testbed-manager] => (item=mariadb) 2025-05-31 19:49:15.637042 | orchestrator | changed: [testbed-manager] => (item=memcached) 2025-05-31 19:49:15.637098 | orchestrator | changed: [testbed-manager] => (item=neutron) 2025-05-31 19:49:15.638259 | orchestrator | changed: [testbed-manager] => (item=nova) 2025-05-31 19:49:15.639233 | orchestrator | changed: [testbed-manager] => (item=octavia) 2025-05-31 19:49:15.639616 | orchestrator | changed: [testbed-manager] => (item=opensearch) 2025-05-31 19:49:15.640062 | orchestrator | changed: [testbed-manager] => (item=openvswitch) 2025-05-31 19:49:15.640822 | orchestrator | changed: [testbed-manager] => (item=ovn) 2025-05-31 19:49:15.641756 | orchestrator | changed: [testbed-manager] => (item=placement) 2025-05-31 19:49:15.642226 | orchestrator | changed: [testbed-manager] => (item=rabbitmq) 2025-05-31 19:49:15.642971 | orchestrator | changed: [testbed-manager] => (item=redis) 2025-05-31 19:49:15.643787 | orchestrator | changed: [testbed-manager] => (item=skyline) 2025-05-31 19:49:15.644162 | orchestrator | 2025-05-31 19:49:15.644569 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-31 19:49:15.646890 | orchestrator | 2025-05-31 19:49:15 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-31 19:49:15.647669 | orchestrator | 2025-05-31 19:49:15 | INFO  | Please wait and do not abort execution. 2025-05-31 19:49:15.649741 | orchestrator | testbed-manager : ok=2  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-31 19:49:15.650365 | orchestrator | 2025-05-31 19:49:15.651451 | orchestrator | 2025-05-31 19:49:15.652846 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-31 19:49:15.653930 | orchestrator | Saturday 31 May 2025 19:49:15 +0000 (0:00:51.348) 0:02:01.312 ********** 2025-05-31 19:49:15.655123 | orchestrator | =============================================================================== 2025-05-31 19:49:15.655776 | orchestrator | Pull keystone image ---------------------------------------------------- 69.82s 2025-05-31 19:49:15.656449 | orchestrator | Pull other images ------------------------------------------------------ 51.35s 2025-05-31 19:49:17.858364 | orchestrator | 2025-05-31 19:49:17 | INFO  | Trying to run play wipe-partitions in environment custom 2025-05-31 19:49:17.862844 | orchestrator | Registering Redlock._acquired_script 2025-05-31 19:49:17.862900 | orchestrator | Registering Redlock._extend_script 2025-05-31 19:49:17.862913 | orchestrator | Registering Redlock._release_script 2025-05-31 19:49:17.928067 | orchestrator | 2025-05-31 19:49:17 | INFO  | Task f23d36bb-1697-44b6-ae5b-6d3eb84e3ce4 (wipe-partitions) was prepared for execution. 2025-05-31 19:49:17.928175 | orchestrator | 2025-05-31 19:49:17 | INFO  | It takes a moment until task f23d36bb-1697-44b6-ae5b-6d3eb84e3ce4 (wipe-partitions) has been started and output is visible here. 
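The pull-images play above front-loads every Kolla service image onto the manager (keystone alone took about 70 s) so that later deploy steps hit a warm local cache instead of the network. The shell equivalent of that warm-up, with a placeholder registry since the real image source is not shown in the log:

    REGISTRY=registry.example.com/kolla    # placeholder, not the testbed's registry
    TAG=2024.2                             # matches OPENSTACK_VERSION above
    for image in keystone aodh barbican cinder glance neutron nova rabbitmq; do
        docker pull "$REGISTRY/$image:$TAG"
    done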
2025-05-31 19:49:21.920806 | orchestrator | 2025-05-31 19:49:21.921096 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2025-05-31 19:49:21.922449 | orchestrator | 2025-05-31 19:49:21.924841 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2025-05-31 19:49:21.925480 | orchestrator | Saturday 31 May 2025 19:49:21 +0000 (0:00:00.132) 0:00:00.132 ********** 2025-05-31 19:49:22.488842 | orchestrator | changed: [testbed-node-3] 2025-05-31 19:49:22.488954 | orchestrator | changed: [testbed-node-4] 2025-05-31 19:49:22.488969 | orchestrator | changed: [testbed-node-5] 2025-05-31 19:49:22.489201 | orchestrator | 2025-05-31 19:49:22.489530 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2025-05-31 19:49:22.489787 | orchestrator | Saturday 31 May 2025 19:49:22 +0000 (0:00:00.569) 0:00:00.701 ********** 2025-05-31 19:49:22.633695 | orchestrator | skipping: [testbed-node-3] 2025-05-31 19:49:22.724649 | orchestrator | skipping: [testbed-node-4] 2025-05-31 19:49:22.724752 | orchestrator | skipping: [testbed-node-5] 2025-05-31 19:49:22.724881 | orchestrator | 2025-05-31 19:49:22.725280 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2025-05-31 19:49:22.725575 | orchestrator | Saturday 31 May 2025 19:49:22 +0000 (0:00:00.236) 0:00:00.937 ********** 2025-05-31 19:49:23.524290 | orchestrator | ok: [testbed-node-5] 2025-05-31 19:49:23.525301 | orchestrator | ok: [testbed-node-3] 2025-05-31 19:49:23.525354 | orchestrator | ok: [testbed-node-4] 2025-05-31 19:49:23.525394 | orchestrator | 2025-05-31 19:49:23.525408 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2025-05-31 19:49:23.526528 | orchestrator | Saturday 31 May 2025 19:49:23 +0000 (0:00:00.798) 0:00:01.736 ********** 2025-05-31 19:49:23.684356 | orchestrator | skipping: [testbed-node-3] 2025-05-31 19:49:23.786229 | orchestrator | skipping: [testbed-node-4] 2025-05-31 19:49:23.786371 | orchestrator | skipping: [testbed-node-5] 2025-05-31 19:49:23.786385 | orchestrator | 2025-05-31 19:49:23.786452 | orchestrator | TASK [Check device availability] *********************************************** 2025-05-31 19:49:23.787665 | orchestrator | Saturday 31 May 2025 19:49:23 +0000 (0:00:00.264) 0:00:02.000 ********** 2025-05-31 19:49:24.978288 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-05-31 19:49:24.978409 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-05-31 19:49:24.980377 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-05-31 19:49:24.981472 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-05-31 19:49:24.981746 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-05-31 19:49:24.982160 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-05-31 19:49:24.982271 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-05-31 19:49:24.982739 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-05-31 19:49:24.983036 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-05-31 19:49:24.983297 | orchestrator | 2025-05-31 19:49:24.983835 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2025-05-31 19:49:24.984174 | orchestrator | Saturday 31 May 2025 19:49:24 +0000 (0:00:01.190) 0:00:03.191 ********** 2025-05-31 19:49:26.401866 | 
orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2025-05-31 19:49:26.405562 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2025-05-31 19:49:26.405897 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2025-05-31 19:49:26.406638 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2025-05-31 19:49:26.409728 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2025-05-31 19:49:26.410276 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc) 2025-05-31 19:49:26.410594 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2025-05-31 19:49:26.410887 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2025-05-31 19:49:26.412028 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2025-05-31 19:49:26.412045 | orchestrator | 2025-05-31 19:49:26.412252 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2025-05-31 19:49:26.412812 | orchestrator | Saturday 31 May 2025 19:49:26 +0000 (0:00:01.418) 0:00:04.609 ********** 2025-05-31 19:49:28.835867 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2025-05-31 19:49:28.836703 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2025-05-31 19:49:28.837598 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2025-05-31 19:49:28.838650 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2025-05-31 19:49:28.842193 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2025-05-31 19:49:28.845033 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2025-05-31 19:49:28.845196 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2025-05-31 19:49:28.845269 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2025-05-31 19:49:28.846094 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2025-05-31 19:49:28.847833 | orchestrator | 2025-05-31 19:49:28.848580 | orchestrator | TASK [Reload udev rules] ******************************************************* 2025-05-31 19:49:28.849441 | orchestrator | Saturday 31 May 2025 19:49:28 +0000 (0:00:02.440) 0:00:07.049 ********** 2025-05-31 19:49:29.421125 | orchestrator | changed: [testbed-node-3] 2025-05-31 19:49:29.422124 | orchestrator | changed: [testbed-node-4] 2025-05-31 19:49:29.422221 | orchestrator | changed: [testbed-node-5] 2025-05-31 19:49:29.422441 | orchestrator | 2025-05-31 19:49:29.422801 | orchestrator | TASK [Request device events from the kernel] *********************************** 2025-05-31 19:49:29.423039 | orchestrator | Saturday 31 May 2025 19:49:29 +0000 (0:00:00.584) 0:00:07.633 ********** 2025-05-31 19:49:29.982394 | orchestrator | changed: [testbed-node-3] 2025-05-31 19:49:29.982495 | orchestrator | changed: [testbed-node-4] 2025-05-31 19:49:29.983016 | orchestrator | changed: [testbed-node-5] 2025-05-31 19:49:29.983686 | orchestrator | 2025-05-31 19:49:29.986411 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-31 19:49:29.986576 | orchestrator | 2025-05-31 19:49:29 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-31 19:49:29.986601 | orchestrator | 2025-05-31 19:49:29 | INFO  | Please wait and do not abort execution. 
2025-05-31 19:49:29.987363 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-31 19:49:29.988306 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-31 19:49:29.989261 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-31 19:49:29.990191 | orchestrator | 2025-05-31 19:49:29.990933 | orchestrator | 2025-05-31 19:49:29.991206 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-31 19:49:29.991694 | orchestrator | Saturday 31 May 2025 19:49:29 +0000 (0:00:00.560) 0:00:08.193 ********** 2025-05-31 19:49:29.996692 | orchestrator | =============================================================================== 2025-05-31 19:49:29.996749 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.44s 2025-05-31 19:49:29.996770 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.42s 2025-05-31 19:49:29.996790 | orchestrator | Check device availability ----------------------------------------------- 1.19s 2025-05-31 19:49:29.996810 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.80s 2025-05-31 19:49:29.996832 | orchestrator | Reload udev rules ------------------------------------------------------- 0.58s 2025-05-31 19:49:29.997304 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.57s 2025-05-31 19:49:29.997447 | orchestrator | Request device events from the kernel ----------------------------------- 0.56s 2025-05-31 19:49:29.997615 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.26s 2025-05-31 19:49:29.998259 | orchestrator | Remove all rook related logical devices --------------------------------- 0.24s 2025-05-31 19:49:31.926475 | orchestrator | Registering Redlock._acquired_script 2025-05-31 19:49:31.926637 | orchestrator | Registering Redlock._extend_script 2025-05-31 19:49:31.926653 | orchestrator | Registering Redlock._release_script 2025-05-31 19:49:31.984878 | orchestrator | 2025-05-31 19:49:31 | INFO  | Task 51be8e38-6907-4c4e-8b31-9b216fd2e954 (facts) was prepared for execution. 2025-05-31 19:49:31.984995 | orchestrator | 2025-05-31 19:49:31 | INFO  | It takes a moment until task 51be8e38-6907-4c4e-8b31-9b216fd2e954 (facts) has been started and output is visible here. 
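Per disk, the wipe-partitions play above boils down to four commands. The device list and the 32 MiB figure come straight from the task output; the exact Ansible module invocations are not shown, so this is the plain-shell equivalent:

    for dev in /dev/sdb /dev/sdc /dev/sdd; do
        wipefs --all "$dev"                        # "Wipe partitions with wipefs"
        dd if=/dev/zero of="$dev" bs=1M count=32   # "Overwrite first 32M with zeros"
    done
    udevadm control --reload-rules                 # "Reload udev rules"
    udevadm trigger                                # "Request device events from the kernel"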
2025-05-31 19:49:35.631982 | orchestrator | 2025-05-31 19:49:35.632130 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-05-31 19:49:35.632151 | orchestrator | 2025-05-31 19:49:35.632234 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-05-31 19:49:35.632251 | orchestrator | Saturday 31 May 2025 19:49:35 +0000 (0:00:00.229) 0:00:00.229 ********** 2025-05-31 19:49:36.603179 | orchestrator | ok: [testbed-manager] 2025-05-31 19:49:36.606266 | orchestrator | ok: [testbed-node-0] 2025-05-31 19:49:36.606316 | orchestrator | ok: [testbed-node-1] 2025-05-31 19:49:36.606329 | orchestrator | ok: [testbed-node-2] 2025-05-31 19:49:36.607980 | orchestrator | ok: [testbed-node-3] 2025-05-31 19:49:36.609897 | orchestrator | ok: [testbed-node-4] 2025-05-31 19:49:36.610613 | orchestrator | ok: [testbed-node-5] 2025-05-31 19:49:36.611674 | orchestrator | 2025-05-31 19:49:36.612968 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-05-31 19:49:36.614075 | orchestrator | Saturday 31 May 2025 19:49:36 +0000 (0:00:00.968) 0:00:01.197 ********** 2025-05-31 19:49:36.748204 | orchestrator | skipping: [testbed-manager] 2025-05-31 19:49:36.817179 | orchestrator | skipping: [testbed-node-0] 2025-05-31 19:49:36.888078 | orchestrator | skipping: [testbed-node-1] 2025-05-31 19:49:36.959924 | orchestrator | skipping: [testbed-node-2] 2025-05-31 19:49:37.029837 | orchestrator | skipping: [testbed-node-3] 2025-05-31 19:49:37.669363 | orchestrator | skipping: [testbed-node-4] 2025-05-31 19:49:37.670081 | orchestrator | skipping: [testbed-node-5] 2025-05-31 19:49:37.672615 | orchestrator | 2025-05-31 19:49:37.673263 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-05-31 19:49:37.673935 | orchestrator | 2025-05-31 19:49:37.674575 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-05-31 19:49:37.677456 | orchestrator | Saturday 31 May 2025 19:49:37 +0000 (0:00:01.062) 0:00:02.260 ********** 2025-05-31 19:49:43.018768 | orchestrator | ok: [testbed-node-0] 2025-05-31 19:49:43.018855 | orchestrator | ok: [testbed-node-1] 2025-05-31 19:49:43.018862 | orchestrator | ok: [testbed-node-2] 2025-05-31 19:49:43.018869 | orchestrator | ok: [testbed-manager] 2025-05-31 19:49:43.019149 | orchestrator | ok: [testbed-node-3] 2025-05-31 19:49:43.019885 | orchestrator | ok: [testbed-node-5] 2025-05-31 19:49:43.020146 | orchestrator | ok: [testbed-node-4] 2025-05-31 19:49:43.021248 | orchestrator | 2025-05-31 19:49:43.022346 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-05-31 19:49:43.023386 | orchestrator | 2025-05-31 19:49:43.024899 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-05-31 19:49:43.024972 | orchestrator | Saturday 31 May 2025 19:49:43 +0000 (0:00:05.357) 0:00:07.617 ********** 2025-05-31 19:49:43.386641 | orchestrator | skipping: [testbed-manager] 2025-05-31 19:49:43.463998 | orchestrator | skipping: [testbed-node-0] 2025-05-31 19:49:43.542454 | orchestrator | skipping: [testbed-node-1] 2025-05-31 19:49:43.619465 | orchestrator | skipping: [testbed-node-2] 2025-05-31 19:49:43.692995 | orchestrator | skipping: [testbed-node-3] 2025-05-31 19:49:43.738913 | orchestrator | skipping: [testbed-node-4] 2025-05-31 19:49:43.740135 | orchestrator | skipping: 
[testbed-node-5] 2025-05-31 19:49:43.741738 | orchestrator | 2025-05-31 19:49:43.743179 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-31 19:49:43.743663 | orchestrator | 2025-05-31 19:49:43 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-31 19:49:43.744654 | orchestrator | 2025-05-31 19:49:43 | INFO  | Please wait and do not abort execution. 2025-05-31 19:49:43.746586 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-31 19:49:43.747685 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-31 19:49:43.748924 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-31 19:49:43.750069 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-31 19:49:43.751212 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-31 19:49:43.751917 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-31 19:49:43.753872 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-31 19:49:43.753909 | orchestrator | 2025-05-31 19:49:43.754631 | orchestrator | 2025-05-31 19:49:43.755103 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-31 19:49:43.755970 | orchestrator | Saturday 31 May 2025 19:49:43 +0000 (0:00:00.719) 0:00:08.337 ********** 2025-05-31 19:49:43.756172 | orchestrator | =============================================================================== 2025-05-31 19:49:43.756601 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.36s 2025-05-31 19:49:43.757013 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.06s 2025-05-31 19:49:43.757450 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 0.97s 2025-05-31 19:49:43.758001 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.72s 2025-05-31 19:49:46.131581 | orchestrator | 2025-05-31 19:49:46 | INFO  | Task b45d4d7d-5a36-4c60-ac3f-c3494682dd53 (ceph-configure-lvm-volumes) was prepared for execution. 2025-05-31 19:49:46.131691 | orchestrator | 2025-05-31 19:49:46 | INFO  | It takes a moment until task b45d4d7d-5a36-4c60-ac3f-c3494682dd53 (ceph-configure-lvm-volumes) has been started and output is visible here. 
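The facts play above prepares Ansible's local-facts mechanism: osism.commons.facts creates the custom facts directory, and any *.fact file placed there is exposed under ansible_local on the next fact gathering. A sketch using Ansible's stock path, with a made-up file name and content:

    sudo mkdir -p /etc/ansible/facts.d
    printf '{"ceph_stack": "ceph-ansible"}\n' \
        | sudo tee /etc/ansible/facts.d/testbed.fact
    # On the next run, tasks can read ansible_local.testbed.ceph_stack.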
2025-05-31 19:49:50.559241 | orchestrator | 2025-05-31 19:49:50.563431 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-05-31 19:49:50.563469 | orchestrator | 2025-05-31 19:49:50.568285 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-05-31 19:49:50.569181 | orchestrator | Saturday 31 May 2025 19:49:50 +0000 (0:00:00.331) 0:00:00.331 ********** 2025-05-31 19:49:50.850434 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-05-31 19:49:50.850628 | orchestrator | 2025-05-31 19:49:50.851032 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-05-31 19:49:50.851412 | orchestrator | Saturday 31 May 2025 19:49:50 +0000 (0:00:00.295) 0:00:00.626 ********** 2025-05-31 19:49:51.088630 | orchestrator | ok: [testbed-node-3] 2025-05-31 19:49:51.088717 | orchestrator | 2025-05-31 19:49:51.089807 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 19:49:51.090984 | orchestrator | Saturday 31 May 2025 19:49:51 +0000 (0:00:00.233) 0:00:00.860 ********** 2025-05-31 19:49:51.443705 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-05-31 19:49:51.446559 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-05-31 19:49:51.448618 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-05-31 19:49:51.449977 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-05-31 19:49:51.452369 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-05-31 19:49:51.454114 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-05-31 19:49:51.455562 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-05-31 19:49:51.457426 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-05-31 19:49:51.458842 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-05-31 19:49:51.459698 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-05-31 19:49:51.460726 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-05-31 19:49:51.461593 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-05-31 19:49:51.463680 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-05-31 19:49:51.464653 | orchestrator | 2025-05-31 19:49:51.466138 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 19:49:51.466169 | orchestrator | Saturday 31 May 2025 19:49:51 +0000 (0:00:00.357) 0:00:01.218 ********** 2025-05-31 19:49:52.017600 | orchestrator | skipping: [testbed-node-3] 2025-05-31 19:49:52.017715 | orchestrator | 2025-05-31 19:49:52.017731 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 19:49:52.018113 | orchestrator | Saturday 31 May 2025 19:49:52 +0000 (0:00:00.575) 0:00:01.793 ********** 2025-05-31 19:49:52.169615 | orchestrator | skipping: [testbed-node-3] 2025-05-31 
19:49:52.170445 | orchestrator | 2025-05-31 19:49:52.173054 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 19:49:52.173328 | orchestrator | Saturday 31 May 2025 19:49:52 +0000 (0:00:00.153) 0:00:01.946 ********** 2025-05-31 19:49:52.352148 | orchestrator | skipping: [testbed-node-3] 2025-05-31 19:49:52.352844 | orchestrator | 2025-05-31 19:49:52.353453 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 19:49:52.358593 | orchestrator | Saturday 31 May 2025 19:49:52 +0000 (0:00:00.182) 0:00:02.129 ********** 2025-05-31 19:49:52.536596 | orchestrator | skipping: [testbed-node-3] 2025-05-31 19:49:52.536681 | orchestrator | 2025-05-31 19:49:52.536689 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 19:49:52.536696 | orchestrator | Saturday 31 May 2025 19:49:52 +0000 (0:00:00.184) 0:00:02.313 ********** 2025-05-31 19:49:52.694297 | orchestrator | skipping: [testbed-node-3] 2025-05-31 19:49:52.695202 | orchestrator | 2025-05-31 19:49:52.695233 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 19:49:52.695247 | orchestrator | Saturday 31 May 2025 19:49:52 +0000 (0:00:00.158) 0:00:02.472 ********** 2025-05-31 19:49:52.851882 | orchestrator | skipping: [testbed-node-3] 2025-05-31 19:49:52.852193 | orchestrator | 2025-05-31 19:49:52.853289 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 19:49:52.854262 | orchestrator | Saturday 31 May 2025 19:49:52 +0000 (0:00:00.156) 0:00:02.629 ********** 2025-05-31 19:49:53.013241 | orchestrator | skipping: [testbed-node-3] 2025-05-31 19:49:53.013433 | orchestrator | 2025-05-31 19:49:53.014197 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 19:49:53.014336 | orchestrator | Saturday 31 May 2025 19:49:53 +0000 (0:00:00.160) 0:00:02.789 ********** 2025-05-31 19:49:53.189243 | orchestrator | skipping: [testbed-node-3] 2025-05-31 19:49:53.189872 | orchestrator | 2025-05-31 19:49:53.190272 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 19:49:53.194114 | orchestrator | Saturday 31 May 2025 19:49:53 +0000 (0:00:00.177) 0:00:02.966 ********** 2025-05-31 19:49:53.558825 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_8bbc83dc-8efd-4101-bb1b-0d6e4523fdf4) 2025-05-31 19:49:53.558963 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_8bbc83dc-8efd-4101-bb1b-0d6e4523fdf4) 2025-05-31 19:49:53.559116 | orchestrator | 2025-05-31 19:49:53.559136 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 19:49:53.562149 | orchestrator | Saturday 31 May 2025 19:49:53 +0000 (0:00:00.363) 0:00:03.330 ********** 2025-05-31 19:49:53.921585 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_191d8892-ecee-415a-8f71-2d93b7558573) 2025-05-31 19:49:53.922205 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_191d8892-ecee-415a-8f71-2d93b7558573) 2025-05-31 19:49:53.923204 | orchestrator | 2025-05-31 19:49:53.924145 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 19:49:53.927052 | orchestrator | Saturday 31 May 2025 19:49:53 +0000 (0:00:00.368) 0:00:03.699 ********** 2025-05-31 
19:49:54.463903 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_fb66f732-34d2-45e3-b1b8-d9ba2a3ac758) 2025-05-31 19:49:54.464794 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_fb66f732-34d2-45e3-b1b8-d9ba2a3ac758) 2025-05-31 19:49:54.465025 | orchestrator | 2025-05-31 19:49:54.465362 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 19:49:54.465730 | orchestrator | Saturday 31 May 2025 19:49:54 +0000 (0:00:00.540) 0:00:04.239 ********** 2025-05-31 19:49:55.000062 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_5a7b16a5-b25a-49dc-b8e1-bfe6cbb00610) 2025-05-31 19:49:55.000659 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_5a7b16a5-b25a-49dc-b8e1-bfe6cbb00610) 2025-05-31 19:49:55.001739 | orchestrator | 2025-05-31 19:49:55.004024 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 19:49:55.004115 | orchestrator | Saturday 31 May 2025 19:49:54 +0000 (0:00:00.537) 0:00:04.777 ********** 2025-05-31 19:49:55.566579 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-31 19:49:55.567531 | orchestrator | 2025-05-31 19:49:55.567572 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 19:49:55.567691 | orchestrator | Saturday 31 May 2025 19:49:55 +0000 (0:00:00.566) 0:00:05.343 ********** 2025-05-31 19:49:55.916800 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-05-31 19:49:55.916981 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-05-31 19:49:55.917674 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-05-31 19:49:55.920089 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-05-31 19:49:55.920150 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-05-31 19:49:55.920164 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-05-31 19:49:55.920222 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-05-31 19:49:55.920687 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-05-31 19:49:55.921156 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-05-31 19:49:55.921606 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-05-31 19:49:55.922092 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-05-31 19:49:55.922462 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-05-31 19:49:55.923067 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-05-31 19:49:55.923414 | orchestrator | 2025-05-31 19:49:55.923948 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 19:49:55.924250 | orchestrator | Saturday 31 May 2025 19:49:55 +0000 (0:00:00.347) 0:00:05.691 ********** 2025-05-31 19:49:56.097317 | orchestrator | skipping: [testbed-node-3] 
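The long "Add known links" sweep above maps each kernel device name (sdb, sdc, sdd, ...) to its stable aliases under /dev/disk/by-id, so the generated Ceph configuration survives device reordering across reboots. The same lookup done by hand, here for sdb:

    for link in /dev/disk/by-id/*; do
        [ "$(readlink -f "$link")" = "/dev/sdb" ] && echo "$link"
    done
    # prints the scsi-0QEMU_QEMU_HARDDISK_... aliases seen in the task output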
2025-05-31 19:49:56.097573 | orchestrator | 2025-05-31 19:49:56.100182 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 19:49:56.100720 | orchestrator | Saturday 31 May 2025 19:49:56 +0000 (0:00:00.180) 0:00:05.871 ********** 2025-05-31 19:49:56.279918 | orchestrator | skipping: [testbed-node-3] 2025-05-31 19:49:56.280673 | orchestrator | 2025-05-31 19:49:56.284347 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 19:49:56.285235 | orchestrator | Saturday 31 May 2025 19:49:56 +0000 (0:00:00.184) 0:00:06.056 ********** 2025-05-31 19:49:56.459107 | orchestrator | skipping: [testbed-node-3] 2025-05-31 19:49:56.459702 | orchestrator | 2025-05-31 19:49:56.463231 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 19:49:56.464899 | orchestrator | Saturday 31 May 2025 19:49:56 +0000 (0:00:00.179) 0:00:06.236 ********** 2025-05-31 19:49:56.658389 | orchestrator | skipping: [testbed-node-3] 2025-05-31 19:49:56.664106 | orchestrator | 2025-05-31 19:49:56.664294 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 19:49:56.665921 | orchestrator | Saturday 31 May 2025 19:49:56 +0000 (0:00:00.199) 0:00:06.435 ********** 2025-05-31 19:49:56.849879 | orchestrator | skipping: [testbed-node-3] 2025-05-31 19:49:56.850594 | orchestrator | 2025-05-31 19:49:56.853001 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 19:49:56.854398 | orchestrator | Saturday 31 May 2025 19:49:56 +0000 (0:00:00.190) 0:00:06.625 ********** 2025-05-31 19:49:57.035092 | orchestrator | skipping: [testbed-node-3] 2025-05-31 19:49:57.036695 | orchestrator | 2025-05-31 19:49:57.038661 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 19:49:57.038944 | orchestrator | Saturday 31 May 2025 19:49:57 +0000 (0:00:00.185) 0:00:06.811 ********** 2025-05-31 19:49:57.229359 | orchestrator | skipping: [testbed-node-3] 2025-05-31 19:49:57.229493 | orchestrator | 2025-05-31 19:49:57.229627 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 19:49:57.230332 | orchestrator | Saturday 31 May 2025 19:49:57 +0000 (0:00:00.193) 0:00:07.004 ********** 2025-05-31 19:49:57.461893 | orchestrator | skipping: [testbed-node-3] 2025-05-31 19:49:57.462283 | orchestrator | 2025-05-31 19:49:57.462983 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 19:49:57.464472 | orchestrator | Saturday 31 May 2025 19:49:57 +0000 (0:00:00.232) 0:00:07.237 ********** 2025-05-31 19:49:58.487331 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-05-31 19:49:58.489021 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-05-31 19:49:58.489819 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-05-31 19:49:58.491667 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-05-31 19:49:58.492651 | orchestrator | 2025-05-31 19:49:58.493084 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 19:49:58.494225 | orchestrator | Saturday 31 May 2025 19:49:58 +0000 (0:00:01.020) 0:00:08.258 ********** 2025-05-31 19:49:58.712462 | orchestrator | skipping: [testbed-node-3] 2025-05-31 19:49:58.712700 | orchestrator | 2025-05-31 19:49:58.714555 | orchestrator | 
TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 19:49:58.715588 | orchestrator | Saturday 31 May 2025 19:49:58 +0000 (0:00:00.230) 0:00:08.488 ********** 2025-05-31 19:49:58.938107 | orchestrator | skipping: [testbed-node-3] 2025-05-31 19:49:58.939913 | orchestrator | 2025-05-31 19:49:58.942163 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 19:49:58.944158 | orchestrator | Saturday 31 May 2025 19:49:58 +0000 (0:00:00.223) 0:00:08.712 ********** 2025-05-31 19:49:59.137873 | orchestrator | skipping: [testbed-node-3] 2025-05-31 19:49:59.139265 | orchestrator | 2025-05-31 19:49:59.140865 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 19:49:59.142601 | orchestrator | Saturday 31 May 2025 19:49:59 +0000 (0:00:00.197) 0:00:08.909 ********** 2025-05-31 19:49:59.325435 | orchestrator | skipping: [testbed-node-3] 2025-05-31 19:49:59.325597 | orchestrator | 2025-05-31 19:49:59.326086 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-05-31 19:49:59.328269 | orchestrator | Saturday 31 May 2025 19:49:59 +0000 (0:00:00.189) 0:00:09.099 ********** 2025-05-31 19:49:59.502281 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2025-05-31 19:49:59.502420 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2025-05-31 19:49:59.502937 | orchestrator | 2025-05-31 19:49:59.504085 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-05-31 19:49:59.505095 | orchestrator | Saturday 31 May 2025 19:49:59 +0000 (0:00:00.178) 0:00:09.277 ********** 2025-05-31 19:49:59.635574 | orchestrator | skipping: [testbed-node-3] 2025-05-31 19:49:59.635683 | orchestrator | 2025-05-31 19:49:59.635838 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-05-31 19:49:59.636865 | orchestrator | Saturday 31 May 2025 19:49:59 +0000 (0:00:00.133) 0:00:09.410 ********** 2025-05-31 19:49:59.771796 | orchestrator | skipping: [testbed-node-3] 2025-05-31 19:49:59.773560 | orchestrator | 2025-05-31 19:49:59.777501 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-05-31 19:49:59.777625 | orchestrator | Saturday 31 May 2025 19:49:59 +0000 (0:00:00.135) 0:00:09.546 ********** 2025-05-31 19:49:59.908938 | orchestrator | skipping: [testbed-node-3] 2025-05-31 19:49:59.910091 | orchestrator | 2025-05-31 19:49:59.911372 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-05-31 19:49:59.912122 | orchestrator | Saturday 31 May 2025 19:49:59 +0000 (0:00:00.137) 0:00:09.683 ********** 2025-05-31 19:50:00.039683 | orchestrator | ok: [testbed-node-3] 2025-05-31 19:50:00.040942 | orchestrator | 2025-05-31 19:50:00.041893 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-05-31 19:50:00.042853 | orchestrator | Saturday 31 May 2025 19:50:00 +0000 (0:00:00.131) 0:00:09.815 ********** 2025-05-31 19:50:00.211126 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '813d0644-8ada-5e52-b3d8-7484365c4567'}}) 2025-05-31 19:50:00.212961 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b37e5891-99ec-5ce8-8fa7-674876c21edd'}}) 2025-05-31 19:50:00.215014 | orchestrator | 
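Note the version nibble in the UUIDs printed here (813d0644-8ada-5e52-..., b37e5891-99ec-5ce8-...): these are version-5, i.e. name-based UUIDs, deterministic for a given input rather than freshly random on every run. Ansible's built-in to_uuid filter yields exactly this kind of value, and each UUID is then expanded into one lvm_volumes entry of the form data: osd-block-<uuid> / data_vg: ceph-<uuid>, matching the configuration data printed further below. A sketch of how this could be computed (the string fed to to_uuid is a guess, not necessarily OSISM's actual scheme):

    # Deterministic per-device UUIDs; to_uuid derives a UUIDv5 from a string.
    - name: Set UUIDs for OSD VGs/LVs
      ansible.builtin.set_fact:
        ceph_osd_devices: >-
          {{ ceph_osd_devices | combine({item.key:
             {'osd_lvm_uuid': (inventory_hostname ~ '-' ~ item.key) | to_uuid}}) }}
      loop: "{{ ceph_osd_devices | dict2items }}"

    # One lvm_volumes entry per OSD device, derived from its UUID.
    - name: Generate lvm_volumes structure (block only)
      ansible.builtin.set_fact:
        lvm_volumes: >-
          {{ lvm_volumes | default([]) + [{
             'data': 'osd-block-' ~ item.value.osd_lvm_uuid,
             'data_vg': 'ceph-' ~ item.value.osd_lvm_uuid }] }}
      loop: "{{ ceph_osd_devices | dict2items }}"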
2025-05-31 19:50:00.216844 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-05-31 19:50:00.218356 | orchestrator | Saturday 31 May 2025 19:50:00 +0000 (0:00:00.166) 0:00:09.981 ********** 2025-05-31 19:50:00.414410 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '813d0644-8ada-5e52-b3d8-7484365c4567'}})  2025-05-31 19:50:00.416395 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b37e5891-99ec-5ce8-8fa7-674876c21edd'}})  2025-05-31 19:50:00.426185 | orchestrator | skipping: [testbed-node-3] 2025-05-31 19:50:00.426919 | orchestrator | 2025-05-31 19:50:00.427798 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-05-31 19:50:00.429177 | orchestrator | Saturday 31 May 2025 19:50:00 +0000 (0:00:00.198) 0:00:10.180 ********** 2025-05-31 19:50:00.994013 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '813d0644-8ada-5e52-b3d8-7484365c4567'}})  2025-05-31 19:50:00.996266 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b37e5891-99ec-5ce8-8fa7-674876c21edd'}})  2025-05-31 19:50:00.997054 | orchestrator | skipping: [testbed-node-3] 2025-05-31 19:50:00.998560 | orchestrator | 2025-05-31 19:50:00.998812 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-05-31 19:50:01.000419 | orchestrator | Saturday 31 May 2025 19:50:00 +0000 (0:00:00.584) 0:00:10.765 ********** 2025-05-31 19:50:01.177240 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '813d0644-8ada-5e52-b3d8-7484365c4567'}})  2025-05-31 19:50:01.177476 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b37e5891-99ec-5ce8-8fa7-674876c21edd'}})  2025-05-31 19:50:01.177505 | orchestrator | skipping: [testbed-node-3] 2025-05-31 19:50:01.177604 | orchestrator | 2025-05-31 19:50:01.177661 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-05-31 19:50:01.177946 | orchestrator | Saturday 31 May 2025 19:50:01 +0000 (0:00:00.183) 0:00:10.949 ********** 2025-05-31 19:50:01.339616 | orchestrator | ok: [testbed-node-3] 2025-05-31 19:50:01.339725 | orchestrator | 2025-05-31 19:50:01.340179 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-05-31 19:50:01.341139 | orchestrator | Saturday 31 May 2025 19:50:01 +0000 (0:00:00.162) 0:00:11.111 ********** 2025-05-31 19:50:01.489174 | orchestrator | ok: [testbed-node-3] 2025-05-31 19:50:01.491339 | orchestrator | 2025-05-31 19:50:01.491689 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-05-31 19:50:01.492601 | orchestrator | Saturday 31 May 2025 19:50:01 +0000 (0:00:00.149) 0:00:11.261 ********** 2025-05-31 19:50:01.649160 | orchestrator | skipping: [testbed-node-3] 2025-05-31 19:50:01.651122 | orchestrator | 2025-05-31 19:50:01.655182 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-05-31 19:50:01.655245 | orchestrator | Saturday 31 May 2025 19:50:01 +0000 (0:00:00.158) 0:00:11.419 ********** 2025-05-31 19:50:01.792769 | orchestrator | skipping: [testbed-node-3] 2025-05-31 19:50:01.793814 | orchestrator | 2025-05-31 19:50:01.795545 | orchestrator | TASK [Set DB+WAL devices config data] 
****************************************** 2025-05-31 19:50:01.798117 | orchestrator | Saturday 31 May 2025 19:50:01 +0000 (0:00:00.148) 0:00:11.568 ********** 2025-05-31 19:50:01.920341 | orchestrator | skipping: [testbed-node-3] 2025-05-31 19:50:01.920837 | orchestrator | 2025-05-31 19:50:01.921508 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-05-31 19:50:01.921949 | orchestrator | Saturday 31 May 2025 19:50:01 +0000 (0:00:00.127) 0:00:11.695 ********** 2025-05-31 19:50:02.108559 | orchestrator | ok: [testbed-node-3] => { 2025-05-31 19:50:02.108736 | orchestrator |  "ceph_osd_devices": { 2025-05-31 19:50:02.109630 | orchestrator |  "sdb": { 2025-05-31 19:50:02.110445 | orchestrator |  "osd_lvm_uuid": "813d0644-8ada-5e52-b3d8-7484365c4567" 2025-05-31 19:50:02.112406 | orchestrator |  }, 2025-05-31 19:50:02.114652 | orchestrator |  "sdc": { 2025-05-31 19:50:02.115007 | orchestrator |  "osd_lvm_uuid": "b37e5891-99ec-5ce8-8fa7-674876c21edd" 2025-05-31 19:50:02.117718 | orchestrator |  } 2025-05-31 19:50:02.118005 | orchestrator |  } 2025-05-31 19:50:02.118704 | orchestrator | } 2025-05-31 19:50:02.119549 | orchestrator | 2025-05-31 19:50:02.119624 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-05-31 19:50:02.120385 | orchestrator | Saturday 31 May 2025 19:50:02 +0000 (0:00:00.185) 0:00:11.881 ********** 2025-05-31 19:50:02.243485 | orchestrator | skipping: [testbed-node-3] 2025-05-31 19:50:02.244964 | orchestrator | 2025-05-31 19:50:02.246221 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-05-31 19:50:02.247256 | orchestrator | Saturday 31 May 2025 19:50:02 +0000 (0:00:00.137) 0:00:12.018 ********** 2025-05-31 19:50:02.362824 | orchestrator | skipping: [testbed-node-3] 2025-05-31 19:50:02.364046 | orchestrator | 2025-05-31 19:50:02.364648 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-05-31 19:50:02.365400 | orchestrator | Saturday 31 May 2025 19:50:02 +0000 (0:00:00.121) 0:00:12.139 ********** 2025-05-31 19:50:02.503719 | orchestrator | skipping: [testbed-node-3] 2025-05-31 19:50:02.503909 | orchestrator | 2025-05-31 19:50:02.504555 | orchestrator | TASK [Print configuration data] ************************************************ 2025-05-31 19:50:02.505473 | orchestrator | Saturday 31 May 2025 19:50:02 +0000 (0:00:00.140) 0:00:12.279 ********** 2025-05-31 19:50:02.726426 | orchestrator | changed: [testbed-node-3] => { 2025-05-31 19:50:02.726605 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-05-31 19:50:02.726623 | orchestrator |  "ceph_osd_devices": { 2025-05-31 19:50:02.727066 | orchestrator |  "sdb": { 2025-05-31 19:50:02.728266 | orchestrator |  "osd_lvm_uuid": "813d0644-8ada-5e52-b3d8-7484365c4567" 2025-05-31 19:50:02.728648 | orchestrator |  }, 2025-05-31 19:50:02.731028 | orchestrator |  "sdc": { 2025-05-31 19:50:02.731055 | orchestrator |  "osd_lvm_uuid": "b37e5891-99ec-5ce8-8fa7-674876c21edd" 2025-05-31 19:50:02.731091 | orchestrator |  } 2025-05-31 19:50:02.733981 | orchestrator |  }, 2025-05-31 19:50:02.734749 | orchestrator |  "lvm_volumes": [ 2025-05-31 19:50:02.736558 | orchestrator |  { 2025-05-31 19:50:02.737010 | orchestrator |  "data": "osd-block-813d0644-8ada-5e52-b3d8-7484365c4567", 2025-05-31 19:50:02.738177 | orchestrator |  "data_vg": "ceph-813d0644-8ada-5e52-b3d8-7484365c4567" 2025-05-31 19:50:02.738221 | orchestrator |  }, 2025-05-31 
19:50:02.739005 | orchestrator |  { 2025-05-31 19:50:02.741440 | orchestrator |  "data": "osd-block-b37e5891-99ec-5ce8-8fa7-674876c21edd", 2025-05-31 19:50:02.742075 | orchestrator |  "data_vg": "ceph-b37e5891-99ec-5ce8-8fa7-674876c21edd" 2025-05-31 19:50:02.742306 | orchestrator |  } 2025-05-31 19:50:02.742918 | orchestrator |  ] 2025-05-31 19:50:02.743175 | orchestrator |  } 2025-05-31 19:50:02.743570 | orchestrator | } 2025-05-31 19:50:02.744032 | orchestrator | 2025-05-31 19:50:02.744251 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-05-31 19:50:02.744667 | orchestrator | Saturday 31 May 2025 19:50:02 +0000 (0:00:00.222) 0:00:12.502 ********** 2025-05-31 19:50:05.080383 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-05-31 19:50:05.080503 | orchestrator | 2025-05-31 19:50:05.088542 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-05-31 19:50:05.095256 | orchestrator | 2025-05-31 19:50:05.099313 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-05-31 19:50:05.102266 | orchestrator | Saturday 31 May 2025 19:50:05 +0000 (0:00:02.353) 0:00:14.855 ********** 2025-05-31 19:50:05.391492 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-05-31 19:50:05.391678 | orchestrator | 2025-05-31 19:50:05.391764 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-05-31 19:50:05.392081 | orchestrator | Saturday 31 May 2025 19:50:05 +0000 (0:00:00.311) 0:00:15.167 ********** 2025-05-31 19:50:05.634663 | orchestrator | ok: [testbed-node-4] 2025-05-31 19:50:05.634847 | orchestrator | 2025-05-31 19:50:05.634946 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 19:50:05.636237 | orchestrator | Saturday 31 May 2025 19:50:05 +0000 (0:00:00.242) 0:00:15.409 ********** 2025-05-31 19:50:06.130009 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-05-31 19:50:06.131096 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-05-31 19:50:06.131141 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-05-31 19:50:06.131769 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-05-31 19:50:06.132410 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-05-31 19:50:06.132898 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-05-31 19:50:06.133569 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-05-31 19:50:06.134008 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-05-31 19:50:06.134487 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-05-31 19:50:06.135220 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-05-31 19:50:06.135820 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-05-31 19:50:06.136548 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-05-31 19:50:06.137385 | orchestrator | 
included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-05-31 19:50:06.138084 | orchestrator | 2025-05-31 19:50:06.138783 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 19:50:06.139591 | orchestrator | Saturday 31 May 2025 19:50:06 +0000 (0:00:00.480) 0:00:15.890 ********** 2025-05-31 19:50:06.343890 | orchestrator | skipping: [testbed-node-4] 2025-05-31 19:50:06.344114 | orchestrator | 2025-05-31 19:50:06.344678 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 19:50:06.345649 | orchestrator | Saturday 31 May 2025 19:50:06 +0000 (0:00:00.227) 0:00:16.117 ********** 2025-05-31 19:50:06.540058 | orchestrator | skipping: [testbed-node-4] 2025-05-31 19:50:06.541397 | orchestrator | 2025-05-31 19:50:06.542291 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 19:50:06.543046 | orchestrator | Saturday 31 May 2025 19:50:06 +0000 (0:00:00.197) 0:00:16.315 ********** 2025-05-31 19:50:06.736360 | orchestrator | skipping: [testbed-node-4] 2025-05-31 19:50:06.736595 | orchestrator | 2025-05-31 19:50:06.737999 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 19:50:06.738076 | orchestrator | Saturday 31 May 2025 19:50:06 +0000 (0:00:00.195) 0:00:16.511 ********** 2025-05-31 19:50:06.961300 | orchestrator | skipping: [testbed-node-4] 2025-05-31 19:50:06.961754 | orchestrator | 2025-05-31 19:50:06.963071 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 19:50:06.963104 | orchestrator | Saturday 31 May 2025 19:50:06 +0000 (0:00:00.226) 0:00:16.737 ********** 2025-05-31 19:50:07.590177 | orchestrator | skipping: [testbed-node-4] 2025-05-31 19:50:07.590461 | orchestrator | 2025-05-31 19:50:07.591489 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 19:50:07.592746 | orchestrator | Saturday 31 May 2025 19:50:07 +0000 (0:00:00.628) 0:00:17.365 ********** 2025-05-31 19:50:07.786846 | orchestrator | skipping: [testbed-node-4] 2025-05-31 19:50:07.788352 | orchestrator | 2025-05-31 19:50:07.795033 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 19:50:07.795086 | orchestrator | Saturday 31 May 2025 19:50:07 +0000 (0:00:00.192) 0:00:17.558 ********** 2025-05-31 19:50:07.989870 | orchestrator | skipping: [testbed-node-4] 2025-05-31 19:50:07.991449 | orchestrator | 2025-05-31 19:50:07.992512 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 19:50:07.993737 | orchestrator | Saturday 31 May 2025 19:50:07 +0000 (0:00:00.206) 0:00:17.764 ********** 2025-05-31 19:50:08.193216 | orchestrator | skipping: [testbed-node-4] 2025-05-31 19:50:08.193322 | orchestrator | 2025-05-31 19:50:08.194925 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 19:50:08.194938 | orchestrator | Saturday 31 May 2025 19:50:08 +0000 (0:00:00.204) 0:00:17.969 ********** 2025-05-31 19:50:08.622713 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_525ad027-7e06-4e85-bfbd-c3ec419229c5) 2025-05-31 19:50:08.624097 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_525ad027-7e06-4e85-bfbd-c3ec419229c5) 2025-05-31 19:50:08.624926 | orchestrator | 2025-05-31 
19:50:08.626658 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 19:50:08.626688 | orchestrator | Saturday 31 May 2025 19:50:08 +0000 (0:00:00.426) 0:00:18.396 ********** 2025-05-31 19:50:09.046164 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_a9241271-625e-4229-94b1-3d99bba363ae) 2025-05-31 19:50:09.051563 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_a9241271-625e-4229-94b1-3d99bba363ae) 2025-05-31 19:50:09.051673 | orchestrator | 2025-05-31 19:50:09.051742 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 19:50:09.052775 | orchestrator | Saturday 31 May 2025 19:50:09 +0000 (0:00:00.423) 0:00:18.819 ********** 2025-05-31 19:50:09.513314 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_1a9ee9a4-914c-40fd-b835-c38474fb60e8) 2025-05-31 19:50:09.514679 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_1a9ee9a4-914c-40fd-b835-c38474fb60e8) 2025-05-31 19:50:09.517389 | orchestrator | 2025-05-31 19:50:09.517701 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 19:50:09.519205 | orchestrator | Saturday 31 May 2025 19:50:09 +0000 (0:00:00.467) 0:00:19.287 ********** 2025-05-31 19:50:09.947498 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_9b14a296-0b0f-456e-ac69-f453c0a27a39) 2025-05-31 19:50:09.949382 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_9b14a296-0b0f-456e-ac69-f453c0a27a39) 2025-05-31 19:50:09.952160 | orchestrator | 2025-05-31 19:50:09.952431 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 19:50:09.952454 | orchestrator | Saturday 31 May 2025 19:50:09 +0000 (0:00:00.434) 0:00:19.721 ********** 2025-05-31 19:50:10.278106 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-31 19:50:10.278844 | orchestrator | 2025-05-31 19:50:10.284450 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 19:50:10.284925 | orchestrator | Saturday 31 May 2025 19:50:10 +0000 (0:00:00.331) 0:00:20.052 ********** 2025-05-31 19:50:10.648613 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-05-31 19:50:10.648822 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-05-31 19:50:10.651230 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-05-31 19:50:10.652205 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-05-31 19:50:10.653926 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-05-31 19:50:10.656242 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-05-31 19:50:10.656280 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-05-31 19:50:10.656333 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-05-31 19:50:10.657780 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-05-31 19:50:10.658774 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-05-31 19:50:10.659595 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-05-31 19:50:10.660114 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-05-31 19:50:10.661053 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-05-31 19:50:10.661747 | orchestrator | 2025-05-31 19:50:10.663123 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 19:50:10.663148 | orchestrator | Saturday 31 May 2025 19:50:10 +0000 (0:00:00.369) 0:00:20.422 ********** 2025-05-31 19:50:10.849295 | orchestrator | skipping: [testbed-node-4] 2025-05-31 19:50:10.851047 | orchestrator | 2025-05-31 19:50:10.852218 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 19:50:10.853341 | orchestrator | Saturday 31 May 2025 19:50:10 +0000 (0:00:00.201) 0:00:20.623 ********** 2025-05-31 19:50:11.397944 | orchestrator | skipping: [testbed-node-4] 2025-05-31 19:50:11.398148 | orchestrator | 2025-05-31 19:50:11.398424 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 19:50:11.398629 | orchestrator | Saturday 31 May 2025 19:50:11 +0000 (0:00:00.546) 0:00:21.170 ********** 2025-05-31 19:50:11.584655 | orchestrator | skipping: [testbed-node-4] 2025-05-31 19:50:11.586659 | orchestrator | 2025-05-31 19:50:11.589484 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 19:50:11.589832 | orchestrator | Saturday 31 May 2025 19:50:11 +0000 (0:00:00.190) 0:00:21.361 ********** 2025-05-31 19:50:11.763662 | orchestrator | skipping: [testbed-node-4] 2025-05-31 19:50:11.764480 | orchestrator | 2025-05-31 19:50:11.767674 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 19:50:11.767730 | orchestrator | Saturday 31 May 2025 19:50:11 +0000 (0:00:00.178) 0:00:21.539 ********** 2025-05-31 19:50:11.934556 | orchestrator | skipping: [testbed-node-4] 2025-05-31 19:50:11.935183 | orchestrator | 2025-05-31 19:50:11.935859 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 19:50:11.936805 | orchestrator | Saturday 31 May 2025 19:50:11 +0000 (0:00:00.171) 0:00:21.711 ********** 2025-05-31 19:50:12.118442 | orchestrator | skipping: [testbed-node-4] 2025-05-31 19:50:12.120161 | orchestrator | 2025-05-31 19:50:12.122997 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 19:50:12.123022 | orchestrator | Saturday 31 May 2025 19:50:12 +0000 (0:00:00.182) 0:00:21.894 ********** 2025-05-31 19:50:12.313601 | orchestrator | skipping: [testbed-node-4] 2025-05-31 19:50:12.316963 | orchestrator | 2025-05-31 19:50:12.317817 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 19:50:12.319048 | orchestrator | Saturday 31 May 2025 19:50:12 +0000 (0:00:00.196) 0:00:22.091 ********** 2025-05-31 19:50:12.492747 | orchestrator | skipping: [testbed-node-4] 2025-05-31 19:50:12.493269 | orchestrator | 2025-05-31 19:50:12.494556 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 19:50:12.495479 | orchestrator | Saturday 31 May 2025 
19:50:12 +0000 (0:00:00.176) 0:00:22.267 ********** 2025-05-31 19:50:13.066728 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-05-31 19:50:13.067658 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-05-31 19:50:13.071052 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-05-31 19:50:13.071881 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-05-31 19:50:13.072649 | orchestrator | 2025-05-31 19:50:13.073301 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 19:50:13.073812 | orchestrator | Saturday 31 May 2025 19:50:13 +0000 (0:00:00.576) 0:00:22.843 ********** 2025-05-31 19:50:13.247860 | orchestrator | skipping: [testbed-node-4] 2025-05-31 19:50:13.247968 | orchestrator | 2025-05-31 19:50:13.250452 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 19:50:13.251223 | orchestrator | Saturday 31 May 2025 19:50:13 +0000 (0:00:00.179) 0:00:23.023 ********** 2025-05-31 19:50:13.422746 | orchestrator | skipping: [testbed-node-4] 2025-05-31 19:50:13.426958 | orchestrator | 2025-05-31 19:50:13.431143 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 19:50:13.432657 | orchestrator | Saturday 31 May 2025 19:50:13 +0000 (0:00:00.175) 0:00:23.198 ********** 2025-05-31 19:50:13.601199 | orchestrator | skipping: [testbed-node-4] 2025-05-31 19:50:13.603705 | orchestrator | 2025-05-31 19:50:13.606366 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 19:50:13.607444 | orchestrator | Saturday 31 May 2025 19:50:13 +0000 (0:00:00.179) 0:00:23.377 ********** 2025-05-31 19:50:13.785090 | orchestrator | skipping: [testbed-node-4] 2025-05-31 19:50:13.785174 | orchestrator | 2025-05-31 19:50:13.787858 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-05-31 19:50:13.788740 | orchestrator | Saturday 31 May 2025 19:50:13 +0000 (0:00:00.181) 0:00:23.559 ********** 2025-05-31 19:50:14.034582 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2025-05-31 19:50:14.038350 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2025-05-31 19:50:14.039596 | orchestrator | 2025-05-31 19:50:14.041279 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-05-31 19:50:14.042143 | orchestrator | Saturday 31 May 2025 19:50:14 +0000 (0:00:00.250) 0:00:23.810 ********** 2025-05-31 19:50:14.163693 | orchestrator | skipping: [testbed-node-4] 2025-05-31 19:50:14.164010 | orchestrator | 2025-05-31 19:50:14.165123 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-05-31 19:50:14.168144 | orchestrator | Saturday 31 May 2025 19:50:14 +0000 (0:00:00.130) 0:00:23.940 ********** 2025-05-31 19:50:14.277275 | orchestrator | skipping: [testbed-node-4] 2025-05-31 19:50:14.281842 | orchestrator | 2025-05-31 19:50:14.282722 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-05-31 19:50:14.283304 | orchestrator | Saturday 31 May 2025 19:50:14 +0000 (0:00:00.113) 0:00:24.053 ********** 2025-05-31 19:50:14.406679 | orchestrator | skipping: [testbed-node-4] 2025-05-31 19:50:14.407831 | orchestrator | 2025-05-31 19:50:14.412205 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-05-31 
19:50:14.413088 | orchestrator | Saturday 31 May 2025 19:50:14 +0000 (0:00:00.128) 0:00:24.182 ********** 2025-05-31 19:50:14.553304 | orchestrator | ok: [testbed-node-4] 2025-05-31 19:50:14.554249 | orchestrator | 2025-05-31 19:50:14.559657 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-05-31 19:50:14.560631 | orchestrator | Saturday 31 May 2025 19:50:14 +0000 (0:00:00.146) 0:00:24.328 ********** 2025-05-31 19:50:14.738648 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7717ad38-094f-5aa6-8c39-f28029f817d5'}}) 2025-05-31 19:50:14.741022 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6fa9e552-f12f-547e-b45f-d034b93383af'}}) 2025-05-31 19:50:14.746586 | orchestrator | 2025-05-31 19:50:14.746997 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-05-31 19:50:14.748184 | orchestrator | Saturday 31 May 2025 19:50:14 +0000 (0:00:00.185) 0:00:24.514 ********** 2025-05-31 19:50:14.899015 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7717ad38-094f-5aa6-8c39-f28029f817d5'}})  2025-05-31 19:50:14.899208 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6fa9e552-f12f-547e-b45f-d034b93383af'}})  2025-05-31 19:50:14.900744 | orchestrator | skipping: [testbed-node-4] 2025-05-31 19:50:14.904713 | orchestrator | 2025-05-31 19:50:14.905562 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-05-31 19:50:14.906392 | orchestrator | Saturday 31 May 2025 19:50:14 +0000 (0:00:00.155) 0:00:24.669 ********** 2025-05-31 19:50:15.058998 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7717ad38-094f-5aa6-8c39-f28029f817d5'}})  2025-05-31 19:50:15.059730 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6fa9e552-f12f-547e-b45f-d034b93383af'}})  2025-05-31 19:50:15.065417 | orchestrator | skipping: [testbed-node-4] 2025-05-31 19:50:15.065501 | orchestrator | 2025-05-31 19:50:15.066555 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-05-31 19:50:15.067499 | orchestrator | Saturday 31 May 2025 19:50:15 +0000 (0:00:00.164) 0:00:24.833 ********** 2025-05-31 19:50:15.207502 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7717ad38-094f-5aa6-8c39-f28029f817d5'}})  2025-05-31 19:50:15.209411 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6fa9e552-f12f-547e-b45f-d034b93383af'}})  2025-05-31 19:50:15.211589 | orchestrator | skipping: [testbed-node-4] 2025-05-31 19:50:15.212120 | orchestrator | 2025-05-31 19:50:15.213572 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-05-31 19:50:15.215120 | orchestrator | Saturday 31 May 2025 19:50:15 +0000 (0:00:00.148) 0:00:24.982 ********** 2025-05-31 19:50:15.341176 | orchestrator | ok: [testbed-node-4] 2025-05-31 19:50:15.342502 | orchestrator | 2025-05-31 19:50:15.344116 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-05-31 19:50:15.346307 | orchestrator | Saturday 31 May 2025 19:50:15 +0000 (0:00:00.133) 0:00:25.116 ********** 2025-05-31 19:50:15.480054 | orchestrator | ok: [testbed-node-4] 2025-05-31 19:50:15.483455 
| orchestrator | 2025-05-31 19:50:15.484709 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-05-31 19:50:15.487719 | orchestrator | Saturday 31 May 2025 19:50:15 +0000 (0:00:00.135) 0:00:25.252 ********** 2025-05-31 19:50:15.614148 | orchestrator | skipping: [testbed-node-4] 2025-05-31 19:50:15.614672 | orchestrator | 2025-05-31 19:50:15.616877 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-05-31 19:50:15.618115 | orchestrator | Saturday 31 May 2025 19:50:15 +0000 (0:00:00.137) 0:00:25.390 ********** 2025-05-31 19:50:15.943067 | orchestrator | skipping: [testbed-node-4] 2025-05-31 19:50:15.943822 | orchestrator | 2025-05-31 19:50:15.944594 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-05-31 19:50:15.945688 | orchestrator | Saturday 31 May 2025 19:50:15 +0000 (0:00:00.329) 0:00:25.719 ********** 2025-05-31 19:50:16.076097 | orchestrator | skipping: [testbed-node-4] 2025-05-31 19:50:16.076183 | orchestrator | 2025-05-31 19:50:16.076192 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-05-31 19:50:16.076289 | orchestrator | Saturday 31 May 2025 19:50:16 +0000 (0:00:00.129) 0:00:25.849 ********** 2025-05-31 19:50:16.216722 | orchestrator | ok: [testbed-node-4] => { 2025-05-31 19:50:16.217379 | orchestrator |  "ceph_osd_devices": { 2025-05-31 19:50:16.218855 | orchestrator |  "sdb": { 2025-05-31 19:50:16.220221 | orchestrator |  "osd_lvm_uuid": "7717ad38-094f-5aa6-8c39-f28029f817d5" 2025-05-31 19:50:16.221418 | orchestrator |  }, 2025-05-31 19:50:16.222779 | orchestrator |  "sdc": { 2025-05-31 19:50:16.223235 | orchestrator |  "osd_lvm_uuid": "6fa9e552-f12f-547e-b45f-d034b93383af" 2025-05-31 19:50:16.224603 | orchestrator |  } 2025-05-31 19:50:16.225384 | orchestrator |  } 2025-05-31 19:50:16.226653 | orchestrator | } 2025-05-31 19:50:16.229174 | orchestrator | 2025-05-31 19:50:16.229197 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-05-31 19:50:16.229209 | orchestrator | Saturday 31 May 2025 19:50:16 +0000 (0:00:00.143) 0:00:25.992 ********** 2025-05-31 19:50:16.341960 | orchestrator | skipping: [testbed-node-4] 2025-05-31 19:50:16.342105 | orchestrator | 2025-05-31 19:50:16.342120 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-05-31 19:50:16.342999 | orchestrator | Saturday 31 May 2025 19:50:16 +0000 (0:00:00.123) 0:00:26.115 ********** 2025-05-31 19:50:16.467136 | orchestrator | skipping: [testbed-node-4] 2025-05-31 19:50:16.470913 | orchestrator | 2025-05-31 19:50:16.471090 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-05-31 19:50:16.473421 | orchestrator | Saturday 31 May 2025 19:50:16 +0000 (0:00:00.118) 0:00:26.234 ********** 2025-05-31 19:50:16.604469 | orchestrator | skipping: [testbed-node-4] 2025-05-31 19:50:16.604648 | orchestrator | 2025-05-31 19:50:16.606266 | orchestrator | TASK [Print configuration data] ************************************************ 2025-05-31 19:50:16.611783 | orchestrator | Saturday 31 May 2025 19:50:16 +0000 (0:00:00.146) 0:00:26.380 ********** 2025-05-31 19:50:16.800885 | orchestrator | changed: [testbed-node-4] => { 2025-05-31 19:50:16.803618 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-05-31 19:50:16.806968 | orchestrator |  "ceph_osd_devices": { 2025-05-31 
19:50:16.808035 | orchestrator |  "sdb": { 2025-05-31 19:50:16.809038 | orchestrator |  "osd_lvm_uuid": "7717ad38-094f-5aa6-8c39-f28029f817d5" 2025-05-31 19:50:16.809860 | orchestrator |  }, 2025-05-31 19:50:16.810854 | orchestrator |  "sdc": { 2025-05-31 19:50:16.811494 | orchestrator |  "osd_lvm_uuid": "6fa9e552-f12f-547e-b45f-d034b93383af" 2025-05-31 19:50:16.813216 | orchestrator |  } 2025-05-31 19:50:16.813273 | orchestrator |  }, 2025-05-31 19:50:16.814408 | orchestrator |  "lvm_volumes": [ 2025-05-31 19:50:16.814697 | orchestrator |  { 2025-05-31 19:50:16.815591 | orchestrator |  "data": "osd-block-7717ad38-094f-5aa6-8c39-f28029f817d5", 2025-05-31 19:50:16.816745 | orchestrator |  "data_vg": "ceph-7717ad38-094f-5aa6-8c39-f28029f817d5" 2025-05-31 19:50:16.817225 | orchestrator |  }, 2025-05-31 19:50:16.817890 | orchestrator |  { 2025-05-31 19:50:16.818468 | orchestrator |  "data": "osd-block-6fa9e552-f12f-547e-b45f-d034b93383af", 2025-05-31 19:50:16.818913 | orchestrator |  "data_vg": "ceph-6fa9e552-f12f-547e-b45f-d034b93383af" 2025-05-31 19:50:16.819593 | orchestrator |  } 2025-05-31 19:50:16.820106 | orchestrator |  ] 2025-05-31 19:50:16.820563 | orchestrator |  } 2025-05-31 19:50:16.821238 | orchestrator | } 2025-05-31 19:50:16.821719 | orchestrator | 2025-05-31 19:50:16.822153 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-05-31 19:50:16.822856 | orchestrator | Saturday 31 May 2025 19:50:16 +0000 (0:00:00.195) 0:00:26.576 ********** 2025-05-31 19:50:17.847301 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-05-31 19:50:17.849633 | orchestrator | 2025-05-31 19:50:17.849710 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-05-31 19:50:17.849726 | orchestrator | 2025-05-31 19:50:17.850064 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-05-31 19:50:17.850828 | orchestrator | Saturday 31 May 2025 19:50:17 +0000 (0:00:01.034) 0:00:27.610 ********** 2025-05-31 19:50:18.298629 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-05-31 19:50:18.299151 | orchestrator | 2025-05-31 19:50:18.299643 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-05-31 19:50:18.300615 | orchestrator | Saturday 31 May 2025 19:50:18 +0000 (0:00:00.462) 0:00:28.073 ********** 2025-05-31 19:50:18.948731 | orchestrator | ok: [testbed-node-5] 2025-05-31 19:50:18.949221 | orchestrator | 2025-05-31 19:50:18.952661 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 19:50:18.954137 | orchestrator | Saturday 31 May 2025 19:50:18 +0000 (0:00:00.648) 0:00:28.722 ********** 2025-05-31 19:50:19.323452 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-05-31 19:50:19.324872 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-05-31 19:50:19.325228 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-05-31 19:50:19.326975 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-05-31 19:50:19.328713 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-05-31 19:50:19.329949 | orchestrator | included: /ansible/tasks/_add-device-links.yml for 
testbed-node-5 => (item=loop5) 2025-05-31 19:50:19.331274 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-05-31 19:50:19.332835 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-05-31 19:50:19.333911 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-05-31 19:50:19.334945 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-05-31 19:50:19.336184 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-05-31 19:50:19.337239 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-05-31 19:50:19.337985 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-05-31 19:50:19.339130 | orchestrator | 2025-05-31 19:50:19.339771 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 19:50:19.340639 | orchestrator | Saturday 31 May 2025 19:50:19 +0000 (0:00:00.372) 0:00:29.094 ********** 2025-05-31 19:50:19.541776 | orchestrator | skipping: [testbed-node-5] 2025-05-31 19:50:19.545236 | orchestrator | 2025-05-31 19:50:19.546366 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 19:50:19.547853 | orchestrator | Saturday 31 May 2025 19:50:19 +0000 (0:00:00.220) 0:00:29.315 ********** 2025-05-31 19:50:19.760090 | orchestrator | skipping: [testbed-node-5] 2025-05-31 19:50:19.760743 | orchestrator | 2025-05-31 19:50:19.762171 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 19:50:19.765122 | orchestrator | Saturday 31 May 2025 19:50:19 +0000 (0:00:00.219) 0:00:29.534 ********** 2025-05-31 19:50:19.961903 | orchestrator | skipping: [testbed-node-5] 2025-05-31 19:50:19.963276 | orchestrator | 2025-05-31 19:50:19.964869 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 19:50:19.965980 | orchestrator | Saturday 31 May 2025 19:50:19 +0000 (0:00:00.202) 0:00:29.736 ********** 2025-05-31 19:50:20.173367 | orchestrator | skipping: [testbed-node-5] 2025-05-31 19:50:20.173496 | orchestrator | 2025-05-31 19:50:20.174799 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 19:50:20.177730 | orchestrator | Saturday 31 May 2025 19:50:20 +0000 (0:00:00.210) 0:00:29.947 ********** 2025-05-31 19:50:20.389079 | orchestrator | skipping: [testbed-node-5] 2025-05-31 19:50:20.390205 | orchestrator | 2025-05-31 19:50:20.391420 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 19:50:20.395824 | orchestrator | Saturday 31 May 2025 19:50:20 +0000 (0:00:00.216) 0:00:30.164 ********** 2025-05-31 19:50:20.618441 | orchestrator | skipping: [testbed-node-5] 2025-05-31 19:50:20.622489 | orchestrator | 2025-05-31 19:50:20.622722 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 19:50:20.623781 | orchestrator | Saturday 31 May 2025 19:50:20 +0000 (0:00:00.228) 0:00:30.393 ********** 2025-05-31 19:50:20.812660 | orchestrator | skipping: [testbed-node-5] 2025-05-31 19:50:20.813456 | orchestrator | 2025-05-31 19:50:20.815304 | orchestrator | TASK [Add known links to the list of available block devices] 
****************** 2025-05-31 19:50:20.816273 | orchestrator | Saturday 31 May 2025 19:50:20 +0000 (0:00:00.195) 0:00:30.588 ********** 2025-05-31 19:50:21.023721 | orchestrator | skipping: [testbed-node-5] 2025-05-31 19:50:21.023798 | orchestrator | 2025-05-31 19:50:21.023804 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 19:50:21.024308 | orchestrator | Saturday 31 May 2025 19:50:21 +0000 (0:00:00.210) 0:00:30.799 ********** 2025-05-31 19:50:21.680304 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_fbeeb0f2-d22c-4c2b-ae21-9c150f637ac0) 2025-05-31 19:50:21.680438 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_fbeeb0f2-d22c-4c2b-ae21-9c150f637ac0) 2025-05-31 19:50:21.680612 | orchestrator | 2025-05-31 19:50:21.684197 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 19:50:21.684243 | orchestrator | Saturday 31 May 2025 19:50:21 +0000 (0:00:00.652) 0:00:31.452 ********** 2025-05-31 19:50:22.469306 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_6d52f885-97ca-45c7-bd6a-7862e27ed465) 2025-05-31 19:50:22.469942 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_6d52f885-97ca-45c7-bd6a-7862e27ed465) 2025-05-31 19:50:22.471601 | orchestrator | 2025-05-31 19:50:22.472309 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 19:50:22.474599 | orchestrator | Saturday 31 May 2025 19:50:22 +0000 (0:00:00.793) 0:00:32.245 ********** 2025-05-31 19:50:22.959597 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_727d26bd-0ead-422c-920c-32fac6429b39) 2025-05-31 19:50:22.960690 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_727d26bd-0ead-422c-920c-32fac6429b39) 2025-05-31 19:50:22.961005 | orchestrator | 2025-05-31 19:50:22.962427 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 19:50:22.962875 | orchestrator | Saturday 31 May 2025 19:50:22 +0000 (0:00:00.488) 0:00:32.734 ********** 2025-05-31 19:50:23.423064 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_d4f6392d-f8e1-4809-8c10-779f08f2c642) 2025-05-31 19:50:23.423172 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_d4f6392d-f8e1-4809-8c10-779f08f2c642) 2025-05-31 19:50:23.423249 | orchestrator | 2025-05-31 19:50:23.424712 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 19:50:23.425753 | orchestrator | Saturday 31 May 2025 19:50:23 +0000 (0:00:00.461) 0:00:33.196 ********** 2025-05-31 19:50:23.773135 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-31 19:50:23.773312 | orchestrator | 2025-05-31 19:50:23.773886 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 19:50:23.774894 | orchestrator | Saturday 31 May 2025 19:50:23 +0000 (0:00:00.351) 0:00:33.547 ********** 2025-05-31 19:50:24.164445 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-05-31 19:50:24.167329 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-05-31 19:50:24.169461 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-05-31 19:50:24.169492 | orchestrator 
| included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-05-31 19:50:24.169743 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-05-31 19:50:24.170506 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-05-31 19:50:24.170563 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-05-31 19:50:24.170754 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-05-31 19:50:24.170774 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-05-31 19:50:24.171021 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-05-31 19:50:24.171249 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-05-31 19:50:24.172436 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-05-31 19:50:24.172838 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-05-31 19:50:24.173131 | orchestrator | 2025-05-31 19:50:24.173608 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 19:50:24.173979 | orchestrator | Saturday 31 May 2025 19:50:24 +0000 (0:00:00.392) 0:00:33.939 ********** 2025-05-31 19:50:24.372885 | orchestrator | skipping: [testbed-node-5] 2025-05-31 19:50:24.373816 | orchestrator | 2025-05-31 19:50:24.375382 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 19:50:24.377382 | orchestrator | Saturday 31 May 2025 19:50:24 +0000 (0:00:00.209) 0:00:34.149 ********** 2025-05-31 19:50:24.580567 | orchestrator | skipping: [testbed-node-5] 2025-05-31 19:50:24.581288 | orchestrator | 2025-05-31 19:50:24.582556 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 19:50:24.583794 | orchestrator | Saturday 31 May 2025 19:50:24 +0000 (0:00:00.206) 0:00:34.355 ********** 2025-05-31 19:50:24.795687 | orchestrator | skipping: [testbed-node-5] 2025-05-31 19:50:24.796703 | orchestrator | 2025-05-31 19:50:24.796942 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 19:50:24.797790 | orchestrator | Saturday 31 May 2025 19:50:24 +0000 (0:00:00.215) 0:00:34.571 ********** 2025-05-31 19:50:24.999651 | orchestrator | skipping: [testbed-node-5] 2025-05-31 19:50:24.999751 | orchestrator | 2025-05-31 19:50:24.999761 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 19:50:24.999772 | orchestrator | Saturday 31 May 2025 19:50:24 +0000 (0:00:00.202) 0:00:34.774 ********** 2025-05-31 19:50:25.197320 | orchestrator | skipping: [testbed-node-5] 2025-05-31 19:50:25.198095 | orchestrator | 2025-05-31 19:50:25.199950 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 19:50:25.202957 | orchestrator | Saturday 31 May 2025 19:50:25 +0000 (0:00:00.198) 0:00:34.972 ********** 2025-05-31 19:50:25.936871 | orchestrator | skipping: [testbed-node-5] 2025-05-31 19:50:25.936981 | orchestrator | 2025-05-31 19:50:25.937683 | orchestrator | TASK [Add known partitions to the list of available block devices] 
************* 2025-05-31 19:50:25.938731 | orchestrator | Saturday 31 May 2025 19:50:25 +0000 (0:00:00.736) 0:00:35.709 ********** 2025-05-31 19:50:26.133504 | orchestrator | skipping: [testbed-node-5] 2025-05-31 19:50:26.133635 | orchestrator | 2025-05-31 19:50:26.134320 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 19:50:26.135305 | orchestrator | Saturday 31 May 2025 19:50:26 +0000 (0:00:00.200) 0:00:35.909 ********** 2025-05-31 19:50:26.332678 | orchestrator | skipping: [testbed-node-5] 2025-05-31 19:50:26.332960 | orchestrator | 2025-05-31 19:50:26.334043 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 19:50:26.335722 | orchestrator | Saturday 31 May 2025 19:50:26 +0000 (0:00:00.198) 0:00:36.108 ********** 2025-05-31 19:50:26.986191 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-05-31 19:50:26.989257 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-05-31 19:50:26.990331 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-05-31 19:50:26.991188 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-05-31 19:50:26.991266 | orchestrator | 2025-05-31 19:50:26.992694 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 19:50:26.992902 | orchestrator | Saturday 31 May 2025 19:50:26 +0000 (0:00:00.651) 0:00:36.760 ********** 2025-05-31 19:50:27.199936 | orchestrator | skipping: [testbed-node-5] 2025-05-31 19:50:27.201217 | orchestrator | 2025-05-31 19:50:27.202846 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 19:50:27.203669 | orchestrator | Saturday 31 May 2025 19:50:27 +0000 (0:00:00.214) 0:00:36.974 ********** 2025-05-31 19:50:27.394413 | orchestrator | skipping: [testbed-node-5] 2025-05-31 19:50:27.395822 | orchestrator | 2025-05-31 19:50:27.396112 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 19:50:27.396780 | orchestrator | Saturday 31 May 2025 19:50:27 +0000 (0:00:00.196) 0:00:37.170 ********** 2025-05-31 19:50:27.580810 | orchestrator | skipping: [testbed-node-5] 2025-05-31 19:50:27.581416 | orchestrator | 2025-05-31 19:50:27.582111 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 19:50:27.582967 | orchestrator | Saturday 31 May 2025 19:50:27 +0000 (0:00:00.186) 0:00:37.357 ********** 2025-05-31 19:50:27.780696 | orchestrator | skipping: [testbed-node-5] 2025-05-31 19:50:27.781248 | orchestrator | 2025-05-31 19:50:27.781422 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-05-31 19:50:27.782141 | orchestrator | Saturday 31 May 2025 19:50:27 +0000 (0:00:00.199) 0:00:37.556 ********** 2025-05-31 19:50:27.949961 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2025-05-31 19:50:27.950613 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2025-05-31 19:50:27.951903 | orchestrator | 2025-05-31 19:50:27.952896 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-05-31 19:50:27.953915 | orchestrator | Saturday 31 May 2025 19:50:27 +0000 (0:00:00.169) 0:00:37.726 ********** 2025-05-31 19:50:28.063965 | orchestrator | skipping: [testbed-node-5] 2025-05-31 19:50:28.064963 | orchestrator | 2025-05-31 19:50:28.066795 | orchestrator | TASK [Generate DB 
VG names] **************************************************** 2025-05-31 19:50:28.067177 | orchestrator | Saturday 31 May 2025 19:50:28 +0000 (0:00:00.113) 0:00:37.839 ********** 2025-05-31 19:50:28.199653 | orchestrator | skipping: [testbed-node-5] 2025-05-31 19:50:28.200684 | orchestrator | 2025-05-31 19:50:28.202213 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-05-31 19:50:28.203302 | orchestrator | Saturday 31 May 2025 19:50:28 +0000 (0:00:00.135) 0:00:37.974 ********** 2025-05-31 19:50:28.330136 | orchestrator | skipping: [testbed-node-5] 2025-05-31 19:50:28.331135 | orchestrator | 2025-05-31 19:50:28.331779 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-05-31 19:50:28.332489 | orchestrator | Saturday 31 May 2025 19:50:28 +0000 (0:00:00.129) 0:00:38.104 ********** 2025-05-31 19:50:28.648186 | orchestrator | ok: [testbed-node-5] 2025-05-31 19:50:28.651679 | orchestrator | 2025-05-31 19:50:28.651740 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-05-31 19:50:28.651791 | orchestrator | Saturday 31 May 2025 19:50:28 +0000 (0:00:00.318) 0:00:38.423 ********** 2025-05-31 19:50:28.810990 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'edfa5e9a-3f1a-54c1-83f4-345bb781a14b'}}) 2025-05-31 19:50:28.812779 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a23536e0-7351-5f09-a3c0-98b1bc7f8fff'}}) 2025-05-31 19:50:28.814615 | orchestrator | 2025-05-31 19:50:28.815482 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-05-31 19:50:28.815949 | orchestrator | Saturday 31 May 2025 19:50:28 +0000 (0:00:00.163) 0:00:38.586 ********** 2025-05-31 19:50:28.954273 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'edfa5e9a-3f1a-54c1-83f4-345bb781a14b'}})  2025-05-31 19:50:28.955672 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a23536e0-7351-5f09-a3c0-98b1bc7f8fff'}})  2025-05-31 19:50:28.960109 | orchestrator | skipping: [testbed-node-5] 2025-05-31 19:50:28.960138 | orchestrator | 2025-05-31 19:50:28.960150 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-05-31 19:50:28.960567 | orchestrator | Saturday 31 May 2025 19:50:28 +0000 (0:00:00.143) 0:00:38.729 ********** 2025-05-31 19:50:29.114629 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'edfa5e9a-3f1a-54c1-83f4-345bb781a14b'}})  2025-05-31 19:50:29.119037 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a23536e0-7351-5f09-a3c0-98b1bc7f8fff'}})  2025-05-31 19:50:29.119192 | orchestrator | skipping: [testbed-node-5] 2025-05-31 19:50:29.119224 | orchestrator | 2025-05-31 19:50:29.119454 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-05-31 19:50:29.121447 | orchestrator | Saturday 31 May 2025 19:50:29 +0000 (0:00:00.160) 0:00:38.890 ********** 2025-05-31 19:50:29.260305 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'edfa5e9a-3f1a-54c1-83f4-345bb781a14b'}})  2025-05-31 19:50:29.260638 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a23536e0-7351-5f09-a3c0-98b1bc7f8fff'}})  2025-05-31 
19:50:29.262981 | orchestrator | skipping: [testbed-node-5] 2025-05-31 19:50:29.266722 | orchestrator | 2025-05-31 19:50:29.269744 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-05-31 19:50:29.270579 | orchestrator | Saturday 31 May 2025 19:50:29 +0000 (0:00:00.146) 0:00:39.036 ********** 2025-05-31 19:50:29.416632 | orchestrator | ok: [testbed-node-5] 2025-05-31 19:50:29.417479 | orchestrator | 2025-05-31 19:50:29.420991 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-05-31 19:50:29.421966 | orchestrator | Saturday 31 May 2025 19:50:29 +0000 (0:00:00.151) 0:00:39.188 ********** 2025-05-31 19:50:29.555740 | orchestrator | ok: [testbed-node-5] 2025-05-31 19:50:29.556105 | orchestrator | 2025-05-31 19:50:29.557653 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-05-31 19:50:29.558092 | orchestrator | Saturday 31 May 2025 19:50:29 +0000 (0:00:00.143) 0:00:39.332 ********** 2025-05-31 19:50:29.699680 | orchestrator | skipping: [testbed-node-5] 2025-05-31 19:50:29.700279 | orchestrator | 2025-05-31 19:50:29.703803 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-05-31 19:50:29.704194 | orchestrator | Saturday 31 May 2025 19:50:29 +0000 (0:00:00.139) 0:00:39.472 ********** 2025-05-31 19:50:29.820309 | orchestrator | skipping: [testbed-node-5] 2025-05-31 19:50:29.821636 | orchestrator | 2025-05-31 19:50:29.823887 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-05-31 19:50:29.827315 | orchestrator | Saturday 31 May 2025 19:50:29 +0000 (0:00:00.124) 0:00:39.596 ********** 2025-05-31 19:50:29.962639 | orchestrator | skipping: [testbed-node-5] 2025-05-31 19:50:29.963949 | orchestrator | 2025-05-31 19:50:29.965565 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-05-31 19:50:29.966633 | orchestrator | Saturday 31 May 2025 19:50:29 +0000 (0:00:00.139) 0:00:39.736 ********** 2025-05-31 19:50:30.099421 | orchestrator | ok: [testbed-node-5] => { 2025-05-31 19:50:30.100954 | orchestrator |  "ceph_osd_devices": { 2025-05-31 19:50:30.102775 | orchestrator |  "sdb": { 2025-05-31 19:50:30.104446 | orchestrator |  "osd_lvm_uuid": "edfa5e9a-3f1a-54c1-83f4-345bb781a14b" 2025-05-31 19:50:30.106098 | orchestrator |  }, 2025-05-31 19:50:30.107520 | orchestrator |  "sdc": { 2025-05-31 19:50:30.108093 | orchestrator |  "osd_lvm_uuid": "a23536e0-7351-5f09-a3c0-98b1bc7f8fff" 2025-05-31 19:50:30.109245 | orchestrator |  } 2025-05-31 19:50:30.110629 | orchestrator |  } 2025-05-31 19:50:30.111887 | orchestrator | } 2025-05-31 19:50:30.113085 | orchestrator | 2025-05-31 19:50:30.113895 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-05-31 19:50:30.114461 | orchestrator | Saturday 31 May 2025 19:50:30 +0000 (0:00:00.137) 0:00:39.874 ********** 2025-05-31 19:50:30.236418 | orchestrator | skipping: [testbed-node-5] 2025-05-31 19:50:30.239227 | orchestrator | 2025-05-31 19:50:30.239870 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-05-31 19:50:30.241792 | orchestrator | Saturday 31 May 2025 19:50:30 +0000 (0:00:00.136) 0:00:40.011 ********** 2025-05-31 19:50:30.555854 | orchestrator | skipping: [testbed-node-5] 2025-05-31 19:50:30.558316 | orchestrator | 2025-05-31 19:50:30.559626 | orchestrator | 
TASK [Print shared DB/WAL devices] ********************************************* 2025-05-31 19:50:30.561128 | orchestrator | Saturday 31 May 2025 19:50:30 +0000 (0:00:00.320) 0:00:40.332 ********** 2025-05-31 19:50:30.691612 | orchestrator | skipping: [testbed-node-5] 2025-05-31 19:50:30.694242 | orchestrator | 2025-05-31 19:50:30.695452 | orchestrator | TASK [Print configuration data] ************************************************ 2025-05-31 19:50:30.696719 | orchestrator | Saturday 31 May 2025 19:50:30 +0000 (0:00:00.134) 0:00:40.466 ********** 2025-05-31 19:50:30.897739 | orchestrator | changed: [testbed-node-5] => { 2025-05-31 19:50:30.899675 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-05-31 19:50:30.902211 | orchestrator |  "ceph_osd_devices": { 2025-05-31 19:50:30.904287 | orchestrator |  "sdb": { 2025-05-31 19:50:30.905648 | orchestrator |  "osd_lvm_uuid": "edfa5e9a-3f1a-54c1-83f4-345bb781a14b" 2025-05-31 19:50:30.907264 | orchestrator |  }, 2025-05-31 19:50:30.908949 | orchestrator |  "sdc": { 2025-05-31 19:50:30.910228 | orchestrator |  "osd_lvm_uuid": "a23536e0-7351-5f09-a3c0-98b1bc7f8fff" 2025-05-31 19:50:30.911484 | orchestrator |  } 2025-05-31 19:50:30.912671 | orchestrator |  }, 2025-05-31 19:50:30.914102 | orchestrator |  "lvm_volumes": [ 2025-05-31 19:50:30.914742 | orchestrator |  { 2025-05-31 19:50:30.916124 | orchestrator |  "data": "osd-block-edfa5e9a-3f1a-54c1-83f4-345bb781a14b", 2025-05-31 19:50:30.917120 | orchestrator |  "data_vg": "ceph-edfa5e9a-3f1a-54c1-83f4-345bb781a14b" 2025-05-31 19:50:30.917905 | orchestrator |  }, 2025-05-31 19:50:30.919218 | orchestrator |  { 2025-05-31 19:50:30.921293 | orchestrator |  "data": "osd-block-a23536e0-7351-5f09-a3c0-98b1bc7f8fff", 2025-05-31 19:50:30.921725 | orchestrator |  "data_vg": "ceph-a23536e0-7351-5f09-a3c0-98b1bc7f8fff" 2025-05-31 19:50:30.922640 | orchestrator |  } 2025-05-31 19:50:30.924633 | orchestrator |  ] 2025-05-31 19:50:30.926093 | orchestrator |  } 2025-05-31 19:50:30.926833 | orchestrator | } 2025-05-31 19:50:30.927606 | orchestrator | 2025-05-31 19:50:30.928259 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-05-31 19:50:30.928993 | orchestrator | Saturday 31 May 2025 19:50:30 +0000 (0:00:00.206) 0:00:40.672 ********** 2025-05-31 19:50:31.805842 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-05-31 19:50:31.807518 | orchestrator | 2025-05-31 19:50:31.808012 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-31 19:50:31.808980 | orchestrator | 2025-05-31 19:50:31 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-31 19:50:31.809001 | orchestrator | 2025-05-31 19:50:31 | INFO  | Please wait and do not abort execution. 
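
The "Write configuration file" handler above persists the structure shown by "Print configuration data" onto testbed-manager. The file itself is never echoed to the log; as a minimal sketch, assuming the handler simply dumps _ceph_configure_lvm_config_data as host vars for testbed-node-5 (the exact path and file layout are not visible in this log), it would look roughly like:

    # Sketch only -- contents inferred from the 'Print configuration data'
    # output above; the real file name/location is not shown in this log.
    ceph_osd_devices:
      sdb:
        osd_lvm_uuid: edfa5e9a-3f1a-54c1-83f4-345bb781a14b
      sdc:
        osd_lvm_uuid: a23536e0-7351-5f09-a3c0-98b1bc7f8fff
    lvm_volumes:
      - data: osd-block-edfa5e9a-3f1a-54c1-83f4-345bb781a14b
        data_vg: ceph-edfa5e9a-3f1a-54c1-83f4-345bb781a14b
      - data: osd-block-a23536e0-7351-5f09-a3c0-98b1bc7f8fff
        data_vg: ceph-a23536e0-7351-5f09-a3c0-98b1bc7f8fff

Note how lvm_volumes is derived mechanically from ceph_osd_devices: each osd_lvm_uuid becomes an osd-block-<uuid> LV inside a ceph-<uuid> VG, which is the naming scheme the later ceph-create-lvm-devices play relies on.
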
2025-05-31 19:50:31.810302 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-05-31 19:50:31.811422 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-05-31 19:50:31.811466 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-05-31 19:50:31.813740 | orchestrator |
2025-05-31 19:50:31.813774 | orchestrator |
2025-05-31 19:50:31.813786 | orchestrator |
2025-05-31 19:50:31.813854 | orchestrator | TASKS RECAP ********************************************************************
2025-05-31 19:50:31.816252 | orchestrator | Saturday 31 May 2025 19:50:31 +0000 (0:00:00.907) 0:00:41.580 **********
2025-05-31 19:50:31.816282 | orchestrator | ===============================================================================
2025-05-31 19:50:31.816293 | orchestrator | Write configuration file ------------------------------------------------ 4.30s
2025-05-31 19:50:31.816712 | orchestrator | Add known links to the list of available block devices ------------------ 1.21s
2025-05-31 19:50:31.818163 | orchestrator | Get initial list of available block devices ----------------------------- 1.13s
2025-05-31 19:50:31.818613 | orchestrator | Add known partitions to the list of available block devices ------------- 1.11s
2025-05-31 19:50:31.820217 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.07s
2025-05-31 19:50:31.820933 | orchestrator | Add known partitions to the list of available block devices ------------- 1.02s
2025-05-31 19:50:31.821050 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.91s
2025-05-31 19:50:31.821302 | orchestrator | Add known links to the list of available block devices ------------------ 0.79s
2025-05-31 19:50:31.822657 | orchestrator | Add known partitions to the list of available block devices ------------- 0.74s
2025-05-31 19:50:31.825182 | orchestrator | Add known links to the list of available block devices ------------------ 0.65s
2025-05-31 19:50:31.825216 | orchestrator | Add known partitions to the list of available block devices ------------- 0.65s
2025-05-31 19:50:31.825228 | orchestrator | Add known links to the list of available block devices ------------------ 0.63s
2025-05-31 19:50:31.825260 | orchestrator | Print configuration data ------------------------------------------------ 0.62s
2025-05-31 19:50:31.825334 | orchestrator | Set WAL devices config data --------------------------------------------- 0.60s
2025-05-31 19:50:31.825697 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.60s
2025-05-31 19:50:31.827217 | orchestrator | Define lvm_volumes structures ------------------------------------------- 0.60s
2025-05-31 19:50:31.827761 | orchestrator | Add known partitions to the list of available block devices ------------- 0.58s
2025-05-31 19:50:31.827910 | orchestrator | Add known links to the list of available block devices ------------------ 0.58s
2025-05-31 19:50:31.829240 | orchestrator | Add known links to the list of available block devices ------------------ 0.57s
2025-05-31 19:50:31.829592 | orchestrator | Print DB devices -------------------------------------------------------- 0.56s
2025-05-31 19:50:44.183774 | orchestrator | Registering Redlock._acquired_script
2025-05-31 19:50:44.183889 | orchestrator | Registering Redlock._extend_script
2025-05-31
19:50:44.183905 | orchestrator | Registering Redlock._release_script 2025-05-31 19:50:44.235204 | orchestrator | 2025-05-31 19:50:44 | INFO  | Task f7a13b43-3933-4e81-b1cd-c66669ec214c (sync inventory) is running in background. Output coming soon. 2025-05-31 20:50:46.883259 | orchestrator | 2025-05-31 20:50:46 | INFO  | Task f4e21570-e7d8-4e53-9b3a-caef2ea67008 (ceph-create-lvm-devices) was prepared for execution. 2025-05-31 20:50:46.884802 | orchestrator | 2025-05-31 20:50:46 | INFO  | It takes a moment until task f4e21570-e7d8-4e53-9b3a-caef2ea67008 (ceph-create-lvm-devices) has been started and output is visible here. 2025-05-31 20:50:51.019222 | orchestrator | 2025-05-31 20:50:51.021601 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-05-31 20:50:51.021660 | orchestrator | 2025-05-31 20:50:51.021862 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-05-31 20:50:51.023367 | orchestrator | Saturday 31 May 2025 20:50:51 +0000 (0:00:00.324) 0:00:00.324 ********** 2025-05-31 20:50:51.257689 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-05-31 20:50:51.257797 | orchestrator | 2025-05-31 20:50:51.257827 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-05-31 20:50:51.258688 | orchestrator | Saturday 31 May 2025 20:50:51 +0000 (0:00:00.240) 0:00:00.565 ********** 2025-05-31 20:50:51.472097 | orchestrator | ok: [testbed-node-3] 2025-05-31 20:50:51.472466 | orchestrator | 2025-05-31 20:50:51.474167 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 20:50:51.475215 | orchestrator | Saturday 31 May 2025 20:50:51 +0000 (0:00:00.214) 0:00:00.780 ********** 2025-05-31 20:50:51.854286 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-05-31 20:50:51.854389 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-05-31 20:50:51.854773 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-05-31 20:50:51.855706 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-05-31 20:50:51.858779 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-05-31 20:50:51.858831 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-05-31 20:50:51.859133 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-05-31 20:50:51.860234 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-05-31 20:50:51.860969 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-05-31 20:50:51.861697 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-05-31 20:50:51.862268 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-05-31 20:50:51.862875 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-05-31 20:50:51.863699 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-05-31 20:50:51.864157 | orchestrator | 2025-05-31 20:50:51.864828 | orchestrator | TASK [Add known 
links to the list of available block devices] ****************** 2025-05-31 20:50:51.865934 | orchestrator | Saturday 31 May 2025 20:50:51 +0000 (0:00:00.381) 0:00:01.161 ********** 2025-05-31 20:50:52.302562 | orchestrator | skipping: [testbed-node-3] 2025-05-31 20:50:52.303001 | orchestrator | 2025-05-31 20:50:52.304881 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 20:50:52.305698 | orchestrator | Saturday 31 May 2025 20:50:52 +0000 (0:00:00.447) 0:00:01.609 ********** 2025-05-31 20:50:52.523609 | orchestrator | skipping: [testbed-node-3] 2025-05-31 20:50:52.523717 | orchestrator | 2025-05-31 20:50:52.527273 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 20:50:52.529840 | orchestrator | Saturday 31 May 2025 20:50:52 +0000 (0:00:00.218) 0:00:01.827 ********** 2025-05-31 20:50:52.711516 | orchestrator | skipping: [testbed-node-3] 2025-05-31 20:50:52.712220 | orchestrator | 2025-05-31 20:50:52.713289 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 20:50:52.714623 | orchestrator | Saturday 31 May 2025 20:50:52 +0000 (0:00:00.192) 0:00:02.020 ********** 2025-05-31 20:50:52.891646 | orchestrator | skipping: [testbed-node-3] 2025-05-31 20:50:52.891940 | orchestrator | 2025-05-31 20:50:52.892553 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 20:50:52.893309 | orchestrator | Saturday 31 May 2025 20:50:52 +0000 (0:00:00.180) 0:00:02.200 ********** 2025-05-31 20:50:53.089750 | orchestrator | skipping: [testbed-node-3] 2025-05-31 20:50:53.090595 | orchestrator | 2025-05-31 20:50:53.091323 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 20:50:53.093314 | orchestrator | Saturday 31 May 2025 20:50:53 +0000 (0:00:00.196) 0:00:02.397 ********** 2025-05-31 20:50:53.276319 | orchestrator | skipping: [testbed-node-3] 2025-05-31 20:50:53.276597 | orchestrator | 2025-05-31 20:50:53.277495 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 20:50:53.277953 | orchestrator | Saturday 31 May 2025 20:50:53 +0000 (0:00:00.186) 0:00:02.584 ********** 2025-05-31 20:50:53.501995 | orchestrator | skipping: [testbed-node-3] 2025-05-31 20:50:53.502224 | orchestrator | 2025-05-31 20:50:53.504203 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 20:50:53.505153 | orchestrator | Saturday 31 May 2025 20:50:53 +0000 (0:00:00.223) 0:00:02.808 ********** 2025-05-31 20:50:53.675237 | orchestrator | skipping: [testbed-node-3] 2025-05-31 20:50:53.676023 | orchestrator | 2025-05-31 20:50:53.676915 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 20:50:53.678183 | orchestrator | Saturday 31 May 2025 20:50:53 +0000 (0:00:00.175) 0:00:02.983 ********** 2025-05-31 20:50:54.067381 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_8bbc83dc-8efd-4101-bb1b-0d6e4523fdf4) 2025-05-31 20:50:54.068041 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_8bbc83dc-8efd-4101-bb1b-0d6e4523fdf4) 2025-05-31 20:50:54.068851 | orchestrator | 2025-05-31 20:50:54.069231 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 20:50:54.069707 | orchestrator | Saturday 31 May 2025 20:50:54 
+0000 (0:00:00.392) 0:00:03.376 ********** 2025-05-31 20:50:54.458663 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_191d8892-ecee-415a-8f71-2d93b7558573) 2025-05-31 20:50:54.459598 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_191d8892-ecee-415a-8f71-2d93b7558573) 2025-05-31 20:50:54.462792 | orchestrator | 2025-05-31 20:50:54.462825 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 20:50:54.463403 | orchestrator | Saturday 31 May 2025 20:50:54 +0000 (0:00:00.390) 0:00:03.767 ********** 2025-05-31 20:50:55.065851 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_fb66f732-34d2-45e3-b1b8-d9ba2a3ac758) 2025-05-31 20:50:55.066105 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_fb66f732-34d2-45e3-b1b8-d9ba2a3ac758) 2025-05-31 20:50:55.066299 | orchestrator | 2025-05-31 20:50:55.066839 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 20:50:55.067572 | orchestrator | Saturday 31 May 2025 20:50:55 +0000 (0:00:00.607) 0:00:04.374 ********** 2025-05-31 20:50:55.690300 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_5a7b16a5-b25a-49dc-b8e1-bfe6cbb00610) 2025-05-31 20:50:55.690421 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_5a7b16a5-b25a-49dc-b8e1-bfe6cbb00610) 2025-05-31 20:50:55.690962 | orchestrator | 2025-05-31 20:50:55.691612 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 20:50:55.692138 | orchestrator | Saturday 31 May 2025 20:50:55 +0000 (0:00:00.623) 0:00:04.998 ********** 2025-05-31 20:50:56.373557 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-31 20:50:56.373726 | orchestrator | 2025-05-31 20:50:56.374555 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 20:50:56.375643 | orchestrator | Saturday 31 May 2025 20:50:56 +0000 (0:00:00.682) 0:00:05.680 ********** 2025-05-31 20:50:56.781402 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-05-31 20:50:56.782206 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-05-31 20:50:56.783392 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-05-31 20:50:56.783717 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-05-31 20:50:56.784669 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-05-31 20:50:56.785582 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-05-31 20:50:56.786270 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-05-31 20:50:56.787152 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-05-31 20:50:56.787541 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-05-31 20:50:56.788359 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-05-31 20:50:56.788784 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-05-31 
20:50:56.789619 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-05-31 20:50:56.789963 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-05-31 20:50:56.790658 | orchestrator | 2025-05-31 20:50:56.791361 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 20:50:56.791766 | orchestrator | Saturday 31 May 2025 20:50:56 +0000 (0:00:00.408) 0:00:06.089 ********** 2025-05-31 20:50:56.974581 | orchestrator | skipping: [testbed-node-3] 2025-05-31 20:50:56.975706 | orchestrator | 2025-05-31 20:50:56.976787 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 20:50:56.977966 | orchestrator | Saturday 31 May 2025 20:50:56 +0000 (0:00:00.193) 0:00:06.282 ********** 2025-05-31 20:50:57.200763 | orchestrator | skipping: [testbed-node-3] 2025-05-31 20:50:57.201279 | orchestrator | 2025-05-31 20:50:57.202176 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 20:50:57.203017 | orchestrator | Saturday 31 May 2025 20:50:57 +0000 (0:00:00.225) 0:00:06.508 ********** 2025-05-31 20:50:57.395002 | orchestrator | skipping: [testbed-node-3] 2025-05-31 20:50:57.395404 | orchestrator | 2025-05-31 20:50:57.396207 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 20:50:57.397153 | orchestrator | Saturday 31 May 2025 20:50:57 +0000 (0:00:00.195) 0:00:06.704 ********** 2025-05-31 20:50:57.583493 | orchestrator | skipping: [testbed-node-3] 2025-05-31 20:50:57.583713 | orchestrator | 2025-05-31 20:50:57.584566 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 20:50:57.585583 | orchestrator | Saturday 31 May 2025 20:50:57 +0000 (0:00:00.188) 0:00:06.892 ********** 2025-05-31 20:50:57.769692 | orchestrator | skipping: [testbed-node-3] 2025-05-31 20:50:57.770674 | orchestrator | 2025-05-31 20:50:57.771315 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 20:50:57.772137 | orchestrator | Saturday 31 May 2025 20:50:57 +0000 (0:00:00.185) 0:00:07.077 ********** 2025-05-31 20:50:57.955224 | orchestrator | skipping: [testbed-node-3] 2025-05-31 20:50:57.956734 | orchestrator | 2025-05-31 20:50:57.959045 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 20:50:57.960114 | orchestrator | Saturday 31 May 2025 20:50:57 +0000 (0:00:00.185) 0:00:07.262 ********** 2025-05-31 20:50:58.144198 | orchestrator | skipping: [testbed-node-3] 2025-05-31 20:50:58.144296 | orchestrator | 2025-05-31 20:50:58.144957 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 20:50:58.145669 | orchestrator | Saturday 31 May 2025 20:50:58 +0000 (0:00:00.189) 0:00:07.452 ********** 2025-05-31 20:50:58.322911 | orchestrator | skipping: [testbed-node-3] 2025-05-31 20:50:58.323658 | orchestrator | 2025-05-31 20:50:58.324733 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 20:50:58.325810 | orchestrator | Saturday 31 May 2025 20:50:58 +0000 (0:00:00.179) 0:00:07.631 ********** 2025-05-31 20:50:59.316600 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-05-31 20:50:59.316935 | orchestrator | ok: [testbed-node-3] => (item=sda14) 
2025-05-31 20:50:59.317877 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-05-31 20:50:59.318795 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-05-31 20:50:59.320039 | orchestrator | 2025-05-31 20:50:59.320960 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 20:50:59.321787 | orchestrator | Saturday 31 May 2025 20:50:59 +0000 (0:00:00.992) 0:00:08.623 ********** 2025-05-31 20:50:59.517567 | orchestrator | skipping: [testbed-node-3] 2025-05-31 20:50:59.518520 | orchestrator | 2025-05-31 20:50:59.519195 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 20:50:59.520158 | orchestrator | Saturday 31 May 2025 20:50:59 +0000 (0:00:00.201) 0:00:08.825 ********** 2025-05-31 20:50:59.707172 | orchestrator | skipping: [testbed-node-3] 2025-05-31 20:50:59.707542 | orchestrator | 2025-05-31 20:50:59.707968 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 20:50:59.708289 | orchestrator | Saturday 31 May 2025 20:50:59 +0000 (0:00:00.190) 0:00:09.015 ********** 2025-05-31 20:50:59.902225 | orchestrator | skipping: [testbed-node-3] 2025-05-31 20:50:59.902420 | orchestrator | 2025-05-31 20:50:59.903187 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 20:50:59.904126 | orchestrator | Saturday 31 May 2025 20:50:59 +0000 (0:00:00.194) 0:00:09.210 ********** 2025-05-31 20:51:00.091956 | orchestrator | skipping: [testbed-node-3] 2025-05-31 20:51:00.092572 | orchestrator | 2025-05-31 20:51:00.093216 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-05-31 20:51:00.094168 | orchestrator | Saturday 31 May 2025 20:51:00 +0000 (0:00:00.190) 0:00:09.400 ********** 2025-05-31 20:51:00.217195 | orchestrator | skipping: [testbed-node-3] 2025-05-31 20:51:00.217378 | orchestrator | 2025-05-31 20:51:00.217778 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-05-31 20:51:00.218333 | orchestrator | Saturday 31 May 2025 20:51:00 +0000 (0:00:00.125) 0:00:09.525 ********** 2025-05-31 20:51:00.398212 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '813d0644-8ada-5e52-b3d8-7484365c4567'}}) 2025-05-31 20:51:00.398429 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'b37e5891-99ec-5ce8-8fa7-674876c21edd'}}) 2025-05-31 20:51:00.398889 | orchestrator | 2025-05-31 20:51:00.399481 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-05-31 20:51:00.399505 | orchestrator | Saturday 31 May 2025 20:51:00 +0000 (0:00:00.181) 0:00:09.707 ********** 2025-05-31 20:51:02.594878 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-813d0644-8ada-5e52-b3d8-7484365c4567', 'data_vg': 'ceph-813d0644-8ada-5e52-b3d8-7484365c4567'}) 2025-05-31 20:51:02.595142 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-b37e5891-99ec-5ce8-8fa7-674876c21edd', 'data_vg': 'ceph-b37e5891-99ec-5ce8-8fa7-674876c21edd'}) 2025-05-31 20:51:02.596154 | orchestrator | 2025-05-31 20:51:02.597600 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-05-31 20:51:02.598152 | orchestrator | Saturday 31 May 2025 20:51:02 +0000 (0:00:02.194) 0:00:11.901 ********** 2025-05-31 20:51:02.740824 | orchestrator | 
skipping: [testbed-node-3] => (item={'data': 'osd-block-813d0644-8ada-5e52-b3d8-7484365c4567', 'data_vg': 'ceph-813d0644-8ada-5e52-b3d8-7484365c4567'})  2025-05-31 20:51:02.742471 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b37e5891-99ec-5ce8-8fa7-674876c21edd', 'data_vg': 'ceph-b37e5891-99ec-5ce8-8fa7-674876c21edd'})  2025-05-31 20:51:02.743132 | orchestrator | skipping: [testbed-node-3] 2025-05-31 20:51:02.743763 | orchestrator | 2025-05-31 20:51:02.744660 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-05-31 20:51:02.745517 | orchestrator | Saturday 31 May 2025 20:51:02 +0000 (0:00:00.147) 0:00:12.049 ********** 2025-05-31 20:51:04.107925 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-813d0644-8ada-5e52-b3d8-7484365c4567', 'data_vg': 'ceph-813d0644-8ada-5e52-b3d8-7484365c4567'}) 2025-05-31 20:51:04.108033 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-b37e5891-99ec-5ce8-8fa7-674876c21edd', 'data_vg': 'ceph-b37e5891-99ec-5ce8-8fa7-674876c21edd'}) 2025-05-31 20:51:04.109872 | orchestrator | 2025-05-31 20:51:04.110911 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-05-31 20:51:04.111816 | orchestrator | Saturday 31 May 2025 20:51:04 +0000 (0:00:01.365) 0:00:13.414 ********** 2025-05-31 20:51:04.248706 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-813d0644-8ada-5e52-b3d8-7484365c4567', 'data_vg': 'ceph-813d0644-8ada-5e52-b3d8-7484365c4567'})  2025-05-31 20:51:04.249537 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b37e5891-99ec-5ce8-8fa7-674876c21edd', 'data_vg': 'ceph-b37e5891-99ec-5ce8-8fa7-674876c21edd'})  2025-05-31 20:51:04.250346 | orchestrator | skipping: [testbed-node-3] 2025-05-31 20:51:04.250936 | orchestrator | 2025-05-31 20:51:04.252261 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-05-31 20:51:04.253126 | orchestrator | Saturday 31 May 2025 20:51:04 +0000 (0:00:00.142) 0:00:13.557 ********** 2025-05-31 20:51:04.381932 | orchestrator | skipping: [testbed-node-3] 2025-05-31 20:51:04.382542 | orchestrator | 2025-05-31 20:51:04.383305 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-05-31 20:51:04.383932 | orchestrator | Saturday 31 May 2025 20:51:04 +0000 (0:00:00.132) 0:00:13.689 ********** 2025-05-31 20:51:04.714312 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-813d0644-8ada-5e52-b3d8-7484365c4567', 'data_vg': 'ceph-813d0644-8ada-5e52-b3d8-7484365c4567'})  2025-05-31 20:51:04.715866 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b37e5891-99ec-5ce8-8fa7-674876c21edd', 'data_vg': 'ceph-b37e5891-99ec-5ce8-8fa7-674876c21edd'})  2025-05-31 20:51:04.716016 | orchestrator | skipping: [testbed-node-3] 2025-05-31 20:51:04.716733 | orchestrator | 2025-05-31 20:51:04.717377 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-05-31 20:51:04.718167 | orchestrator | Saturday 31 May 2025 20:51:04 +0000 (0:00:00.332) 0:00:14.022 ********** 2025-05-31 20:51:04.854877 | orchestrator | skipping: [testbed-node-3] 2025-05-31 20:51:04.855452 | orchestrator | 2025-05-31 20:51:04.856233 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-05-31 20:51:04.857409 | orchestrator | Saturday 31 May 2025 
20:51:04 +0000 (0:00:00.141) 0:00:14.164 ********** 2025-05-31 20:51:04.997856 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-813d0644-8ada-5e52-b3d8-7484365c4567', 'data_vg': 'ceph-813d0644-8ada-5e52-b3d8-7484365c4567'})  2025-05-31 20:51:04.997962 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b37e5891-99ec-5ce8-8fa7-674876c21edd', 'data_vg': 'ceph-b37e5891-99ec-5ce8-8fa7-674876c21edd'})  2025-05-31 20:51:04.998251 | orchestrator | skipping: [testbed-node-3] 2025-05-31 20:51:04.998912 | orchestrator | 2025-05-31 20:51:04.999335 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-05-31 20:51:05.000506 | orchestrator | Saturday 31 May 2025 20:51:04 +0000 (0:00:00.142) 0:00:14.306 ********** 2025-05-31 20:51:05.142330 | orchestrator | skipping: [testbed-node-3] 2025-05-31 20:51:05.143180 | orchestrator | 2025-05-31 20:51:05.143558 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-05-31 20:51:05.144448 | orchestrator | Saturday 31 May 2025 20:51:05 +0000 (0:00:00.144) 0:00:14.451 ********** 2025-05-31 20:51:05.281133 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-813d0644-8ada-5e52-b3d8-7484365c4567', 'data_vg': 'ceph-813d0644-8ada-5e52-b3d8-7484365c4567'})  2025-05-31 20:51:05.281394 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b37e5891-99ec-5ce8-8fa7-674876c21edd', 'data_vg': 'ceph-b37e5891-99ec-5ce8-8fa7-674876c21edd'})  2025-05-31 20:51:05.282679 | orchestrator | skipping: [testbed-node-3] 2025-05-31 20:51:05.283651 | orchestrator | 2025-05-31 20:51:05.284848 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-05-31 20:51:05.285941 | orchestrator | Saturday 31 May 2025 20:51:05 +0000 (0:00:00.138) 0:00:14.589 ********** 2025-05-31 20:51:05.425689 | orchestrator | ok: [testbed-node-3] 2025-05-31 20:51:05.426357 | orchestrator | 2025-05-31 20:51:05.427155 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-05-31 20:51:05.428026 | orchestrator | Saturday 31 May 2025 20:51:05 +0000 (0:00:00.144) 0:00:14.734 ********** 2025-05-31 20:51:05.579985 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-813d0644-8ada-5e52-b3d8-7484365c4567', 'data_vg': 'ceph-813d0644-8ada-5e52-b3d8-7484365c4567'})  2025-05-31 20:51:05.580737 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b37e5891-99ec-5ce8-8fa7-674876c21edd', 'data_vg': 'ceph-b37e5891-99ec-5ce8-8fa7-674876c21edd'})  2025-05-31 20:51:05.581762 | orchestrator | skipping: [testbed-node-3] 2025-05-31 20:51:05.582527 | orchestrator | 2025-05-31 20:51:05.584124 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-05-31 20:51:05.584151 | orchestrator | Saturday 31 May 2025 20:51:05 +0000 (0:00:00.153) 0:00:14.888 ********** 2025-05-31 20:51:05.732675 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-813d0644-8ada-5e52-b3d8-7484365c4567', 'data_vg': 'ceph-813d0644-8ada-5e52-b3d8-7484365c4567'})  2025-05-31 20:51:05.732854 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b37e5891-99ec-5ce8-8fa7-674876c21edd', 'data_vg': 'ceph-b37e5891-99ec-5ce8-8fa7-674876c21edd'})  2025-05-31 20:51:05.733510 | orchestrator | skipping: [testbed-node-3] 2025-05-31 20:51:05.733930 | orchestrator | 2025-05-31 20:51:05.734946 | 
orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-05-31 20:51:05.735456 | orchestrator | Saturday 31 May 2025 20:51:05 +0000 (0:00:00.150) 0:00:15.039 ********** 2025-05-31 20:51:05.875474 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-813d0644-8ada-5e52-b3d8-7484365c4567', 'data_vg': 'ceph-813d0644-8ada-5e52-b3d8-7484365c4567'})  2025-05-31 20:51:05.875685 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b37e5891-99ec-5ce8-8fa7-674876c21edd', 'data_vg': 'ceph-b37e5891-99ec-5ce8-8fa7-674876c21edd'})  2025-05-31 20:51:05.876140 | orchestrator | skipping: [testbed-node-3] 2025-05-31 20:51:05.877072 | orchestrator | 2025-05-31 20:51:05.877984 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-05-31 20:51:05.878626 | orchestrator | Saturday 31 May 2025 20:51:05 +0000 (0:00:00.144) 0:00:15.184 ********** 2025-05-31 20:51:06.011135 | orchestrator | skipping: [testbed-node-3] 2025-05-31 20:51:06.011705 | orchestrator | 2025-05-31 20:51:06.012260 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-05-31 20:51:06.012795 | orchestrator | Saturday 31 May 2025 20:51:06 +0000 (0:00:00.133) 0:00:15.317 ********** 2025-05-31 20:51:06.143772 | orchestrator | skipping: [testbed-node-3] 2025-05-31 20:51:06.143871 | orchestrator | 2025-05-31 20:51:06.144465 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-05-31 20:51:06.144657 | orchestrator | Saturday 31 May 2025 20:51:06 +0000 (0:00:00.134) 0:00:15.452 ********** 2025-05-31 20:51:06.288513 | orchestrator | skipping: [testbed-node-3] 2025-05-31 20:51:06.288991 | orchestrator | 2025-05-31 20:51:06.289711 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-05-31 20:51:06.290676 | orchestrator | Saturday 31 May 2025 20:51:06 +0000 (0:00:00.144) 0:00:15.597 ********** 2025-05-31 20:51:06.605334 | orchestrator | ok: [testbed-node-3] => { 2025-05-31 20:51:06.605594 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-05-31 20:51:06.606959 | orchestrator | } 2025-05-31 20:51:06.608002 | orchestrator | 2025-05-31 20:51:06.608666 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-05-31 20:51:06.609327 | orchestrator | Saturday 31 May 2025 20:51:06 +0000 (0:00:00.315) 0:00:15.912 ********** 2025-05-31 20:51:06.745774 | orchestrator | ok: [testbed-node-3] => { 2025-05-31 20:51:06.746476 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-05-31 20:51:06.747929 | orchestrator | } 2025-05-31 20:51:06.749308 | orchestrator | 2025-05-31 20:51:06.750185 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-05-31 20:51:06.750919 | orchestrator | Saturday 31 May 2025 20:51:06 +0000 (0:00:00.140) 0:00:16.052 ********** 2025-05-31 20:51:06.903592 | orchestrator | ok: [testbed-node-3] => { 2025-05-31 20:51:06.904419 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-05-31 20:51:06.905533 | orchestrator | } 2025-05-31 20:51:06.906986 | orchestrator | 2025-05-31 20:51:06.907358 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-05-31 20:51:06.908552 | orchestrator | Saturday 31 May 2025 20:51:06 +0000 (0:00:00.159) 0:00:16.212 ********** 2025-05-31 20:51:07.553203 | orchestrator | ok: [testbed-node-3] 
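
The "Create block VGs" and "Create block LVs" tasks above are where the disks are actually claimed: each lvm_volumes entry yields one VG on its backing device and one LV spanning that VG. The playbook source is not part of this log, so the following is only a sketch of the likely mechanism, assuming the community.general LVM modules and a helper mapping (here called _block_vgs_to_pvs, an assumed name) built by "Create dict of block VGs -> PVs":

    # Sketch, not the actual OSISM task code.
    - name: Create block VGs
      community.general.lvg:
        vg: "{{ item.data_vg }}"
        pvs: "{{ _block_vgs_to_pvs[item.data_vg] }}"  # e.g. /dev/sdb, per the PV/VG dict above
      loop: "{{ lvm_volumes }}"

    - name: Create block LVs
      community.general.lvol:
        vg: "{{ item.data_vg }}"
        lv: "{{ item.data }}"
        size: 100%VG
      loop: "{{ lvm_volumes }}"

The "Gather DB/WAL/DB+WAL VGs with total and available size in bytes" tasks that follow apparently wrap an LVM JSON report (something like vgs --units b --reportformat json -o vg_name,vg_size,vg_free), since the later vgs_report debug output carries exactly the {"vg": [...]} shape such a report produces.
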
2025-05-31 20:51:07.553537 | orchestrator | 2025-05-31 20:51:07.554475 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-05-31 20:51:07.554961 | orchestrator | Saturday 31 May 2025 20:51:07 +0000 (0:00:00.649) 0:00:16.861 ********** 2025-05-31 20:51:08.045360 | orchestrator | ok: [testbed-node-3] 2025-05-31 20:51:08.045870 | orchestrator | 2025-05-31 20:51:08.048571 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-05-31 20:51:08.049449 | orchestrator | Saturday 31 May 2025 20:51:08 +0000 (0:00:00.489) 0:00:17.351 ********** 2025-05-31 20:51:08.535394 | orchestrator | ok: [testbed-node-3] 2025-05-31 20:51:08.535564 | orchestrator | 2025-05-31 20:51:08.535972 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-05-31 20:51:08.536642 | orchestrator | Saturday 31 May 2025 20:51:08 +0000 (0:00:00.491) 0:00:17.843 ********** 2025-05-31 20:51:08.675222 | orchestrator | ok: [testbed-node-3] 2025-05-31 20:51:08.675523 | orchestrator | 2025-05-31 20:51:08.676216 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-05-31 20:51:08.676548 | orchestrator | Saturday 31 May 2025 20:51:08 +0000 (0:00:00.140) 0:00:17.984 ********** 2025-05-31 20:51:08.778684 | orchestrator | skipping: [testbed-node-3] 2025-05-31 20:51:08.779139 | orchestrator | 2025-05-31 20:51:08.779909 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-05-31 20:51:08.780824 | orchestrator | Saturday 31 May 2025 20:51:08 +0000 (0:00:00.102) 0:00:18.087 ********** 2025-05-31 20:51:08.894605 | orchestrator | skipping: [testbed-node-3] 2025-05-31 20:51:08.895440 | orchestrator | 2025-05-31 20:51:08.896261 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-05-31 20:51:08.898239 | orchestrator | Saturday 31 May 2025 20:51:08 +0000 (0:00:00.115) 0:00:18.203 ********** 2025-05-31 20:51:09.028382 | orchestrator | ok: [testbed-node-3] => { 2025-05-31 20:51:09.029451 | orchestrator |  "vgs_report": { 2025-05-31 20:51:09.030535 | orchestrator |  "vg": [] 2025-05-31 20:51:09.031521 | orchestrator |  } 2025-05-31 20:51:09.033279 | orchestrator | } 2025-05-31 20:51:09.033589 | orchestrator | 2025-05-31 20:51:09.034386 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-05-31 20:51:09.035641 | orchestrator | Saturday 31 May 2025 20:51:09 +0000 (0:00:00.133) 0:00:18.336 ********** 2025-05-31 20:51:09.151241 | orchestrator | skipping: [testbed-node-3] 2025-05-31 20:51:09.151915 | orchestrator | 2025-05-31 20:51:09.153100 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-05-31 20:51:09.155018 | orchestrator | Saturday 31 May 2025 20:51:09 +0000 (0:00:00.122) 0:00:18.459 ********** 2025-05-31 20:51:09.271830 | orchestrator | skipping: [testbed-node-3] 2025-05-31 20:51:09.272839 | orchestrator | 2025-05-31 20:51:09.273773 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-05-31 20:51:09.275616 | orchestrator | Saturday 31 May 2025 20:51:09 +0000 (0:00:00.120) 0:00:18.579 ********** 2025-05-31 20:51:09.597081 | orchestrator | skipping: [testbed-node-3] 2025-05-31 20:51:09.597439 | orchestrator | 2025-05-31 20:51:09.598393 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] 
******************* 2025-05-31 20:51:09.599380 | orchestrator | Saturday 31 May 2025 20:51:09 +0000 (0:00:00.326) 0:00:18.906 ********** 2025-05-31 20:51:09.728428 | orchestrator | skipping: [testbed-node-3] 2025-05-31 20:51:09.729555 | orchestrator | 2025-05-31 20:51:09.730140 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-05-31 20:51:09.731391 | orchestrator | Saturday 31 May 2025 20:51:09 +0000 (0:00:00.131) 0:00:19.037 ********** 2025-05-31 20:51:09.869490 | orchestrator | skipping: [testbed-node-3] 2025-05-31 20:51:09.869804 | orchestrator | 2025-05-31 20:51:09.870654 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-05-31 20:51:09.872576 | orchestrator | Saturday 31 May 2025 20:51:09 +0000 (0:00:00.139) 0:00:19.176 ********** 2025-05-31 20:51:09.995190 | orchestrator | skipping: [testbed-node-3] 2025-05-31 20:51:09.996204 | orchestrator | 2025-05-31 20:51:09.996603 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-05-31 20:51:09.997452 | orchestrator | Saturday 31 May 2025 20:51:09 +0000 (0:00:00.127) 0:00:19.304 ********** 2025-05-31 20:51:10.145101 | orchestrator | skipping: [testbed-node-3] 2025-05-31 20:51:10.145658 | orchestrator | 2025-05-31 20:51:10.146467 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-05-31 20:51:10.147098 | orchestrator | Saturday 31 May 2025 20:51:10 +0000 (0:00:00.149) 0:00:19.454 ********** 2025-05-31 20:51:10.279141 | orchestrator | skipping: [testbed-node-3] 2025-05-31 20:51:10.279431 | orchestrator | 2025-05-31 20:51:10.281107 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-05-31 20:51:10.282244 | orchestrator | Saturday 31 May 2025 20:51:10 +0000 (0:00:00.132) 0:00:19.586 ********** 2025-05-31 20:51:10.412635 | orchestrator | skipping: [testbed-node-3] 2025-05-31 20:51:10.412824 | orchestrator | 2025-05-31 20:51:10.413561 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-05-31 20:51:10.414144 | orchestrator | Saturday 31 May 2025 20:51:10 +0000 (0:00:00.134) 0:00:19.721 ********** 2025-05-31 20:51:10.530306 | orchestrator | skipping: [testbed-node-3] 2025-05-31 20:51:10.530838 | orchestrator | 2025-05-31 20:51:10.531656 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-05-31 20:51:10.532402 | orchestrator | Saturday 31 May 2025 20:51:10 +0000 (0:00:00.118) 0:00:19.839 ********** 2025-05-31 20:51:10.658439 | orchestrator | skipping: [testbed-node-3] 2025-05-31 20:51:10.659201 | orchestrator | 2025-05-31 20:51:10.659835 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-05-31 20:51:10.660683 | orchestrator | Saturday 31 May 2025 20:51:10 +0000 (0:00:00.127) 0:00:19.966 ********** 2025-05-31 20:51:10.794189 | orchestrator | skipping: [testbed-node-3] 2025-05-31 20:51:10.795532 | orchestrator | 2025-05-31 20:51:10.796327 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-05-31 20:51:10.798516 | orchestrator | Saturday 31 May 2025 20:51:10 +0000 (0:00:00.135) 0:00:20.102 ********** 2025-05-31 20:51:10.934825 | orchestrator | skipping: [testbed-node-3] 2025-05-31 20:51:10.935588 | orchestrator | 2025-05-31 20:51:10.937284 | orchestrator | TASK [Fail if DB LV size < 30 GiB for 
ceph_db_wal_devices] ********************* 2025-05-31 20:51:10.938686 | orchestrator | Saturday 31 May 2025 20:51:10 +0000 (0:00:00.140) 0:00:20.243 ********** 2025-05-31 20:51:11.073154 | orchestrator | skipping: [testbed-node-3] 2025-05-31 20:51:11.073734 | orchestrator | 2025-05-31 20:51:11.074992 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-05-31 20:51:11.076072 | orchestrator | Saturday 31 May 2025 20:51:11 +0000 (0:00:00.138) 0:00:20.381 ********** 2025-05-31 20:51:11.223883 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-813d0644-8ada-5e52-b3d8-7484365c4567', 'data_vg': 'ceph-813d0644-8ada-5e52-b3d8-7484365c4567'})  2025-05-31 20:51:11.223991 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b37e5891-99ec-5ce8-8fa7-674876c21edd', 'data_vg': 'ceph-b37e5891-99ec-5ce8-8fa7-674876c21edd'})  2025-05-31 20:51:11.224120 | orchestrator | skipping: [testbed-node-3] 2025-05-31 20:51:11.224770 | orchestrator | 2025-05-31 20:51:11.225730 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-05-31 20:51:11.226262 | orchestrator | Saturday 31 May 2025 20:51:11 +0000 (0:00:00.150) 0:00:20.532 ********** 2025-05-31 20:51:11.558821 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-813d0644-8ada-5e52-b3d8-7484365c4567', 'data_vg': 'ceph-813d0644-8ada-5e52-b3d8-7484365c4567'})  2025-05-31 20:51:11.559144 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b37e5891-99ec-5ce8-8fa7-674876c21edd', 'data_vg': 'ceph-b37e5891-99ec-5ce8-8fa7-674876c21edd'})  2025-05-31 20:51:11.560418 | orchestrator | skipping: [testbed-node-3] 2025-05-31 20:51:11.561637 | orchestrator | 2025-05-31 20:51:11.562539 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-05-31 20:51:11.563409 | orchestrator | Saturday 31 May 2025 20:51:11 +0000 (0:00:00.334) 0:00:20.867 ********** 2025-05-31 20:51:11.708227 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-813d0644-8ada-5e52-b3d8-7484365c4567', 'data_vg': 'ceph-813d0644-8ada-5e52-b3d8-7484365c4567'})  2025-05-31 20:51:11.709098 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b37e5891-99ec-5ce8-8fa7-674876c21edd', 'data_vg': 'ceph-b37e5891-99ec-5ce8-8fa7-674876c21edd'})  2025-05-31 20:51:11.710009 | orchestrator | skipping: [testbed-node-3] 2025-05-31 20:51:11.711278 | orchestrator | 2025-05-31 20:51:11.712776 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-05-31 20:51:11.713385 | orchestrator | Saturday 31 May 2025 20:51:11 +0000 (0:00:00.149) 0:00:21.016 ********** 2025-05-31 20:51:11.869207 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-813d0644-8ada-5e52-b3d8-7484365c4567', 'data_vg': 'ceph-813d0644-8ada-5e52-b3d8-7484365c4567'})  2025-05-31 20:51:11.869877 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b37e5891-99ec-5ce8-8fa7-674876c21edd', 'data_vg': 'ceph-b37e5891-99ec-5ce8-8fa7-674876c21edd'})  2025-05-31 20:51:11.871034 | orchestrator | skipping: [testbed-node-3] 2025-05-31 20:51:11.872071 | orchestrator | 2025-05-31 20:51:11.873003 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-05-31 20:51:11.873719 | orchestrator | Saturday 31 May 2025 20:51:11 +0000 (0:00:00.158) 0:00:21.175 ********** 2025-05-31 20:51:12.021305 | 
orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-813d0644-8ada-5e52-b3d8-7484365c4567', 'data_vg': 'ceph-813d0644-8ada-5e52-b3d8-7484365c4567'})  2025-05-31 20:51:12.024196 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b37e5891-99ec-5ce8-8fa7-674876c21edd', 'data_vg': 'ceph-b37e5891-99ec-5ce8-8fa7-674876c21edd'})  2025-05-31 20:51:12.024927 | orchestrator | skipping: [testbed-node-3] 2025-05-31 20:51:12.025748 | orchestrator | 2025-05-31 20:51:12.026876 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-05-31 20:51:12.027952 | orchestrator | Saturday 31 May 2025 20:51:12 +0000 (0:00:00.153) 0:00:21.329 ********** 2025-05-31 20:51:12.165561 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-813d0644-8ada-5e52-b3d8-7484365c4567', 'data_vg': 'ceph-813d0644-8ada-5e52-b3d8-7484365c4567'})  2025-05-31 20:51:12.165753 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b37e5891-99ec-5ce8-8fa7-674876c21edd', 'data_vg': 'ceph-b37e5891-99ec-5ce8-8fa7-674876c21edd'})  2025-05-31 20:51:12.165775 | orchestrator | skipping: [testbed-node-3] 2025-05-31 20:51:12.166548 | orchestrator | 2025-05-31 20:51:12.167180 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-05-31 20:51:12.167822 | orchestrator | Saturday 31 May 2025 20:51:12 +0000 (0:00:00.143) 0:00:21.472 ********** 2025-05-31 20:51:12.315208 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-813d0644-8ada-5e52-b3d8-7484365c4567', 'data_vg': 'ceph-813d0644-8ada-5e52-b3d8-7484365c4567'})  2025-05-31 20:51:12.315712 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b37e5891-99ec-5ce8-8fa7-674876c21edd', 'data_vg': 'ceph-b37e5891-99ec-5ce8-8fa7-674876c21edd'})  2025-05-31 20:51:12.315808 | orchestrator | skipping: [testbed-node-3] 2025-05-31 20:51:12.316863 | orchestrator | 2025-05-31 20:51:12.317505 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-05-31 20:51:12.318143 | orchestrator | Saturday 31 May 2025 20:51:12 +0000 (0:00:00.151) 0:00:21.623 ********** 2025-05-31 20:51:12.477490 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-813d0644-8ada-5e52-b3d8-7484365c4567', 'data_vg': 'ceph-813d0644-8ada-5e52-b3d8-7484365c4567'})  2025-05-31 20:51:12.478669 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b37e5891-99ec-5ce8-8fa7-674876c21edd', 'data_vg': 'ceph-b37e5891-99ec-5ce8-8fa7-674876c21edd'})  2025-05-31 20:51:12.479563 | orchestrator | skipping: [testbed-node-3] 2025-05-31 20:51:12.480391 | orchestrator | 2025-05-31 20:51:12.481641 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-05-31 20:51:12.482149 | orchestrator | Saturday 31 May 2025 20:51:12 +0000 (0:00:00.162) 0:00:21.786 ********** 2025-05-31 20:51:12.973283 | orchestrator | ok: [testbed-node-3] 2025-05-31 20:51:12.973552 | orchestrator | 2025-05-31 20:51:12.974436 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-05-31 20:51:12.974872 | orchestrator | Saturday 31 May 2025 20:51:12 +0000 (0:00:00.493) 0:00:22.280 ********** 2025-05-31 20:51:13.473185 | orchestrator | ok: [testbed-node-3] 2025-05-31 20:51:13.474090 | orchestrator | 2025-05-31 20:51:13.474776 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] 
*********************** 2025-05-31 20:51:13.475552 | orchestrator | Saturday 31 May 2025 20:51:13 +0000 (0:00:00.501) 0:00:22.781 ********** 2025-05-31 20:51:13.621731 | orchestrator | ok: [testbed-node-3] 2025-05-31 20:51:13.621875 | orchestrator | 2025-05-31 20:51:13.621983 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-05-31 20:51:13.622688 | orchestrator | Saturday 31 May 2025 20:51:13 +0000 (0:00:00.147) 0:00:22.929 ********** 2025-05-31 20:51:13.787662 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-813d0644-8ada-5e52-b3d8-7484365c4567', 'vg_name': 'ceph-813d0644-8ada-5e52-b3d8-7484365c4567'}) 2025-05-31 20:51:13.788391 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-b37e5891-99ec-5ce8-8fa7-674876c21edd', 'vg_name': 'ceph-b37e5891-99ec-5ce8-8fa7-674876c21edd'}) 2025-05-31 20:51:13.789352 | orchestrator | 2025-05-31 20:51:13.789862 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-05-31 20:51:13.790867 | orchestrator | Saturday 31 May 2025 20:51:13 +0000 (0:00:00.166) 0:00:23.096 ********** 2025-05-31 20:51:13.931994 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-813d0644-8ada-5e52-b3d8-7484365c4567', 'data_vg': 'ceph-813d0644-8ada-5e52-b3d8-7484365c4567'})  2025-05-31 20:51:13.932825 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b37e5891-99ec-5ce8-8fa7-674876c21edd', 'data_vg': 'ceph-b37e5891-99ec-5ce8-8fa7-674876c21edd'})  2025-05-31 20:51:13.935707 | orchestrator | skipping: [testbed-node-3] 2025-05-31 20:51:13.935770 | orchestrator | 2025-05-31 20:51:13.935783 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-05-31 20:51:13.935841 | orchestrator | Saturday 31 May 2025 20:51:13 +0000 (0:00:00.142) 0:00:23.239 ********** 2025-05-31 20:51:14.280872 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-813d0644-8ada-5e52-b3d8-7484365c4567', 'data_vg': 'ceph-813d0644-8ada-5e52-b3d8-7484365c4567'})  2025-05-31 20:51:14.281256 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b37e5891-99ec-5ce8-8fa7-674876c21edd', 'data_vg': 'ceph-b37e5891-99ec-5ce8-8fa7-674876c21edd'})  2025-05-31 20:51:14.281728 | orchestrator | skipping: [testbed-node-3] 2025-05-31 20:51:14.282566 | orchestrator | 2025-05-31 20:51:14.283365 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-05-31 20:51:14.284668 | orchestrator | Saturday 31 May 2025 20:51:14 +0000 (0:00:00.349) 0:00:23.588 ********** 2025-05-31 20:51:14.436449 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-813d0644-8ada-5e52-b3d8-7484365c4567', 'data_vg': 'ceph-813d0644-8ada-5e52-b3d8-7484365c4567'})  2025-05-31 20:51:14.437157 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b37e5891-99ec-5ce8-8fa7-674876c21edd', 'data_vg': 'ceph-b37e5891-99ec-5ce8-8fa7-674876c21edd'})  2025-05-31 20:51:14.438239 | orchestrator | skipping: [testbed-node-3] 2025-05-31 20:51:14.439095 | orchestrator | 2025-05-31 20:51:14.440292 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-05-31 20:51:14.441166 | orchestrator | Saturday 31 May 2025 20:51:14 +0000 (0:00:00.155) 0:00:23.744 ********** 2025-05-31 20:51:14.714680 | orchestrator | ok: [testbed-node-3] => { 2025-05-31 20:51:14.716544 | orchestrator |  "lvm_report": { 
2025-05-31 20:51:14.717176 | orchestrator |  "lv": [ 2025-05-31 20:51:14.718641 | orchestrator |  { 2025-05-31 20:51:14.719267 | orchestrator |  "lv_name": "osd-block-813d0644-8ada-5e52-b3d8-7484365c4567", 2025-05-31 20:51:14.720169 | orchestrator |  "vg_name": "ceph-813d0644-8ada-5e52-b3d8-7484365c4567" 2025-05-31 20:51:14.720819 | orchestrator |  }, 2025-05-31 20:51:14.721719 | orchestrator |  { 2025-05-31 20:51:14.722618 | orchestrator |  "lv_name": "osd-block-b37e5891-99ec-5ce8-8fa7-674876c21edd", 2025-05-31 20:51:14.723352 | orchestrator |  "vg_name": "ceph-b37e5891-99ec-5ce8-8fa7-674876c21edd" 2025-05-31 20:51:14.723738 | orchestrator |  } 2025-05-31 20:51:14.724434 | orchestrator |  ], 2025-05-31 20:51:14.724833 | orchestrator |  "pv": [ 2025-05-31 20:51:14.726586 | orchestrator |  { 2025-05-31 20:51:14.726613 | orchestrator |  "pv_name": "/dev/sdb", 2025-05-31 20:51:14.726625 | orchestrator |  "vg_name": "ceph-813d0644-8ada-5e52-b3d8-7484365c4567" 2025-05-31 20:51:14.726636 | orchestrator |  }, 2025-05-31 20:51:14.726913 | orchestrator |  { 2025-05-31 20:51:14.727584 | orchestrator |  "pv_name": "/dev/sdc", 2025-05-31 20:51:14.727814 | orchestrator |  "vg_name": "ceph-b37e5891-99ec-5ce8-8fa7-674876c21edd" 2025-05-31 20:51:14.728265 | orchestrator |  } 2025-05-31 20:51:14.728646 | orchestrator |  ] 2025-05-31 20:51:14.729128 | orchestrator |  } 2025-05-31 20:51:14.729528 | orchestrator | } 2025-05-31 20:51:14.729919 | orchestrator | 2025-05-31 20:51:14.730213 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-05-31 20:51:14.730622 | orchestrator | 2025-05-31 20:51:14.731014 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-05-31 20:51:14.731335 | orchestrator | Saturday 31 May 2025 20:51:14 +0000 (0:00:00.278) 0:00:24.023 ********** 2025-05-31 20:51:14.954932 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-05-31 20:51:14.955031 | orchestrator | 2025-05-31 20:51:14.956274 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-05-31 20:51:14.956775 | orchestrator | Saturday 31 May 2025 20:51:14 +0000 (0:00:00.238) 0:00:24.261 ********** 2025-05-31 20:51:15.191742 | orchestrator | ok: [testbed-node-4] 2025-05-31 20:51:15.191960 | orchestrator | 2025-05-31 20:51:15.192805 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 20:51:15.193882 | orchestrator | Saturday 31 May 2025 20:51:15 +0000 (0:00:00.239) 0:00:24.500 ********** 2025-05-31 20:51:15.599183 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-05-31 20:51:15.599416 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-05-31 20:51:15.600470 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-05-31 20:51:15.600832 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-05-31 20:51:15.601679 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-05-31 20:51:15.602173 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-05-31 20:51:15.602899 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-05-31 20:51:15.605428 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-05-31 20:51:15.605449 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-05-31 20:51:15.605862 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-05-31 20:51:15.606309 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-05-31 20:51:15.606801 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-05-31 20:51:15.607813 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-05-31 20:51:15.607991 | orchestrator | 2025-05-31 20:51:15.608255 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 20:51:15.609036 | orchestrator | Saturday 31 May 2025 20:51:15 +0000 (0:00:00.407) 0:00:24.908 ********** 2025-05-31 20:51:15.794498 | orchestrator | skipping: [testbed-node-4] 2025-05-31 20:51:15.794599 | orchestrator | 2025-05-31 20:51:15.794698 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 20:51:15.797654 | orchestrator | Saturday 31 May 2025 20:51:15 +0000 (0:00:00.194) 0:00:25.102 ********** 2025-05-31 20:51:15.974452 | orchestrator | skipping: [testbed-node-4] 2025-05-31 20:51:15.974678 | orchestrator | 2025-05-31 20:51:15.975248 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 20:51:15.975679 | orchestrator | Saturday 31 May 2025 20:51:15 +0000 (0:00:00.178) 0:00:25.280 ********** 2025-05-31 20:51:16.144227 | orchestrator | skipping: [testbed-node-4] 2025-05-31 20:51:16.144344 | orchestrator | 2025-05-31 20:51:16.144437 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 20:51:16.144933 | orchestrator | Saturday 31 May 2025 20:51:16 +0000 (0:00:00.171) 0:00:25.452 ********** 2025-05-31 20:51:16.709648 | orchestrator | skipping: [testbed-node-4] 2025-05-31 20:51:16.709736 | orchestrator | 2025-05-31 20:51:16.710185 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 20:51:16.710759 | orchestrator | Saturday 31 May 2025 20:51:16 +0000 (0:00:00.564) 0:00:26.017 ********** 2025-05-31 20:51:16.902684 | orchestrator | skipping: [testbed-node-4] 2025-05-31 20:51:16.903479 | orchestrator | 2025-05-31 20:51:16.904357 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 20:51:16.905101 | orchestrator | Saturday 31 May 2025 20:51:16 +0000 (0:00:00.193) 0:00:26.211 ********** 2025-05-31 20:51:17.093114 | orchestrator | skipping: [testbed-node-4] 2025-05-31 20:51:17.093781 | orchestrator | 2025-05-31 20:51:17.094627 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 20:51:17.095637 | orchestrator | Saturday 31 May 2025 20:51:17 +0000 (0:00:00.190) 0:00:26.401 ********** 2025-05-31 20:51:17.289010 | orchestrator | skipping: [testbed-node-4] 2025-05-31 20:51:17.289542 | orchestrator | 2025-05-31 20:51:17.290248 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 20:51:17.291657 | orchestrator | Saturday 31 May 2025 20:51:17 +0000 (0:00:00.194) 0:00:26.595 ********** 2025-05-31 20:51:17.477900 | orchestrator | skipping: [testbed-node-4] 
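
Each pass through _add-device-links.yml above either skips (no stable by-id names, e.g. the loop devices) or records the /dev/disk/by-id aliases of a disk, which is how the scsi-0QEMU_.../scsi-SQEMU_... and ata-QEMU_DVD-ROM_... names enter the candidate list. The included file is not reproduced in this log; a hypothetical reconstruction of its core step, based on these names matching what Ansible reports under ansible_facts.devices.<dev>.links.ids (the list and loop variable names are assumptions):

    # Hypothetical core of _add-device-links.yml; 'device' stands in for the
    # include parameter (item=sda, item=sdb, ... in the log above).
    - name: Add known links to the list of available block devices
      ansible.builtin.set_fact:
        _available_block_devices: "{{ _available_block_devices + [item] }}"
      loop: "{{ ansible_facts.devices[device].links.ids }}"

This matches the per-id "ok: ... => (item=scsi-...)" results seen for sda through sdd and sr0, while devices without by-id aliases come out as plain "skipping" entries (presumably via a when guard or an empty loop).
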
2025-05-31 20:51:17.478194 | orchestrator | 2025-05-31 20:51:17.478860 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 20:51:17.479566 | orchestrator | Saturday 31 May 2025 20:51:17 +0000 (0:00:00.190) 0:00:26.786 ********** 2025-05-31 20:51:17.881724 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_525ad027-7e06-4e85-bfbd-c3ec419229c5) 2025-05-31 20:51:17.882145 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_525ad027-7e06-4e85-bfbd-c3ec419229c5) 2025-05-31 20:51:17.882785 | orchestrator | 2025-05-31 20:51:17.883682 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 20:51:17.885109 | orchestrator | Saturday 31 May 2025 20:51:17 +0000 (0:00:00.403) 0:00:27.190 ********** 2025-05-31 20:51:18.304983 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_a9241271-625e-4229-94b1-3d99bba363ae) 2025-05-31 20:51:18.307644 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_a9241271-625e-4229-94b1-3d99bba363ae) 2025-05-31 20:51:18.307695 | orchestrator | 2025-05-31 20:51:18.308252 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 20:51:18.309379 | orchestrator | Saturday 31 May 2025 20:51:18 +0000 (0:00:00.421) 0:00:27.611 ********** 2025-05-31 20:51:18.714472 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_1a9ee9a4-914c-40fd-b835-c38474fb60e8) 2025-05-31 20:51:18.714680 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_1a9ee9a4-914c-40fd-b835-c38474fb60e8) 2025-05-31 20:51:18.715559 | orchestrator | 2025-05-31 20:51:18.716249 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 20:51:18.717870 | orchestrator | Saturday 31 May 2025 20:51:18 +0000 (0:00:00.410) 0:00:28.022 ********** 2025-05-31 20:51:19.112689 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_9b14a296-0b0f-456e-ac69-f453c0a27a39) 2025-05-31 20:51:19.113571 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_9b14a296-0b0f-456e-ac69-f453c0a27a39) 2025-05-31 20:51:19.114144 | orchestrator | 2025-05-31 20:51:19.114862 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 20:51:19.115802 | orchestrator | Saturday 31 May 2025 20:51:19 +0000 (0:00:00.397) 0:00:28.420 ********** 2025-05-31 20:51:19.427796 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-31 20:51:19.427929 | orchestrator | 2025-05-31 20:51:19.428795 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 20:51:19.429400 | orchestrator | Saturday 31 May 2025 20:51:19 +0000 (0:00:00.316) 0:00:28.736 ********** 2025-05-31 20:51:20.010262 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-05-31 20:51:20.010965 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-05-31 20:51:20.014174 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-05-31 20:51:20.014201 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-05-31 20:51:20.014633 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => 
(item=loop4) 2025-05-31 20:51:20.015636 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-05-31 20:51:20.016609 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-05-31 20:51:20.017725 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-05-31 20:51:20.019024 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-05-31 20:51:20.020318 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-05-31 20:51:20.021154 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-05-31 20:51:20.021406 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-05-31 20:51:20.021996 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-05-31 20:51:20.022366 | orchestrator | 2025-05-31 20:51:20.023385 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 20:51:20.023707 | orchestrator | Saturday 31 May 2025 20:51:19 +0000 (0:00:00.579) 0:00:29.316 ********** 2025-05-31 20:51:20.208544 | orchestrator | skipping: [testbed-node-4] 2025-05-31 20:51:20.208930 | orchestrator | 2025-05-31 20:51:20.210325 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 20:51:20.211235 | orchestrator | Saturday 31 May 2025 20:51:20 +0000 (0:00:00.199) 0:00:29.516 ********** 2025-05-31 20:51:20.414965 | orchestrator | skipping: [testbed-node-4] 2025-05-31 20:51:20.415625 | orchestrator | 2025-05-31 20:51:20.416715 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 20:51:20.417685 | orchestrator | Saturday 31 May 2025 20:51:20 +0000 (0:00:00.207) 0:00:29.723 ********** 2025-05-31 20:51:20.600792 | orchestrator | skipping: [testbed-node-4] 2025-05-31 20:51:20.601588 | orchestrator | 2025-05-31 20:51:20.602859 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 20:51:20.603642 | orchestrator | Saturday 31 May 2025 20:51:20 +0000 (0:00:00.186) 0:00:29.909 ********** 2025-05-31 20:51:20.789430 | orchestrator | skipping: [testbed-node-4] 2025-05-31 20:51:20.789908 | orchestrator | 2025-05-31 20:51:20.790880 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 20:51:20.791634 | orchestrator | Saturday 31 May 2025 20:51:20 +0000 (0:00:00.187) 0:00:30.097 ********** 2025-05-31 20:51:20.993130 | orchestrator | skipping: [testbed-node-4] 2025-05-31 20:51:20.993980 | orchestrator | 2025-05-31 20:51:20.996975 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 20:51:20.998230 | orchestrator | Saturday 31 May 2025 20:51:20 +0000 (0:00:00.204) 0:00:30.301 ********** 2025-05-31 20:51:21.205656 | orchestrator | skipping: [testbed-node-4] 2025-05-31 20:51:21.208440 | orchestrator | 2025-05-31 20:51:21.208534 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 20:51:21.209419 | orchestrator | Saturday 31 May 2025 20:51:21 +0000 (0:00:00.212) 0:00:30.514 ********** 2025-05-31 20:51:21.396115 | orchestrator | skipping: [testbed-node-4] 
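Similarly, _add-device-partitions.yml appends any partitions of a device to the list. A minimal sketch under the same assumptions, reading partition names from sysfs (the real task may gather them differently):

    # _add-device-partitions.yml (illustrative sketch)
    - name: Collect partitions of {{ item }} from sysfs
      ansible.builtin.find:
        paths: "/sys/block/{{ item }}"
        file_type: directory
        patterns: "{{ item }}*"      # e.g. sda1, sda14, sda15, sda16
      register: _partitions

    - name: Add known partitions to the list of available block devices
      ansible.builtin.set_fact:
        _available_devices: "{{ _available_devices + [part.path | basename] }}"
      loop: "{{ _partitions.files }}"
      loop_control:
        loop_var: part

On these nodes only sda carries partitions, which is why sda1/sda14/sda15/sda16 are added below while every other device skips.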
2025-05-31 20:51:21.396325 | orchestrator | 2025-05-31 20:51:21.397347 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 20:51:21.398189 | orchestrator | Saturday 31 May 2025 20:51:21 +0000 (0:00:00.190) 0:00:30.704 ********** 2025-05-31 20:51:21.587572 | orchestrator | skipping: [testbed-node-4] 2025-05-31 20:51:21.587783 | orchestrator | 2025-05-31 20:51:21.588500 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 20:51:21.589116 | orchestrator | Saturday 31 May 2025 20:51:21 +0000 (0:00:00.191) 0:00:30.895 ********** 2025-05-31 20:51:22.384290 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-05-31 20:51:22.384741 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-05-31 20:51:22.385750 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-05-31 20:51:22.386579 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-05-31 20:51:22.388472 | orchestrator | 2025-05-31 20:51:22.388730 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 20:51:22.389145 | orchestrator | Saturday 31 May 2025 20:51:22 +0000 (0:00:00.795) 0:00:31.691 ********** 2025-05-31 20:51:22.596336 | orchestrator | skipping: [testbed-node-4] 2025-05-31 20:51:22.596464 | orchestrator | 2025-05-31 20:51:22.598891 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 20:51:22.598918 | orchestrator | Saturday 31 May 2025 20:51:22 +0000 (0:00:00.209) 0:00:31.901 ********** 2025-05-31 20:51:22.801950 | orchestrator | skipping: [testbed-node-4] 2025-05-31 20:51:22.802688 | orchestrator | 2025-05-31 20:51:22.804125 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 20:51:22.804583 | orchestrator | Saturday 31 May 2025 20:51:22 +0000 (0:00:00.208) 0:00:32.110 ********** 2025-05-31 20:51:23.447259 | orchestrator | skipping: [testbed-node-4] 2025-05-31 20:51:23.448426 | orchestrator | 2025-05-31 20:51:23.450451 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 20:51:23.450512 | orchestrator | Saturday 31 May 2025 20:51:23 +0000 (0:00:00.644) 0:00:32.754 ********** 2025-05-31 20:51:23.667246 | orchestrator | skipping: [testbed-node-4] 2025-05-31 20:51:23.667493 | orchestrator | 2025-05-31 20:51:23.668541 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-05-31 20:51:23.668930 | orchestrator | Saturday 31 May 2025 20:51:23 +0000 (0:00:00.220) 0:00:32.974 ********** 2025-05-31 20:51:23.797286 | orchestrator | skipping: [testbed-node-4] 2025-05-31 20:51:23.797621 | orchestrator | 2025-05-31 20:51:23.798669 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-05-31 20:51:23.799744 | orchestrator | Saturday 31 May 2025 20:51:23 +0000 (0:00:00.130) 0:00:33.105 ********** 2025-05-31 20:51:23.977807 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '7717ad38-094f-5aa6-8c39-f28029f817d5'}}) 2025-05-31 20:51:23.978222 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '6fa9e552-f12f-547e-b45f-d034b93383af'}}) 2025-05-31 20:51:23.979038 | orchestrator | 2025-05-31 20:51:23.979437 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-05-31 20:51:23.979981 | 
orchestrator | Saturday 31 May 2025 20:51:23 +0000 (0:00:00.180) 0:00:33.286 ********** 2025-05-31 20:51:26.016642 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-7717ad38-094f-5aa6-8c39-f28029f817d5', 'data_vg': 'ceph-7717ad38-094f-5aa6-8c39-f28029f817d5'}) 2025-05-31 20:51:26.017426 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-6fa9e552-f12f-547e-b45f-d034b93383af', 'data_vg': 'ceph-6fa9e552-f12f-547e-b45f-d034b93383af'}) 2025-05-31 20:51:26.018766 | orchestrator | 2025-05-31 20:51:26.020752 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-05-31 20:51:26.021639 | orchestrator | Saturday 31 May 2025 20:51:26 +0000 (0:00:02.036) 0:00:35.323 ********** 2025-05-31 20:51:26.163756 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7717ad38-094f-5aa6-8c39-f28029f817d5', 'data_vg': 'ceph-7717ad38-094f-5aa6-8c39-f28029f817d5'})  2025-05-31 20:51:26.164101 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6fa9e552-f12f-547e-b45f-d034b93383af', 'data_vg': 'ceph-6fa9e552-f12f-547e-b45f-d034b93383af'})  2025-05-31 20:51:26.164811 | orchestrator | skipping: [testbed-node-4] 2025-05-31 20:51:26.165762 | orchestrator | 2025-05-31 20:51:26.166375 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-05-31 20:51:26.167468 | orchestrator | Saturday 31 May 2025 20:51:26 +0000 (0:00:00.147) 0:00:35.471 ********** 2025-05-31 20:51:27.412850 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-7717ad38-094f-5aa6-8c39-f28029f817d5', 'data_vg': 'ceph-7717ad38-094f-5aa6-8c39-f28029f817d5'}) 2025-05-31 20:51:27.413483 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-6fa9e552-f12f-547e-b45f-d034b93383af', 'data_vg': 'ceph-6fa9e552-f12f-547e-b45f-d034b93383af'}) 2025-05-31 20:51:27.414537 | orchestrator | 2025-05-31 20:51:27.415022 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-05-31 20:51:27.415756 | orchestrator | Saturday 31 May 2025 20:51:27 +0000 (0:00:01.248) 0:00:36.720 ********** 2025-05-31 20:51:27.553054 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7717ad38-094f-5aa6-8c39-f28029f817d5', 'data_vg': 'ceph-7717ad38-094f-5aa6-8c39-f28029f817d5'})  2025-05-31 20:51:27.553721 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6fa9e552-f12f-547e-b45f-d034b93383af', 'data_vg': 'ceph-6fa9e552-f12f-547e-b45f-d034b93383af'})  2025-05-31 20:51:27.554638 | orchestrator | skipping: [testbed-node-4] 2025-05-31 20:51:27.555372 | orchestrator | 2025-05-31 20:51:27.557526 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-05-31 20:51:27.557614 | orchestrator | Saturday 31 May 2025 20:51:27 +0000 (0:00:00.141) 0:00:36.861 ********** 2025-05-31 20:51:27.673567 | orchestrator | skipping: [testbed-node-4] 2025-05-31 20:51:27.673737 | orchestrator | 2025-05-31 20:51:27.674382 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-05-31 20:51:27.675158 | orchestrator | Saturday 31 May 2025 20:51:27 +0000 (0:00:00.119) 0:00:36.981 ********** 2025-05-31 20:51:27.820346 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7717ad38-094f-5aa6-8c39-f28029f817d5', 'data_vg': 'ceph-7717ad38-094f-5aa6-8c39-f28029f817d5'})  2025-05-31 20:51:27.820532 | orchestrator | skipping: 
[testbed-node-4] => (item={'data': 'osd-block-6fa9e552-f12f-547e-b45f-d034b93383af', 'data_vg': 'ceph-6fa9e552-f12f-547e-b45f-d034b93383af'})  2025-05-31 20:51:27.821118 | orchestrator | skipping: [testbed-node-4] 2025-05-31 20:51:27.821798 | orchestrator | 2025-05-31 20:51:27.822233 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-05-31 20:51:27.823011 | orchestrator | Saturday 31 May 2025 20:51:27 +0000 (0:00:00.146) 0:00:37.127 ********** 2025-05-31 20:51:27.948474 | orchestrator | skipping: [testbed-node-4] 2025-05-31 20:51:27.949187 | orchestrator | 2025-05-31 20:51:27.949924 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-05-31 20:51:27.950721 | orchestrator | Saturday 31 May 2025 20:51:27 +0000 (0:00:00.129) 0:00:37.257 ********** 2025-05-31 20:51:28.094795 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7717ad38-094f-5aa6-8c39-f28029f817d5', 'data_vg': 'ceph-7717ad38-094f-5aa6-8c39-f28029f817d5'})  2025-05-31 20:51:28.095216 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6fa9e552-f12f-547e-b45f-d034b93383af', 'data_vg': 'ceph-6fa9e552-f12f-547e-b45f-d034b93383af'})  2025-05-31 20:51:28.096379 | orchestrator | skipping: [testbed-node-4] 2025-05-31 20:51:28.096725 | orchestrator | 2025-05-31 20:51:28.101766 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-05-31 20:51:28.101996 | orchestrator | Saturday 31 May 2025 20:51:28 +0000 (0:00:00.146) 0:00:37.403 ********** 2025-05-31 20:51:28.412199 | orchestrator | skipping: [testbed-node-4] 2025-05-31 20:51:28.413318 | orchestrator | 2025-05-31 20:51:28.414607 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-05-31 20:51:28.415554 | orchestrator | Saturday 31 May 2025 20:51:28 +0000 (0:00:00.315) 0:00:37.719 ********** 2025-05-31 20:51:28.567125 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7717ad38-094f-5aa6-8c39-f28029f817d5', 'data_vg': 'ceph-7717ad38-094f-5aa6-8c39-f28029f817d5'})  2025-05-31 20:51:28.567334 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6fa9e552-f12f-547e-b45f-d034b93383af', 'data_vg': 'ceph-6fa9e552-f12f-547e-b45f-d034b93383af'})  2025-05-31 20:51:28.568794 | orchestrator | skipping: [testbed-node-4] 2025-05-31 20:51:28.569319 | orchestrator | 2025-05-31 20:51:28.570172 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-05-31 20:51:28.570584 | orchestrator | Saturday 31 May 2025 20:51:28 +0000 (0:00:00.156) 0:00:37.875 ********** 2025-05-31 20:51:28.719406 | orchestrator | ok: [testbed-node-4] 2025-05-31 20:51:28.720136 | orchestrator | 2025-05-31 20:51:28.721137 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-05-31 20:51:28.721763 | orchestrator | Saturday 31 May 2025 20:51:28 +0000 (0:00:00.151) 0:00:38.027 ********** 2025-05-31 20:51:28.867898 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7717ad38-094f-5aa6-8c39-f28029f817d5', 'data_vg': 'ceph-7717ad38-094f-5aa6-8c39-f28029f817d5'})  2025-05-31 20:51:28.869055 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6fa9e552-f12f-547e-b45f-d034b93383af', 'data_vg': 'ceph-6fa9e552-f12f-547e-b45f-d034b93383af'})  2025-05-31 20:51:28.870235 | orchestrator | skipping: [testbed-node-4] 2025-05-31 
20:51:28.870687 | orchestrator | 2025-05-31 20:51:28.871999 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-05-31 20:51:28.873267 | orchestrator | Saturday 31 May 2025 20:51:28 +0000 (0:00:00.148) 0:00:38.176 ********** 2025-05-31 20:51:29.017893 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7717ad38-094f-5aa6-8c39-f28029f817d5', 'data_vg': 'ceph-7717ad38-094f-5aa6-8c39-f28029f817d5'})  2025-05-31 20:51:29.018423 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6fa9e552-f12f-547e-b45f-d034b93383af', 'data_vg': 'ceph-6fa9e552-f12f-547e-b45f-d034b93383af'})  2025-05-31 20:51:29.019704 | orchestrator | skipping: [testbed-node-4] 2025-05-31 20:51:29.020282 | orchestrator | 2025-05-31 20:51:29.021329 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-05-31 20:51:29.022224 | orchestrator | Saturday 31 May 2025 20:51:29 +0000 (0:00:00.150) 0:00:38.326 ********** 2025-05-31 20:51:29.158256 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7717ad38-094f-5aa6-8c39-f28029f817d5', 'data_vg': 'ceph-7717ad38-094f-5aa6-8c39-f28029f817d5'})  2025-05-31 20:51:29.158451 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6fa9e552-f12f-547e-b45f-d034b93383af', 'data_vg': 'ceph-6fa9e552-f12f-547e-b45f-d034b93383af'})  2025-05-31 20:51:29.159157 | orchestrator | skipping: [testbed-node-4] 2025-05-31 20:51:29.160003 | orchestrator | 2025-05-31 20:51:29.160978 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-05-31 20:51:29.161663 | orchestrator | Saturday 31 May 2025 20:51:29 +0000 (0:00:00.139) 0:00:38.466 ********** 2025-05-31 20:51:29.294218 | orchestrator | skipping: [testbed-node-4] 2025-05-31 20:51:29.294469 | orchestrator | 2025-05-31 20:51:29.294943 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-05-31 20:51:29.295994 | orchestrator | Saturday 31 May 2025 20:51:29 +0000 (0:00:00.136) 0:00:38.603 ********** 2025-05-31 20:51:29.423042 | orchestrator | skipping: [testbed-node-4] 2025-05-31 20:51:29.423725 | orchestrator | 2025-05-31 20:51:29.425036 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-05-31 20:51:29.426263 | orchestrator | Saturday 31 May 2025 20:51:29 +0000 (0:00:00.128) 0:00:38.731 ********** 2025-05-31 20:51:29.558637 | orchestrator | skipping: [testbed-node-4] 2025-05-31 20:51:29.559545 | orchestrator | 2025-05-31 20:51:29.561224 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-05-31 20:51:29.561832 | orchestrator | Saturday 31 May 2025 20:51:29 +0000 (0:00:00.133) 0:00:38.865 ********** 2025-05-31 20:51:29.684262 | orchestrator | ok: [testbed-node-4] => { 2025-05-31 20:51:29.685327 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-05-31 20:51:29.685537 | orchestrator | } 2025-05-31 20:51:29.687457 | orchestrator | 2025-05-31 20:51:29.687624 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-05-31 20:51:29.689127 | orchestrator | Saturday 31 May 2025 20:51:29 +0000 (0:00:00.127) 0:00:38.992 ********** 2025-05-31 20:51:29.817090 | orchestrator | ok: [testbed-node-4] => { 2025-05-31 20:51:29.818006 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-05-31 20:51:29.818862 | orchestrator | } 2025-05-31 20:51:29.819561 | 
orchestrator | 2025-05-31 20:51:29.820351 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-05-31 20:51:29.821575 | orchestrator | Saturday 31 May 2025 20:51:29 +0000 (0:00:00.132) 0:00:39.125 ********** 2025-05-31 20:51:29.966734 | orchestrator | ok: [testbed-node-4] => { 2025-05-31 20:51:29.967298 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-05-31 20:51:29.968252 | orchestrator | } 2025-05-31 20:51:29.968682 | orchestrator | 2025-05-31 20:51:29.969724 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-05-31 20:51:29.970483 | orchestrator | Saturday 31 May 2025 20:51:29 +0000 (0:00:00.149) 0:00:39.275 ********** 2025-05-31 20:51:30.647504 | orchestrator | ok: [testbed-node-4] 2025-05-31 20:51:30.647681 | orchestrator | 2025-05-31 20:51:30.648184 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-05-31 20:51:30.648753 | orchestrator | Saturday 31 May 2025 20:51:30 +0000 (0:00:00.679) 0:00:39.954 ********** 2025-05-31 20:51:31.139004 | orchestrator | ok: [testbed-node-4] 2025-05-31 20:51:31.140015 | orchestrator | 2025-05-31 20:51:31.141232 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-05-31 20:51:31.142941 | orchestrator | Saturday 31 May 2025 20:51:31 +0000 (0:00:00.492) 0:00:40.447 ********** 2025-05-31 20:51:31.654974 | orchestrator | ok: [testbed-node-4] 2025-05-31 20:51:31.656510 | orchestrator | 2025-05-31 20:51:31.657107 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-05-31 20:51:31.658436 | orchestrator | Saturday 31 May 2025 20:51:31 +0000 (0:00:00.515) 0:00:40.962 ********** 2025-05-31 20:51:31.806974 | orchestrator | ok: [testbed-node-4] 2025-05-31 20:51:31.807776 | orchestrator | 2025-05-31 20:51:31.807806 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-05-31 20:51:31.808312 | orchestrator | Saturday 31 May 2025 20:51:31 +0000 (0:00:00.153) 0:00:41.116 ********** 2025-05-31 20:51:31.917697 | orchestrator | skipping: [testbed-node-4] 2025-05-31 20:51:31.918225 | orchestrator | 2025-05-31 20:51:31.918791 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-05-31 20:51:31.919719 | orchestrator | Saturday 31 May 2025 20:51:31 +0000 (0:00:00.110) 0:00:41.226 ********** 2025-05-31 20:51:32.021416 | orchestrator | skipping: [testbed-node-4] 2025-05-31 20:51:32.021589 | orchestrator | 2025-05-31 20:51:32.022547 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-05-31 20:51:32.023313 | orchestrator | Saturday 31 May 2025 20:51:32 +0000 (0:00:00.103) 0:00:41.330 ********** 2025-05-31 20:51:32.156028 | orchestrator | ok: [testbed-node-4] => { 2025-05-31 20:51:32.156760 | orchestrator |  "vgs_report": { 2025-05-31 20:51:32.157831 | orchestrator |  "vg": [] 2025-05-31 20:51:32.158575 | orchestrator |  } 2025-05-31 20:51:32.159435 | orchestrator | } 2025-05-31 20:51:32.160162 | orchestrator | 2025-05-31 20:51:32.160679 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-05-31 20:51:32.161058 | orchestrator | Saturday 31 May 2025 20:51:32 +0000 (0:00:00.134) 0:00:41.465 ********** 2025-05-31 20:51:32.285381 | orchestrator | skipping: [testbed-node-4] 2025-05-31 20:51:32.285739 | orchestrator | 2025-05-31 
20:51:32.286319 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-05-31 20:51:32.287042 | orchestrator | Saturday 31 May 2025 20:51:32 +0000 (0:00:00.128) 0:00:41.593 ********** 2025-05-31 20:51:32.419815 | orchestrator | skipping: [testbed-node-4] 2025-05-31 20:51:32.420853 | orchestrator | 2025-05-31 20:51:32.421846 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-05-31 20:51:32.422950 | orchestrator | Saturday 31 May 2025 20:51:32 +0000 (0:00:00.135) 0:00:41.728 ********** 2025-05-31 20:51:32.558654 | orchestrator | skipping: [testbed-node-4] 2025-05-31 20:51:32.558849 | orchestrator | 2025-05-31 20:51:32.560834 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-05-31 20:51:32.561431 | orchestrator | Saturday 31 May 2025 20:51:32 +0000 (0:00:00.138) 0:00:41.867 ********** 2025-05-31 20:51:32.683444 | orchestrator | skipping: [testbed-node-4] 2025-05-31 20:51:32.684539 | orchestrator | 2025-05-31 20:51:32.685841 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-05-31 20:51:32.686731 | orchestrator | Saturday 31 May 2025 20:51:32 +0000 (0:00:00.125) 0:00:41.992 ********** 2025-05-31 20:51:32.818569 | orchestrator | skipping: [testbed-node-4] 2025-05-31 20:51:32.818783 | orchestrator | 2025-05-31 20:51:32.819746 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-05-31 20:51:32.820602 | orchestrator | Saturday 31 May 2025 20:51:32 +0000 (0:00:00.135) 0:00:42.127 ********** 2025-05-31 20:51:33.130230 | orchestrator | skipping: [testbed-node-4] 2025-05-31 20:51:33.130394 | orchestrator | 2025-05-31 20:51:33.130946 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-05-31 20:51:33.131646 | orchestrator | Saturday 31 May 2025 20:51:33 +0000 (0:00:00.311) 0:00:42.438 ********** 2025-05-31 20:51:33.269553 | orchestrator | skipping: [testbed-node-4] 2025-05-31 20:51:33.271396 | orchestrator | 2025-05-31 20:51:33.272060 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-05-31 20:51:33.273605 | orchestrator | Saturday 31 May 2025 20:51:33 +0000 (0:00:00.136) 0:00:42.575 ********** 2025-05-31 20:51:33.403139 | orchestrator | skipping: [testbed-node-4] 2025-05-31 20:51:33.403813 | orchestrator | 2025-05-31 20:51:33.405058 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-05-31 20:51:33.405757 | orchestrator | Saturday 31 May 2025 20:51:33 +0000 (0:00:00.136) 0:00:42.711 ********** 2025-05-31 20:51:33.541506 | orchestrator | skipping: [testbed-node-4] 2025-05-31 20:51:33.541937 | orchestrator | 2025-05-31 20:51:33.542671 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-05-31 20:51:33.543035 | orchestrator | Saturday 31 May 2025 20:51:33 +0000 (0:00:00.139) 0:00:42.850 ********** 2025-05-31 20:51:33.683249 | orchestrator | skipping: [testbed-node-4] 2025-05-31 20:51:33.684141 | orchestrator | 2025-05-31 20:51:33.685120 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-05-31 20:51:33.687135 | orchestrator | Saturday 31 May 2025 20:51:33 +0000 (0:00:00.141) 0:00:42.991 ********** 2025-05-31 20:51:33.813242 | orchestrator | skipping: [testbed-node-4] 2025-05-31 20:51:33.813477 | 
orchestrator | 2025-05-31 20:51:33.814214 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-05-31 20:51:33.814885 | orchestrator | Saturday 31 May 2025 20:51:33 +0000 (0:00:00.129) 0:00:43.121 ********** 2025-05-31 20:51:33.951831 | orchestrator | skipping: [testbed-node-4] 2025-05-31 20:51:33.952025 | orchestrator | 2025-05-31 20:51:33.952899 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-05-31 20:51:33.954913 | orchestrator | Saturday 31 May 2025 20:51:33 +0000 (0:00:00.138) 0:00:43.260 ********** 2025-05-31 20:51:34.087143 | orchestrator | skipping: [testbed-node-4] 2025-05-31 20:51:34.087843 | orchestrator | 2025-05-31 20:51:34.088637 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-05-31 20:51:34.089427 | orchestrator | Saturday 31 May 2025 20:51:34 +0000 (0:00:00.135) 0:00:43.396 ********** 2025-05-31 20:51:34.221545 | orchestrator | skipping: [testbed-node-4] 2025-05-31 20:51:34.222167 | orchestrator | 2025-05-31 20:51:34.223294 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-05-31 20:51:34.224311 | orchestrator | Saturday 31 May 2025 20:51:34 +0000 (0:00:00.134) 0:00:43.530 ********** 2025-05-31 20:51:34.377385 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7717ad38-094f-5aa6-8c39-f28029f817d5', 'data_vg': 'ceph-7717ad38-094f-5aa6-8c39-f28029f817d5'})  2025-05-31 20:51:34.377905 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6fa9e552-f12f-547e-b45f-d034b93383af', 'data_vg': 'ceph-6fa9e552-f12f-547e-b45f-d034b93383af'})  2025-05-31 20:51:34.378382 | orchestrator | skipping: [testbed-node-4] 2025-05-31 20:51:34.378941 | orchestrator | 2025-05-31 20:51:34.379466 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-05-31 20:51:34.379801 | orchestrator | Saturday 31 May 2025 20:51:34 +0000 (0:00:00.156) 0:00:43.686 ********** 2025-05-31 20:51:34.526007 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7717ad38-094f-5aa6-8c39-f28029f817d5', 'data_vg': 'ceph-7717ad38-094f-5aa6-8c39-f28029f817d5'})  2025-05-31 20:51:34.527192 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6fa9e552-f12f-547e-b45f-d034b93383af', 'data_vg': 'ceph-6fa9e552-f12f-547e-b45f-d034b93383af'})  2025-05-31 20:51:34.528038 | orchestrator | skipping: [testbed-node-4] 2025-05-31 20:51:34.529528 | orchestrator | 2025-05-31 20:51:34.529987 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-05-31 20:51:34.531528 | orchestrator | Saturday 31 May 2025 20:51:34 +0000 (0:00:00.147) 0:00:43.833 ********** 2025-05-31 20:51:34.680210 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7717ad38-094f-5aa6-8c39-f28029f817d5', 'data_vg': 'ceph-7717ad38-094f-5aa6-8c39-f28029f817d5'})  2025-05-31 20:51:34.680414 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6fa9e552-f12f-547e-b45f-d034b93383af', 'data_vg': 'ceph-6fa9e552-f12f-547e-b45f-d034b93383af'})  2025-05-31 20:51:34.680520 | orchestrator | skipping: [testbed-node-4] 2025-05-31 20:51:34.680879 | orchestrator | 2025-05-31 20:51:34.681613 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-05-31 20:51:34.681981 | orchestrator | Saturday 31 May 2025 20:51:34 +0000 
(0:00:00.155) 0:00:43.989 ********** 2025-05-31 20:51:35.012875 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7717ad38-094f-5aa6-8c39-f28029f817d5', 'data_vg': 'ceph-7717ad38-094f-5aa6-8c39-f28029f817d5'})  2025-05-31 20:51:35.013135 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6fa9e552-f12f-547e-b45f-d034b93383af', 'data_vg': 'ceph-6fa9e552-f12f-547e-b45f-d034b93383af'})  2025-05-31 20:51:35.013830 | orchestrator | skipping: [testbed-node-4] 2025-05-31 20:51:35.014696 | orchestrator | 2025-05-31 20:51:35.016709 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-05-31 20:51:35.016774 | orchestrator | Saturday 31 May 2025 20:51:35 +0000 (0:00:00.331) 0:00:44.321 ********** 2025-05-31 20:51:35.167643 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7717ad38-094f-5aa6-8c39-f28029f817d5', 'data_vg': 'ceph-7717ad38-094f-5aa6-8c39-f28029f817d5'})  2025-05-31 20:51:35.167826 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6fa9e552-f12f-547e-b45f-d034b93383af', 'data_vg': 'ceph-6fa9e552-f12f-547e-b45f-d034b93383af'})  2025-05-31 20:51:35.168614 | orchestrator | skipping: [testbed-node-4] 2025-05-31 20:51:35.169716 | orchestrator | 2025-05-31 20:51:35.170168 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-05-31 20:51:35.171564 | orchestrator | Saturday 31 May 2025 20:51:35 +0000 (0:00:00.154) 0:00:44.475 ********** 2025-05-31 20:51:35.311199 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7717ad38-094f-5aa6-8c39-f28029f817d5', 'data_vg': 'ceph-7717ad38-094f-5aa6-8c39-f28029f817d5'})  2025-05-31 20:51:35.312205 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6fa9e552-f12f-547e-b45f-d034b93383af', 'data_vg': 'ceph-6fa9e552-f12f-547e-b45f-d034b93383af'})  2025-05-31 20:51:35.313985 | orchestrator | skipping: [testbed-node-4] 2025-05-31 20:51:35.315800 | orchestrator | 2025-05-31 20:51:35.316593 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-05-31 20:51:35.317511 | orchestrator | Saturday 31 May 2025 20:51:35 +0000 (0:00:00.141) 0:00:44.617 ********** 2025-05-31 20:51:35.459158 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7717ad38-094f-5aa6-8c39-f28029f817d5', 'data_vg': 'ceph-7717ad38-094f-5aa6-8c39-f28029f817d5'})  2025-05-31 20:51:35.459250 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6fa9e552-f12f-547e-b45f-d034b93383af', 'data_vg': 'ceph-6fa9e552-f12f-547e-b45f-d034b93383af'})  2025-05-31 20:51:35.460185 | orchestrator | skipping: [testbed-node-4] 2025-05-31 20:51:35.461279 | orchestrator | 2025-05-31 20:51:35.461950 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-05-31 20:51:35.462975 | orchestrator | Saturday 31 May 2025 20:51:35 +0000 (0:00:00.146) 0:00:44.764 ********** 2025-05-31 20:51:35.604377 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7717ad38-094f-5aa6-8c39-f28029f817d5', 'data_vg': 'ceph-7717ad38-094f-5aa6-8c39-f28029f817d5'})  2025-05-31 20:51:35.604617 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6fa9e552-f12f-547e-b45f-d034b93383af', 'data_vg': 'ceph-6fa9e552-f12f-547e-b45f-d034b93383af'})  2025-05-31 20:51:35.605008 | orchestrator | skipping: [testbed-node-4] 2025-05-31 20:51:35.605516 | orchestrator | 
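After the (here skipped) DB/WAL creation steps, the play reads back the LV/PV layout it just created before validating lvm_volumes. A minimal sketch of these gather-and-combine steps, assuming plain lvs/pvs JSON report output and the register names taken from the task titles (any filtering to ceph-* VGs is omitted here as an assumption):

    - name: Get list of Ceph LVs with associated VGs
      ansible.builtin.command: lvs --reportformat json -o lv_name,vg_name
      register: _lvs_cmd_output
      changed_when: false

    - name: Get list of Ceph PVs with associated VGs
      ansible.builtin.command: pvs --reportformat json -o pv_name,vg_name
      register: _pvs_cmd_output
      changed_when: false

    - name: Combine JSON from _lvs_cmd_output/_pvs_cmd_output
      ansible.builtin.set_fact:
        lvm_report:
          lv: "{{ (_lvs_cmd_output.stdout | from_json).report[0].lv }}"
          pv: "{{ (_pvs_cmd_output.stdout | from_json).report[0].pv }}"

The lvm_report printed below (lv and pv lists keyed by lv_name/vg_name and pv_name/vg_name) is exactly the shape of LVM's JSON report format.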
2025-05-31 20:51:35.605828 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-05-31 20:51:35.606320 | orchestrator | Saturday 31 May 2025 20:51:35 +0000 (0:00:00.148) 0:00:44.912 ********** 2025-05-31 20:51:36.108858 | orchestrator | ok: [testbed-node-4] 2025-05-31 20:51:36.109289 | orchestrator | 2025-05-31 20:51:36.110126 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-05-31 20:51:36.111558 | orchestrator | Saturday 31 May 2025 20:51:36 +0000 (0:00:00.504) 0:00:45.417 ********** 2025-05-31 20:51:36.573915 | orchestrator | ok: [testbed-node-4] 2025-05-31 20:51:36.574924 | orchestrator | 2025-05-31 20:51:36.575842 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-05-31 20:51:36.577864 | orchestrator | Saturday 31 May 2025 20:51:36 +0000 (0:00:00.464) 0:00:45.881 ********** 2025-05-31 20:51:36.728549 | orchestrator | ok: [testbed-node-4] 2025-05-31 20:51:36.728648 | orchestrator | 2025-05-31 20:51:36.729503 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-05-31 20:51:36.729727 | orchestrator | Saturday 31 May 2025 20:51:36 +0000 (0:00:00.156) 0:00:46.037 ********** 2025-05-31 20:51:36.909632 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-6fa9e552-f12f-547e-b45f-d034b93383af', 'vg_name': 'ceph-6fa9e552-f12f-547e-b45f-d034b93383af'}) 2025-05-31 20:51:36.909779 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-7717ad38-094f-5aa6-8c39-f28029f817d5', 'vg_name': 'ceph-7717ad38-094f-5aa6-8c39-f28029f817d5'}) 2025-05-31 20:51:36.909893 | orchestrator | 2025-05-31 20:51:36.910320 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-05-31 20:51:36.910865 | orchestrator | Saturday 31 May 2025 20:51:36 +0000 (0:00:00.180) 0:00:46.218 ********** 2025-05-31 20:51:37.062251 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7717ad38-094f-5aa6-8c39-f28029f817d5', 'data_vg': 'ceph-7717ad38-094f-5aa6-8c39-f28029f817d5'})  2025-05-31 20:51:37.062441 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6fa9e552-f12f-547e-b45f-d034b93383af', 'data_vg': 'ceph-6fa9e552-f12f-547e-b45f-d034b93383af'})  2025-05-31 20:51:37.062916 | orchestrator | skipping: [testbed-node-4] 2025-05-31 20:51:37.064271 | orchestrator | 2025-05-31 20:51:37.064365 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-05-31 20:51:37.064626 | orchestrator | Saturday 31 May 2025 20:51:37 +0000 (0:00:00.151) 0:00:46.370 ********** 2025-05-31 20:51:37.206347 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7717ad38-094f-5aa6-8c39-f28029f817d5', 'data_vg': 'ceph-7717ad38-094f-5aa6-8c39-f28029f817d5'})  2025-05-31 20:51:37.206522 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6fa9e552-f12f-547e-b45f-d034b93383af', 'data_vg': 'ceph-6fa9e552-f12f-547e-b45f-d034b93383af'})  2025-05-31 20:51:37.206792 | orchestrator | skipping: [testbed-node-4] 2025-05-31 20:51:37.207337 | orchestrator | 2025-05-31 20:51:37.207715 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-05-31 20:51:37.208272 | orchestrator | Saturday 31 May 2025 20:51:37 +0000 (0:00:00.144) 0:00:46.515 ********** 2025-05-31 20:51:37.363426 | orchestrator | skipping: [testbed-node-4] => 
(item={'data': 'osd-block-7717ad38-094f-5aa6-8c39-f28029f817d5', 'data_vg': 'ceph-7717ad38-094f-5aa6-8c39-f28029f817d5'})  2025-05-31 20:51:37.363776 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-6fa9e552-f12f-547e-b45f-d034b93383af', 'data_vg': 'ceph-6fa9e552-f12f-547e-b45f-d034b93383af'})  2025-05-31 20:51:37.364776 | orchestrator | skipping: [testbed-node-4] 2025-05-31 20:51:37.366177 | orchestrator | 2025-05-31 20:51:37.366768 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-05-31 20:51:37.366951 | orchestrator | Saturday 31 May 2025 20:51:37 +0000 (0:00:00.151) 0:00:46.666 ********** 2025-05-31 20:51:37.843663 | orchestrator | ok: [testbed-node-4] => { 2025-05-31 20:51:37.843779 | orchestrator |  "lvm_report": { 2025-05-31 20:51:37.843935 | orchestrator |  "lv": [ 2025-05-31 20:51:37.844632 | orchestrator |  { 2025-05-31 20:51:37.844908 | orchestrator |  "lv_name": "osd-block-6fa9e552-f12f-547e-b45f-d034b93383af", 2025-05-31 20:51:37.845657 | orchestrator |  "vg_name": "ceph-6fa9e552-f12f-547e-b45f-d034b93383af" 2025-05-31 20:51:37.846277 | orchestrator |  }, 2025-05-31 20:51:37.846933 | orchestrator |  { 2025-05-31 20:51:37.847559 | orchestrator |  "lv_name": "osd-block-7717ad38-094f-5aa6-8c39-f28029f817d5", 2025-05-31 20:51:37.847583 | orchestrator |  "vg_name": "ceph-7717ad38-094f-5aa6-8c39-f28029f817d5" 2025-05-31 20:51:37.847841 | orchestrator |  } 2025-05-31 20:51:37.849355 | orchestrator |  ], 2025-05-31 20:51:37.849378 | orchestrator |  "pv": [ 2025-05-31 20:51:37.849392 | orchestrator |  { 2025-05-31 20:51:37.849411 | orchestrator |  "pv_name": "/dev/sdb", 2025-05-31 20:51:37.849951 | orchestrator |  "vg_name": "ceph-7717ad38-094f-5aa6-8c39-f28029f817d5" 2025-05-31 20:51:37.850267 | orchestrator |  }, 2025-05-31 20:51:37.850924 | orchestrator |  { 2025-05-31 20:51:37.851207 | orchestrator |  "pv_name": "/dev/sdc", 2025-05-31 20:51:37.851882 | orchestrator |  "vg_name": "ceph-6fa9e552-f12f-547e-b45f-d034b93383af" 2025-05-31 20:51:37.852613 | orchestrator |  } 2025-05-31 20:51:37.852802 | orchestrator |  ] 2025-05-31 20:51:37.852825 | orchestrator |  } 2025-05-31 20:51:37.852996 | orchestrator | } 2025-05-31 20:51:37.853353 | orchestrator | 2025-05-31 20:51:37.853814 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-05-31 20:51:37.854178 | orchestrator | 2025-05-31 20:51:37.854441 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-05-31 20:51:37.854712 | orchestrator | Saturday 31 May 2025 20:51:37 +0000 (0:00:00.486) 0:00:47.153 ********** 2025-05-31 20:51:38.082917 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-05-31 20:51:38.083062 | orchestrator | 2025-05-31 20:51:38.083659 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-05-31 20:51:38.084488 | orchestrator | Saturday 31 May 2025 20:51:38 +0000 (0:00:00.237) 0:00:47.391 ********** 2025-05-31 20:51:38.303750 | orchestrator | ok: [testbed-node-5] 2025-05-31 20:51:38.303851 | orchestrator | 2025-05-31 20:51:38.303865 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 20:51:38.304188 | orchestrator | Saturday 31 May 2025 20:51:38 +0000 (0:00:00.219) 0:00:47.610 ********** 2025-05-31 20:51:38.714125 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-05-31 
20:51:38.714229 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-05-31 20:51:38.715117 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-05-31 20:51:38.715401 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-05-31 20:51:38.719310 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-05-31 20:51:38.719334 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-05-31 20:51:38.720558 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-05-31 20:51:38.722167 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-05-31 20:51:38.723926 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-05-31 20:51:38.724813 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-05-31 20:51:38.725497 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-05-31 20:51:38.726271 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-05-31 20:51:38.726781 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-05-31 20:51:38.727473 | orchestrator | 2025-05-31 20:51:38.728055 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 20:51:38.728476 | orchestrator | Saturday 31 May 2025 20:51:38 +0000 (0:00:00.409) 0:00:48.020 ********** 2025-05-31 20:51:38.907707 | orchestrator | skipping: [testbed-node-5] 2025-05-31 20:51:38.909593 | orchestrator | 2025-05-31 20:51:38.909625 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 20:51:38.910524 | orchestrator | Saturday 31 May 2025 20:51:38 +0000 (0:00:00.195) 0:00:48.216 ********** 2025-05-31 20:51:39.109278 | orchestrator | skipping: [testbed-node-5] 2025-05-31 20:51:39.109381 | orchestrator | 2025-05-31 20:51:39.109937 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 20:51:39.110753 | orchestrator | Saturday 31 May 2025 20:51:39 +0000 (0:00:00.199) 0:00:48.416 ********** 2025-05-31 20:51:39.298375 | orchestrator | skipping: [testbed-node-5] 2025-05-31 20:51:39.299014 | orchestrator | 2025-05-31 20:51:39.299814 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 20:51:39.300951 | orchestrator | Saturday 31 May 2025 20:51:39 +0000 (0:00:00.190) 0:00:48.607 ********** 2025-05-31 20:51:39.499972 | orchestrator | skipping: [testbed-node-5] 2025-05-31 20:51:39.500352 | orchestrator | 2025-05-31 20:51:39.501447 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 20:51:39.502003 | orchestrator | Saturday 31 May 2025 20:51:39 +0000 (0:00:00.201) 0:00:48.808 ********** 2025-05-31 20:51:39.678592 | orchestrator | skipping: [testbed-node-5] 2025-05-31 20:51:39.679066 | orchestrator | 2025-05-31 20:51:39.679910 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 20:51:39.680847 | orchestrator | Saturday 31 May 2025 20:51:39 +0000 (0:00:00.178) 0:00:48.987 
********** 2025-05-31 20:51:40.235857 | orchestrator | skipping: [testbed-node-5] 2025-05-31 20:51:40.236363 | orchestrator | 2025-05-31 20:51:40.237135 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 20:51:40.237946 | orchestrator | Saturday 31 May 2025 20:51:40 +0000 (0:00:00.557) 0:00:49.544 ********** 2025-05-31 20:51:40.433924 | orchestrator | skipping: [testbed-node-5] 2025-05-31 20:51:40.434405 | orchestrator | 2025-05-31 20:51:40.435230 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 20:51:40.435985 | orchestrator | Saturday 31 May 2025 20:51:40 +0000 (0:00:00.196) 0:00:49.741 ********** 2025-05-31 20:51:40.616888 | orchestrator | skipping: [testbed-node-5] 2025-05-31 20:51:40.617492 | orchestrator | 2025-05-31 20:51:40.617686 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 20:51:40.618460 | orchestrator | Saturday 31 May 2025 20:51:40 +0000 (0:00:00.184) 0:00:49.925 ********** 2025-05-31 20:51:41.031657 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_fbeeb0f2-d22c-4c2b-ae21-9c150f637ac0) 2025-05-31 20:51:41.031827 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_fbeeb0f2-d22c-4c2b-ae21-9c150f637ac0) 2025-05-31 20:51:41.032676 | orchestrator | 2025-05-31 20:51:41.033244 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 20:51:41.034118 | orchestrator | Saturday 31 May 2025 20:51:41 +0000 (0:00:00.413) 0:00:50.339 ********** 2025-05-31 20:51:41.447677 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_6d52f885-97ca-45c7-bd6a-7862e27ed465) 2025-05-31 20:51:41.448246 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_6d52f885-97ca-45c7-bd6a-7862e27ed465) 2025-05-31 20:51:41.449300 | orchestrator | 2025-05-31 20:51:41.450086 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 20:51:41.451361 | orchestrator | Saturday 31 May 2025 20:51:41 +0000 (0:00:00.414) 0:00:50.754 ********** 2025-05-31 20:51:41.876840 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_727d26bd-0ead-422c-920c-32fac6429b39) 2025-05-31 20:51:41.878181 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_727d26bd-0ead-422c-920c-32fac6429b39) 2025-05-31 20:51:41.878521 | orchestrator | 2025-05-31 20:51:41.879621 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 20:51:41.880235 | orchestrator | Saturday 31 May 2025 20:51:41 +0000 (0:00:00.429) 0:00:51.184 ********** 2025-05-31 20:51:42.288336 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_d4f6392d-f8e1-4809-8c10-779f08f2c642) 2025-05-31 20:51:42.288531 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_d4f6392d-f8e1-4809-8c10-779f08f2c642) 2025-05-31 20:51:42.289091 | orchestrator | 2025-05-31 20:51:42.290472 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-31 20:51:42.291580 | orchestrator | Saturday 31 May 2025 20:51:42 +0000 (0:00:00.411) 0:00:51.596 ********** 2025-05-31 20:51:42.611079 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-31 20:51:42.611727 | orchestrator | 2025-05-31 20:51:42.612820 | orchestrator | TASK [Add known partitions to the list of 
available block devices] ************* 2025-05-31 20:51:42.613861 | orchestrator | Saturday 31 May 2025 20:51:42 +0000 (0:00:00.321) 0:00:51.918 ********** 2025-05-31 20:51:43.016374 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-05-31 20:51:43.016709 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-05-31 20:51:43.017687 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-05-31 20:51:43.018252 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-05-31 20:51:43.019272 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-05-31 20:51:43.019450 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-05-31 20:51:43.023024 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-05-31 20:51:43.023451 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-05-31 20:51:43.024209 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-05-31 20:51:43.024708 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-05-31 20:51:43.025160 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-05-31 20:51:43.025760 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-05-31 20:51:43.026431 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-05-31 20:51:43.027009 | orchestrator | 2025-05-31 20:51:43.027451 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 20:51:43.027807 | orchestrator | Saturday 31 May 2025 20:51:43 +0000 (0:00:00.406) 0:00:52.325 ********** 2025-05-31 20:51:43.199461 | orchestrator | skipping: [testbed-node-5] 2025-05-31 20:51:43.199676 | orchestrator | 2025-05-31 20:51:43.199760 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 20:51:43.201789 | orchestrator | Saturday 31 May 2025 20:51:43 +0000 (0:00:00.183) 0:00:52.508 ********** 2025-05-31 20:51:43.403595 | orchestrator | skipping: [testbed-node-5] 2025-05-31 20:51:43.403779 | orchestrator | 2025-05-31 20:51:43.403801 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 20:51:43.404625 | orchestrator | Saturday 31 May 2025 20:51:43 +0000 (0:00:00.203) 0:00:52.712 ********** 2025-05-31 20:51:43.990449 | orchestrator | skipping: [testbed-node-5] 2025-05-31 20:51:43.990670 | orchestrator | 2025-05-31 20:51:43.991922 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 20:51:43.993346 | orchestrator | Saturday 31 May 2025 20:51:43 +0000 (0:00:00.585) 0:00:53.298 ********** 2025-05-31 20:51:44.202468 | orchestrator | skipping: [testbed-node-5] 2025-05-31 20:51:44.203294 | orchestrator | 2025-05-31 20:51:44.204365 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 20:51:44.205225 | orchestrator | Saturday 31 May 2025 20:51:44 +0000 (0:00:00.212) 0:00:53.510 
********** 2025-05-31 20:51:44.401022 | orchestrator | skipping: [testbed-node-5] 2025-05-31 20:51:44.401225 | orchestrator | 2025-05-31 20:51:44.401657 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 20:51:44.402348 | orchestrator | Saturday 31 May 2025 20:51:44 +0000 (0:00:00.198) 0:00:53.709 ********** 2025-05-31 20:51:44.600440 | orchestrator | skipping: [testbed-node-5] 2025-05-31 20:51:44.600714 | orchestrator | 2025-05-31 20:51:44.601750 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 20:51:44.602004 | orchestrator | Saturday 31 May 2025 20:51:44 +0000 (0:00:00.199) 0:00:53.908 ********** 2025-05-31 20:51:44.799310 | orchestrator | skipping: [testbed-node-5] 2025-05-31 20:51:44.799663 | orchestrator | 2025-05-31 20:51:44.800313 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 20:51:44.801048 | orchestrator | Saturday 31 May 2025 20:51:44 +0000 (0:00:00.199) 0:00:54.107 ********** 2025-05-31 20:51:44.992076 | orchestrator | skipping: [testbed-node-5] 2025-05-31 20:51:44.992301 | orchestrator | 2025-05-31 20:51:44.992966 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 20:51:44.993643 | orchestrator | Saturday 31 May 2025 20:51:44 +0000 (0:00:00.192) 0:00:54.300 ********** 2025-05-31 20:51:45.615589 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-05-31 20:51:45.616320 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2025-05-31 20:51:45.617347 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-05-31 20:51:45.619469 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-05-31 20:51:45.619522 | orchestrator | 2025-05-31 20:51:45.620439 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 20:51:45.620496 | orchestrator | Saturday 31 May 2025 20:51:45 +0000 (0:00:00.622) 0:00:54.923 ********** 2025-05-31 20:51:45.822068 | orchestrator | skipping: [testbed-node-5] 2025-05-31 20:51:45.822723 | orchestrator | 2025-05-31 20:51:45.823424 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 20:51:45.823994 | orchestrator | Saturday 31 May 2025 20:51:45 +0000 (0:00:00.205) 0:00:55.129 ********** 2025-05-31 20:51:46.009257 | orchestrator | skipping: [testbed-node-5] 2025-05-31 20:51:46.010340 | orchestrator | 2025-05-31 20:51:46.010679 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 20:51:46.011942 | orchestrator | Saturday 31 May 2025 20:51:46 +0000 (0:00:00.188) 0:00:55.317 ********** 2025-05-31 20:51:46.246740 | orchestrator | skipping: [testbed-node-5] 2025-05-31 20:51:46.247727 | orchestrator | 2025-05-31 20:51:46.248484 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-31 20:51:46.249182 | orchestrator | Saturday 31 May 2025 20:51:46 +0000 (0:00:00.237) 0:00:55.555 ********** 2025-05-31 20:51:46.433668 | orchestrator | skipping: [testbed-node-5] 2025-05-31 20:51:46.433767 | orchestrator | 2025-05-31 20:51:46.434217 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-05-31 20:51:46.434948 | orchestrator | Saturday 31 May 2025 20:51:46 +0000 (0:00:00.186) 0:00:55.741 ********** 2025-05-31 20:51:46.753173 | orchestrator | skipping: [testbed-node-5] 
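The upcoming "Create dict of block VGs -> PVs" and "Create block VGs"/"Create block LVs" tasks turn ceph_osd_devices into one VG and one LV per OSD disk. A minimal sketch of that mapping and creation, assuming ceph_osd_devices maps device names to an osd_lvm_uuid (as the loop items in the log suggest) and using community.general LVM modules; the fact name _vgs_to_pvs and the 100%FREE sizing are assumptions, not the verified OSISM code:

    - name: Create dict of block VGs -> PVs from ceph_osd_devices
      ansible.builtin.set_fact:
        _vgs_to_pvs: >-
          {{ _vgs_to_pvs | default({})
             | combine({'ceph-' ~ item.value.osd_lvm_uuid: '/dev/' ~ item.key}) }}
      loop: "{{ ceph_osd_devices | dict2items }}"

    - name: Create block VGs              # e.g. ceph-edfa5e9a-... on /dev/sdb
      community.general.lvg:
        vg: "{{ item.key }}"
        pvs: "{{ item.value }}"
      loop: "{{ _vgs_to_pvs | dict2items }}"

    - name: Create block LVs              # osd-block-<uuid> filling its VG
      community.general.lvol:
        vg: "{{ item.data_vg }}"
        lv: "{{ item.data }}"
        size: 100%FREE
        shrink: false
      loop: "{{ lvm_volumes }}"

Under this reading, both creation tasks report "changed" with the osd-block-<uuid>/ceph-<uuid> pairs on first run, as seen for testbed-node-4 above and testbed-node-5 below.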
2025-05-31 20:51:46.753919 | orchestrator | 2025-05-31 20:51:46.754761 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-05-31 20:51:46.755277 | orchestrator | Saturday 31 May 2025 20:51:46 +0000 (0:00:00.319) 0:00:56.061 ********** 2025-05-31 20:51:46.938006 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'edfa5e9a-3f1a-54c1-83f4-345bb781a14b'}}) 2025-05-31 20:51:46.938252 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a23536e0-7351-5f09-a3c0-98b1bc7f8fff'}}) 2025-05-31 20:51:46.938266 | orchestrator | 2025-05-31 20:51:46.938350 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-05-31 20:51:46.939164 | orchestrator | Saturday 31 May 2025 20:51:46 +0000 (0:00:00.183) 0:00:56.244 ********** 2025-05-31 20:51:48.899864 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-edfa5e9a-3f1a-54c1-83f4-345bb781a14b', 'data_vg': 'ceph-edfa5e9a-3f1a-54c1-83f4-345bb781a14b'}) 2025-05-31 20:51:48.899973 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-a23536e0-7351-5f09-a3c0-98b1bc7f8fff', 'data_vg': 'ceph-a23536e0-7351-5f09-a3c0-98b1bc7f8fff'}) 2025-05-31 20:51:48.900085 | orchestrator | 2025-05-31 20:51:48.900772 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-05-31 20:51:48.901348 | orchestrator | Saturday 31 May 2025 20:51:48 +0000 (0:00:01.960) 0:00:58.205 ********** 2025-05-31 20:51:49.045993 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-edfa5e9a-3f1a-54c1-83f4-345bb781a14b', 'data_vg': 'ceph-edfa5e9a-3f1a-54c1-83f4-345bb781a14b'})  2025-05-31 20:51:49.047979 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a23536e0-7351-5f09-a3c0-98b1bc7f8fff', 'data_vg': 'ceph-a23536e0-7351-5f09-a3c0-98b1bc7f8fff'})  2025-05-31 20:51:49.049319 | orchestrator | skipping: [testbed-node-5] 2025-05-31 20:51:49.050264 | orchestrator | 2025-05-31 20:51:49.051176 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-05-31 20:51:49.051302 | orchestrator | Saturday 31 May 2025 20:51:49 +0000 (0:00:00.148) 0:00:58.353 ********** 2025-05-31 20:51:50.301089 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-edfa5e9a-3f1a-54c1-83f4-345bb781a14b', 'data_vg': 'ceph-edfa5e9a-3f1a-54c1-83f4-345bb781a14b'}) 2025-05-31 20:51:50.301891 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-a23536e0-7351-5f09-a3c0-98b1bc7f8fff', 'data_vg': 'ceph-a23536e0-7351-5f09-a3c0-98b1bc7f8fff'}) 2025-05-31 20:51:50.304405 | orchestrator | 2025-05-31 20:51:50.304453 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-05-31 20:51:50.304466 | orchestrator | Saturday 31 May 2025 20:51:50 +0000 (0:00:01.254) 0:00:59.607 ********** 2025-05-31 20:51:50.457654 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-edfa5e9a-3f1a-54c1-83f4-345bb781a14b', 'data_vg': 'ceph-edfa5e9a-3f1a-54c1-83f4-345bb781a14b'})  2025-05-31 20:51:50.457759 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a23536e0-7351-5f09-a3c0-98b1bc7f8fff', 'data_vg': 'ceph-a23536e0-7351-5f09-a3c0-98b1bc7f8fff'})  2025-05-31 20:51:50.457775 | orchestrator | skipping: [testbed-node-5] 2025-05-31 20:51:50.458095 | orchestrator | 2025-05-31 20:51:50.458743 | orchestrator | TASK [Create 
DB VGs] *********************************************************** 2025-05-31 20:51:50.458965 | orchestrator | Saturday 31 May 2025 20:51:50 +0000 (0:00:00.157) 0:00:59.765 ********** 2025-05-31 20:51:50.592577 | orchestrator | skipping: [testbed-node-5] 2025-05-31 20:51:50.593376 | orchestrator | 2025-05-31 20:51:50.593408 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-05-31 20:51:50.595206 | orchestrator | Saturday 31 May 2025 20:51:50 +0000 (0:00:00.135) 0:00:59.901 ********** 2025-05-31 20:51:50.749362 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-edfa5e9a-3f1a-54c1-83f4-345bb781a14b', 'data_vg': 'ceph-edfa5e9a-3f1a-54c1-83f4-345bb781a14b'})  2025-05-31 20:51:50.749511 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a23536e0-7351-5f09-a3c0-98b1bc7f8fff', 'data_vg': 'ceph-a23536e0-7351-5f09-a3c0-98b1bc7f8fff'})  2025-05-31 20:51:50.750220 | orchestrator | skipping: [testbed-node-5] 2025-05-31 20:51:50.751360 | orchestrator | 2025-05-31 20:51:50.753486 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-05-31 20:51:50.753903 | orchestrator | Saturday 31 May 2025 20:51:50 +0000 (0:00:00.156) 0:01:00.057 ********** 2025-05-31 20:51:50.891440 | orchestrator | skipping: [testbed-node-5] 2025-05-31 20:51:50.891795 | orchestrator | 2025-05-31 20:51:50.893430 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-05-31 20:51:50.895475 | orchestrator | Saturday 31 May 2025 20:51:50 +0000 (0:00:00.142) 0:01:00.199 ********** 2025-05-31 20:51:51.070957 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-edfa5e9a-3f1a-54c1-83f4-345bb781a14b', 'data_vg': 'ceph-edfa5e9a-3f1a-54c1-83f4-345bb781a14b'})  2025-05-31 20:51:51.071189 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a23536e0-7351-5f09-a3c0-98b1bc7f8fff', 'data_vg': 'ceph-a23536e0-7351-5f09-a3c0-98b1bc7f8fff'})  2025-05-31 20:51:51.071936 | orchestrator | skipping: [testbed-node-5] 2025-05-31 20:51:51.072515 | orchestrator | 2025-05-31 20:51:51.072729 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-05-31 20:51:51.073150 | orchestrator | Saturday 31 May 2025 20:51:51 +0000 (0:00:00.178) 0:01:00.378 ********** 2025-05-31 20:51:51.204974 | orchestrator | skipping: [testbed-node-5] 2025-05-31 20:51:51.205240 | orchestrator | 2025-05-31 20:51:51.205891 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-05-31 20:51:51.206699 | orchestrator | Saturday 31 May 2025 20:51:51 +0000 (0:00:00.134) 0:01:00.513 ********** 2025-05-31 20:51:51.350594 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-edfa5e9a-3f1a-54c1-83f4-345bb781a14b', 'data_vg': 'ceph-edfa5e9a-3f1a-54c1-83f4-345bb781a14b'})  2025-05-31 20:51:51.350743 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a23536e0-7351-5f09-a3c0-98b1bc7f8fff', 'data_vg': 'ceph-a23536e0-7351-5f09-a3c0-98b1bc7f8fff'})  2025-05-31 20:51:51.351688 | orchestrator | skipping: [testbed-node-5] 2025-05-31 20:51:51.352310 | orchestrator | 2025-05-31 20:51:51.352882 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-05-31 20:51:51.353666 | orchestrator | Saturday 31 May 2025 20:51:51 +0000 (0:00:00.145) 0:01:00.659 ********** 2025-05-31 20:51:51.489510 | orchestrator | 
ok: [testbed-node-5] 2025-05-31 20:51:51.489755 | orchestrator | 2025-05-31 20:51:51.490762 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-05-31 20:51:51.491296 | orchestrator | Saturday 31 May 2025 20:51:51 +0000 (0:00:00.138) 0:01:00.798 ********** 2025-05-31 20:51:51.840721 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-edfa5e9a-3f1a-54c1-83f4-345bb781a14b', 'data_vg': 'ceph-edfa5e9a-3f1a-54c1-83f4-345bb781a14b'})  2025-05-31 20:51:51.840875 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a23536e0-7351-5f09-a3c0-98b1bc7f8fff', 'data_vg': 'ceph-a23536e0-7351-5f09-a3c0-98b1bc7f8fff'})  2025-05-31 20:51:51.841684 | orchestrator | skipping: [testbed-node-5] 2025-05-31 20:51:51.842325 | orchestrator | 2025-05-31 20:51:51.843390 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-05-31 20:51:51.843855 | orchestrator | Saturday 31 May 2025 20:51:51 +0000 (0:00:00.350) 0:01:01.148 ********** 2025-05-31 20:51:51.991665 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-edfa5e9a-3f1a-54c1-83f4-345bb781a14b', 'data_vg': 'ceph-edfa5e9a-3f1a-54c1-83f4-345bb781a14b'})  2025-05-31 20:51:51.991777 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a23536e0-7351-5f09-a3c0-98b1bc7f8fff', 'data_vg': 'ceph-a23536e0-7351-5f09-a3c0-98b1bc7f8fff'})  2025-05-31 20:51:51.992800 | orchestrator | skipping: [testbed-node-5] 2025-05-31 20:51:51.993225 | orchestrator | 2025-05-31 20:51:51.995620 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-05-31 20:51:51.995939 | orchestrator | Saturday 31 May 2025 20:51:51 +0000 (0:00:00.148) 0:01:01.297 ********** 2025-05-31 20:51:52.138192 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-edfa5e9a-3f1a-54c1-83f4-345bb781a14b', 'data_vg': 'ceph-edfa5e9a-3f1a-54c1-83f4-345bb781a14b'})  2025-05-31 20:51:52.139565 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a23536e0-7351-5f09-a3c0-98b1bc7f8fff', 'data_vg': 'ceph-a23536e0-7351-5f09-a3c0-98b1bc7f8fff'})  2025-05-31 20:51:52.140296 | orchestrator | skipping: [testbed-node-5] 2025-05-31 20:51:52.141376 | orchestrator | 2025-05-31 20:51:52.142106 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-05-31 20:51:52.142518 | orchestrator | Saturday 31 May 2025 20:51:52 +0000 (0:00:00.149) 0:01:01.447 ********** 2025-05-31 20:51:52.273501 | orchestrator | skipping: [testbed-node-5] 2025-05-31 20:51:52.274308 | orchestrator | 2025-05-31 20:51:52.275022 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-05-31 20:51:52.275345 | orchestrator | Saturday 31 May 2025 20:51:52 +0000 (0:00:00.134) 0:01:01.581 ********** 2025-05-31 20:51:52.415615 | orchestrator | skipping: [testbed-node-5] 2025-05-31 20:51:52.416424 | orchestrator | 2025-05-31 20:51:52.417438 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-05-31 20:51:52.418091 | orchestrator | Saturday 31 May 2025 20:51:52 +0000 (0:00:00.142) 0:01:01.723 ********** 2025-05-31 20:51:52.548498 | orchestrator | skipping: [testbed-node-5] 2025-05-31 20:51:52.549509 | orchestrator | 2025-05-31 20:51:52.550313 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-05-31 20:51:52.551004 | 
orchestrator | Saturday 31 May 2025 20:51:52 +0000 (0:00:00.132) 0:01:01.856 ********** 2025-05-31 20:51:52.715270 | orchestrator | ok: [testbed-node-5] => { 2025-05-31 20:51:52.715365 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-05-31 20:51:52.715378 | orchestrator | } 2025-05-31 20:51:52.715391 | orchestrator | 2025-05-31 20:51:52.715403 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-05-31 20:51:52.715415 | orchestrator | Saturday 31 May 2025 20:51:52 +0000 (0:00:00.161) 0:01:02.018 ********** 2025-05-31 20:51:52.845522 | orchestrator | ok: [testbed-node-5] => { 2025-05-31 20:51:52.845653 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-05-31 20:51:52.846004 | orchestrator | } 2025-05-31 20:51:52.846931 | orchestrator | 2025-05-31 20:51:52.849549 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-05-31 20:51:52.849572 | orchestrator | Saturday 31 May 2025 20:51:52 +0000 (0:00:00.134) 0:01:02.153 ********** 2025-05-31 20:51:52.976764 | orchestrator | ok: [testbed-node-5] => { 2025-05-31 20:51:52.977652 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-05-31 20:51:52.978792 | orchestrator | } 2025-05-31 20:51:52.979030 | orchestrator | 2025-05-31 20:51:52.980712 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-05-31 20:51:52.980745 | orchestrator | Saturday 31 May 2025 20:51:52 +0000 (0:00:00.132) 0:01:02.285 ********** 2025-05-31 20:51:53.491999 | orchestrator | ok: [testbed-node-5] 2025-05-31 20:51:53.492244 | orchestrator | 2025-05-31 20:51:53.492596 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-05-31 20:51:53.494245 | orchestrator | Saturday 31 May 2025 20:51:53 +0000 (0:00:00.514) 0:01:02.799 ********** 2025-05-31 20:51:53.979207 | orchestrator | ok: [testbed-node-5] 2025-05-31 20:51:53.979418 | orchestrator | 2025-05-31 20:51:53.980279 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-05-31 20:51:53.980729 | orchestrator | Saturday 31 May 2025 20:51:53 +0000 (0:00:00.488) 0:01:03.288 ********** 2025-05-31 20:51:54.489201 | orchestrator | ok: [testbed-node-5] 2025-05-31 20:51:54.490208 | orchestrator | 2025-05-31 20:51:54.490988 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-05-31 20:51:54.491723 | orchestrator | Saturday 31 May 2025 20:51:54 +0000 (0:00:00.506) 0:01:03.794 ********** 2025-05-31 20:51:54.819051 | orchestrator | ok: [testbed-node-5] 2025-05-31 20:51:54.819669 | orchestrator | 2025-05-31 20:51:54.820051 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-05-31 20:51:54.820910 | orchestrator | Saturday 31 May 2025 20:51:54 +0000 (0:00:00.332) 0:01:04.127 ********** 2025-05-31 20:51:54.927873 | orchestrator | skipping: [testbed-node-5] 2025-05-31 20:51:54.928214 | orchestrator | 2025-05-31 20:51:54.929425 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-05-31 20:51:54.929775 | orchestrator | Saturday 31 May 2025 20:51:54 +0000 (0:00:00.109) 0:01:04.236 ********** 2025-05-31 20:51:55.054754 | orchestrator | skipping: [testbed-node-5] 2025-05-31 20:51:55.055485 | orchestrator | 2025-05-31 20:51:55.055612 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-05-31 
20:51:55.056507 | orchestrator | Saturday 31 May 2025 20:51:55 +0000 (0:00:00.126) 0:01:04.362 ********** 2025-05-31 20:51:55.181774 | orchestrator | ok: [testbed-node-5] => { 2025-05-31 20:51:55.182484 | orchestrator |  "vgs_report": { 2025-05-31 20:51:55.183378 | orchestrator |  "vg": [] 2025-05-31 20:51:55.184894 | orchestrator |  } 2025-05-31 20:51:55.185622 | orchestrator | } 2025-05-31 20:51:55.186246 | orchestrator | 2025-05-31 20:51:55.186883 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-05-31 20:51:55.187825 | orchestrator | Saturday 31 May 2025 20:51:55 +0000 (0:00:00.126) 0:01:04.489 ********** 2025-05-31 20:51:55.308773 | orchestrator | skipping: [testbed-node-5] 2025-05-31 20:51:55.309436 | orchestrator | 2025-05-31 20:51:55.311419 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-05-31 20:51:55.312354 | orchestrator | Saturday 31 May 2025 20:51:55 +0000 (0:00:00.127) 0:01:04.617 ********** 2025-05-31 20:51:55.434332 | orchestrator | skipping: [testbed-node-5] 2025-05-31 20:51:55.435033 | orchestrator | 2025-05-31 20:51:55.435987 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-05-31 20:51:55.436945 | orchestrator | Saturday 31 May 2025 20:51:55 +0000 (0:00:00.124) 0:01:04.741 ********** 2025-05-31 20:51:55.555229 | orchestrator | skipping: [testbed-node-5] 2025-05-31 20:51:55.556361 | orchestrator | 2025-05-31 20:51:55.557550 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-05-31 20:51:55.558887 | orchestrator | Saturday 31 May 2025 20:51:55 +0000 (0:00:00.120) 0:01:04.862 ********** 2025-05-31 20:51:55.694230 | orchestrator | skipping: [testbed-node-5] 2025-05-31 20:51:55.694851 | orchestrator | 2025-05-31 20:51:55.695885 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-05-31 20:51:55.696784 | orchestrator | Saturday 31 May 2025 20:51:55 +0000 (0:00:00.139) 0:01:05.002 ********** 2025-05-31 20:51:55.827456 | orchestrator | skipping: [testbed-node-5] 2025-05-31 20:51:55.828744 | orchestrator | 2025-05-31 20:51:55.829750 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-05-31 20:51:55.830984 | orchestrator | Saturday 31 May 2025 20:51:55 +0000 (0:00:00.132) 0:01:05.134 ********** 2025-05-31 20:51:55.969489 | orchestrator | skipping: [testbed-node-5] 2025-05-31 20:51:55.970532 | orchestrator | 2025-05-31 20:51:55.970934 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-05-31 20:51:55.972241 | orchestrator | Saturday 31 May 2025 20:51:55 +0000 (0:00:00.142) 0:01:05.277 ********** 2025-05-31 20:51:56.114538 | orchestrator | skipping: [testbed-node-5] 2025-05-31 20:51:56.115004 | orchestrator | 2025-05-31 20:51:56.115989 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-05-31 20:51:56.116522 | orchestrator | Saturday 31 May 2025 20:51:56 +0000 (0:00:00.142) 0:01:05.420 ********** 2025-05-31 20:51:56.249026 | orchestrator | skipping: [testbed-node-5] 2025-05-31 20:51:56.249223 | orchestrator | 2025-05-31 20:51:56.249872 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-05-31 20:51:56.250653 | orchestrator | Saturday 31 May 2025 20:51:56 +0000 (0:00:00.135) 0:01:05.555 ********** 2025-05-31 20:51:56.579958 | 
orchestrator | skipping: [testbed-node-5] 2025-05-31 20:51:56.580521 | orchestrator | 2025-05-31 20:51:56.581169 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-05-31 20:51:56.581856 | orchestrator | Saturday 31 May 2025 20:51:56 +0000 (0:00:00.332) 0:01:05.888 ********** 2025-05-31 20:51:56.713400 | orchestrator | skipping: [testbed-node-5] 2025-05-31 20:51:56.713646 | orchestrator | 2025-05-31 20:51:56.714803 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-05-31 20:51:56.715990 | orchestrator | Saturday 31 May 2025 20:51:56 +0000 (0:00:00.133) 0:01:06.021 ********** 2025-05-31 20:51:56.846878 | orchestrator | skipping: [testbed-node-5] 2025-05-31 20:51:56.847371 | orchestrator | 2025-05-31 20:51:56.847858 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-05-31 20:51:56.848591 | orchestrator | Saturday 31 May 2025 20:51:56 +0000 (0:00:00.133) 0:01:06.155 ********** 2025-05-31 20:51:56.980636 | orchestrator | skipping: [testbed-node-5] 2025-05-31 20:51:56.980794 | orchestrator | 2025-05-31 20:51:56.981818 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-05-31 20:51:56.982523 | orchestrator | Saturday 31 May 2025 20:51:56 +0000 (0:00:00.133) 0:01:06.289 ********** 2025-05-31 20:51:57.132566 | orchestrator | skipping: [testbed-node-5] 2025-05-31 20:51:57.133198 | orchestrator | 2025-05-31 20:51:57.133634 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-05-31 20:51:57.134868 | orchestrator | Saturday 31 May 2025 20:51:57 +0000 (0:00:00.149) 0:01:06.438 ********** 2025-05-31 20:51:57.281250 | orchestrator | skipping: [testbed-node-5] 2025-05-31 20:51:57.281748 | orchestrator | 2025-05-31 20:51:57.282519 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-05-31 20:51:57.283167 | orchestrator | Saturday 31 May 2025 20:51:57 +0000 (0:00:00.150) 0:01:06.589 ********** 2025-05-31 20:51:57.428625 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-edfa5e9a-3f1a-54c1-83f4-345bb781a14b', 'data_vg': 'ceph-edfa5e9a-3f1a-54c1-83f4-345bb781a14b'})  2025-05-31 20:51:57.428798 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a23536e0-7351-5f09-a3c0-98b1bc7f8fff', 'data_vg': 'ceph-a23536e0-7351-5f09-a3c0-98b1bc7f8fff'})  2025-05-31 20:51:57.430103 | orchestrator | skipping: [testbed-node-5] 2025-05-31 20:51:57.430815 | orchestrator | 2025-05-31 20:51:57.431378 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-05-31 20:51:57.431562 | orchestrator | Saturday 31 May 2025 20:51:57 +0000 (0:00:00.147) 0:01:06.737 ********** 2025-05-31 20:51:57.577530 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-edfa5e9a-3f1a-54c1-83f4-345bb781a14b', 'data_vg': 'ceph-edfa5e9a-3f1a-54c1-83f4-345bb781a14b'})  2025-05-31 20:51:57.578178 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a23536e0-7351-5f09-a3c0-98b1bc7f8fff', 'data_vg': 'ceph-a23536e0-7351-5f09-a3c0-98b1bc7f8fff'})  2025-05-31 20:51:57.579675 | orchestrator | skipping: [testbed-node-5] 2025-05-31 20:51:57.580533 | orchestrator | 2025-05-31 20:51:57.581650 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-05-31 20:51:57.582364 | orchestrator | Saturday 31 
May 2025 20:51:57 +0000 (0:00:00.147) 0:01:06.885 ********** 2025-05-31 20:51:57.732197 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-edfa5e9a-3f1a-54c1-83f4-345bb781a14b', 'data_vg': 'ceph-edfa5e9a-3f1a-54c1-83f4-345bb781a14b'})  2025-05-31 20:51:57.732364 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a23536e0-7351-5f09-a3c0-98b1bc7f8fff', 'data_vg': 'ceph-a23536e0-7351-5f09-a3c0-98b1bc7f8fff'})  2025-05-31 20:51:57.732964 | orchestrator | skipping: [testbed-node-5] 2025-05-31 20:51:57.733617 | orchestrator | 2025-05-31 20:51:57.734333 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-05-31 20:51:57.736254 | orchestrator | Saturday 31 May 2025 20:51:57 +0000 (0:00:00.154) 0:01:07.040 ********** 2025-05-31 20:51:57.875936 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-edfa5e9a-3f1a-54c1-83f4-345bb781a14b', 'data_vg': 'ceph-edfa5e9a-3f1a-54c1-83f4-345bb781a14b'})  2025-05-31 20:51:57.876717 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a23536e0-7351-5f09-a3c0-98b1bc7f8fff', 'data_vg': 'ceph-a23536e0-7351-5f09-a3c0-98b1bc7f8fff'})  2025-05-31 20:51:57.877532 | orchestrator | skipping: [testbed-node-5] 2025-05-31 20:51:57.878762 | orchestrator | 2025-05-31 20:51:57.879393 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-05-31 20:51:57.880346 | orchestrator | Saturday 31 May 2025 20:51:57 +0000 (0:00:00.144) 0:01:07.184 ********** 2025-05-31 20:51:58.029382 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-edfa5e9a-3f1a-54c1-83f4-345bb781a14b', 'data_vg': 'ceph-edfa5e9a-3f1a-54c1-83f4-345bb781a14b'})  2025-05-31 20:51:58.029987 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a23536e0-7351-5f09-a3c0-98b1bc7f8fff', 'data_vg': 'ceph-a23536e0-7351-5f09-a3c0-98b1bc7f8fff'})  2025-05-31 20:51:58.031262 | orchestrator | skipping: [testbed-node-5] 2025-05-31 20:51:58.032109 | orchestrator | 2025-05-31 20:51:58.033612 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-05-31 20:51:58.034042 | orchestrator | Saturday 31 May 2025 20:51:58 +0000 (0:00:00.152) 0:01:07.337 ********** 2025-05-31 20:51:58.177419 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-edfa5e9a-3f1a-54c1-83f4-345bb781a14b', 'data_vg': 'ceph-edfa5e9a-3f1a-54c1-83f4-345bb781a14b'})  2025-05-31 20:51:58.177578 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a23536e0-7351-5f09-a3c0-98b1bc7f8fff', 'data_vg': 'ceph-a23536e0-7351-5f09-a3c0-98b1bc7f8fff'})  2025-05-31 20:51:58.178716 | orchestrator | skipping: [testbed-node-5] 2025-05-31 20:51:58.179368 | orchestrator | 2025-05-31 20:51:58.180188 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-05-31 20:51:58.180484 | orchestrator | Saturday 31 May 2025 20:51:58 +0000 (0:00:00.147) 0:01:07.484 ********** 2025-05-31 20:51:58.532550 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-edfa5e9a-3f1a-54c1-83f4-345bb781a14b', 'data_vg': 'ceph-edfa5e9a-3f1a-54c1-83f4-345bb781a14b'})  2025-05-31 20:51:58.533085 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a23536e0-7351-5f09-a3c0-98b1bc7f8fff', 'data_vg': 'ceph-a23536e0-7351-5f09-a3c0-98b1bc7f8fff'})  2025-05-31 20:51:58.533868 | orchestrator | skipping: [testbed-node-5] 2025-05-31 
20:51:58.535517 | orchestrator | 2025-05-31 20:51:58.535617 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-05-31 20:51:58.536195 | orchestrator | Saturday 31 May 2025 20:51:58 +0000 (0:00:00.356) 0:01:07.840 ********** 2025-05-31 20:51:58.694328 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-edfa5e9a-3f1a-54c1-83f4-345bb781a14b', 'data_vg': 'ceph-edfa5e9a-3f1a-54c1-83f4-345bb781a14b'})  2025-05-31 20:51:58.694430 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a23536e0-7351-5f09-a3c0-98b1bc7f8fff', 'data_vg': 'ceph-a23536e0-7351-5f09-a3c0-98b1bc7f8fff'})  2025-05-31 20:51:58.695274 | orchestrator | skipping: [testbed-node-5] 2025-05-31 20:51:58.696303 | orchestrator | 2025-05-31 20:51:58.697087 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-05-31 20:51:58.698317 | orchestrator | Saturday 31 May 2025 20:51:58 +0000 (0:00:00.161) 0:01:08.002 ********** 2025-05-31 20:51:59.193859 | orchestrator | ok: [testbed-node-5] 2025-05-31 20:51:59.193985 | orchestrator | 2025-05-31 20:51:59.194217 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-05-31 20:51:59.194714 | orchestrator | Saturday 31 May 2025 20:51:59 +0000 (0:00:00.498) 0:01:08.500 ********** 2025-05-31 20:51:59.698959 | orchestrator | ok: [testbed-node-5] 2025-05-31 20:51:59.699032 | orchestrator | 2025-05-31 20:51:59.699895 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-05-31 20:51:59.701095 | orchestrator | Saturday 31 May 2025 20:51:59 +0000 (0:00:00.506) 0:01:09.006 ********** 2025-05-31 20:51:59.857688 | orchestrator | ok: [testbed-node-5] 2025-05-31 20:51:59.858631 | orchestrator | 2025-05-31 20:51:59.859710 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-05-31 20:51:59.860404 | orchestrator | Saturday 31 May 2025 20:51:59 +0000 (0:00:00.159) 0:01:09.166 ********** 2025-05-31 20:52:00.037240 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-a23536e0-7351-5f09-a3c0-98b1bc7f8fff', 'vg_name': 'ceph-a23536e0-7351-5f09-a3c0-98b1bc7f8fff'}) 2025-05-31 20:52:00.039373 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-edfa5e9a-3f1a-54c1-83f4-345bb781a14b', 'vg_name': 'ceph-edfa5e9a-3f1a-54c1-83f4-345bb781a14b'}) 2025-05-31 20:52:00.040099 | orchestrator | 2025-05-31 20:52:00.041298 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-05-31 20:52:00.042330 | orchestrator | Saturday 31 May 2025 20:52:00 +0000 (0:00:00.179) 0:01:09.345 ********** 2025-05-31 20:52:00.185805 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-edfa5e9a-3f1a-54c1-83f4-345bb781a14b', 'data_vg': 'ceph-edfa5e9a-3f1a-54c1-83f4-345bb781a14b'})  2025-05-31 20:52:00.186342 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a23536e0-7351-5f09-a3c0-98b1bc7f8fff', 'data_vg': 'ceph-a23536e0-7351-5f09-a3c0-98b1bc7f8fff'})  2025-05-31 20:52:00.187192 | orchestrator | skipping: [testbed-node-5] 2025-05-31 20:52:00.187920 | orchestrator | 2025-05-31 20:52:00.188841 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-05-31 20:52:00.189374 | orchestrator | Saturday 31 May 2025 20:52:00 +0000 (0:00:00.147) 0:01:09.492 ********** 2025-05-31 20:52:00.341095 | orchestrator | 
skipping: [testbed-node-5] => (item={'data': 'osd-block-edfa5e9a-3f1a-54c1-83f4-345bb781a14b', 'data_vg': 'ceph-edfa5e9a-3f1a-54c1-83f4-345bb781a14b'})  2025-05-31 20:52:00.341316 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a23536e0-7351-5f09-a3c0-98b1bc7f8fff', 'data_vg': 'ceph-a23536e0-7351-5f09-a3c0-98b1bc7f8fff'})  2025-05-31 20:52:00.342616 | orchestrator | skipping: [testbed-node-5] 2025-05-31 20:52:00.344672 | orchestrator | 2025-05-31 20:52:00.345476 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-05-31 20:52:00.346197 | orchestrator | Saturday 31 May 2025 20:52:00 +0000 (0:00:00.156) 0:01:09.649 ********** 2025-05-31 20:52:00.486512 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-edfa5e9a-3f1a-54c1-83f4-345bb781a14b', 'data_vg': 'ceph-edfa5e9a-3f1a-54c1-83f4-345bb781a14b'})  2025-05-31 20:52:00.486886 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a23536e0-7351-5f09-a3c0-98b1bc7f8fff', 'data_vg': 'ceph-a23536e0-7351-5f09-a3c0-98b1bc7f8fff'})  2025-05-31 20:52:00.487714 | orchestrator | skipping: [testbed-node-5] 2025-05-31 20:52:00.488366 | orchestrator | 2025-05-31 20:52:00.489320 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-05-31 20:52:00.490242 | orchestrator | Saturday 31 May 2025 20:52:00 +0000 (0:00:00.145) 0:01:09.794 ********** 2025-05-31 20:52:00.630558 | orchestrator | ok: [testbed-node-5] => { 2025-05-31 20:52:00.630776 | orchestrator |  "lvm_report": { 2025-05-31 20:52:00.631490 | orchestrator |  "lv": [ 2025-05-31 20:52:00.632235 | orchestrator |  { 2025-05-31 20:52:00.633036 | orchestrator |  "lv_name": "osd-block-a23536e0-7351-5f09-a3c0-98b1bc7f8fff", 2025-05-31 20:52:00.633998 | orchestrator |  "vg_name": "ceph-a23536e0-7351-5f09-a3c0-98b1bc7f8fff" 2025-05-31 20:52:00.634729 | orchestrator |  }, 2025-05-31 20:52:00.635572 | orchestrator |  { 2025-05-31 20:52:00.636247 | orchestrator |  "lv_name": "osd-block-edfa5e9a-3f1a-54c1-83f4-345bb781a14b", 2025-05-31 20:52:00.636976 | orchestrator |  "vg_name": "ceph-edfa5e9a-3f1a-54c1-83f4-345bb781a14b" 2025-05-31 20:52:00.638102 | orchestrator |  } 2025-05-31 20:52:00.638534 | orchestrator |  ], 2025-05-31 20:52:00.639042 | orchestrator |  "pv": [ 2025-05-31 20:52:00.639735 | orchestrator |  { 2025-05-31 20:52:00.640460 | orchestrator |  "pv_name": "/dev/sdb", 2025-05-31 20:52:00.640928 | orchestrator |  "vg_name": "ceph-edfa5e9a-3f1a-54c1-83f4-345bb781a14b" 2025-05-31 20:52:00.641874 | orchestrator |  }, 2025-05-31 20:52:00.642845 | orchestrator |  { 2025-05-31 20:52:00.643695 | orchestrator |  "pv_name": "/dev/sdc", 2025-05-31 20:52:00.644574 | orchestrator |  "vg_name": "ceph-a23536e0-7351-5f09-a3c0-98b1bc7f8fff" 2025-05-31 20:52:00.645553 | orchestrator |  } 2025-05-31 20:52:00.646108 | orchestrator |  ] 2025-05-31 20:52:00.646743 | orchestrator |  } 2025-05-31 20:52:00.647637 | orchestrator | } 2025-05-31 20:52:00.648668 | orchestrator | 2025-05-31 20:52:00.651528 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-31 20:52:00.651572 | orchestrator | 2025-05-31 20:52:00 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-31 20:52:00.651588 | orchestrator | 2025-05-31 20:52:00 | INFO  | Please wait and do not abort execution. 
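For reference: per OSD disk listed in ceph_osd_devices, the play above created one LVM volume group named ceph-<osd_lvm_uuid> on the raw device and one logical volume osd-block-<osd_lvm_uuid> inside it, then re-read the LVM state as JSON to build the lvm_report printed above. A minimal shell sketch of the equivalent operations, assuming the device/UUID mapping shown in the log; the pvcreate step and the 100%FREE sizing are assumptions, since the play's exact module arguments are not visible here:

  # Mapping copied from the ceph_osd_devices items logged above
  declare -A OSD_UUIDS=(
    [sdb]="edfa5e9a-3f1a-54c1-83f4-345bb781a14b"
    [sdc]="a23536e0-7351-5f09-a3c0-98b1bc7f8fff"
  )
  for dev in "${!OSD_UUIDS[@]}"; do
    uuid="${OSD_UUIDS[$dev]}"
    pvcreate "/dev/${dev}"                                      # prepare the disk as an LVM PV (assumed; vgcreate can also do this implicitly)
    vgcreate "ceph-${uuid}" "/dev/${dev}"                       # "Create block VGs": one VG per OSD device
    lvcreate -l 100%FREE -n "osd-block-${uuid}" "ceph-${uuid}"  # "Create block LVs"; the sizing flag is an assumption
  done
  # The "Get list of Ceph LVs/PVs with associated VGs" tasks read LVM state as
  # JSON, which is the source of the lvm_report structure shown above:
  lvs --reportformat json -o lv_name,vg_name
  pvs --reportformat json -o pv_name,vg_name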
2025-05-31 20:52:00.652107 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-05-31 20:52:00.653194 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-05-31 20:52:00.653791 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-05-31 20:52:00.654836 | orchestrator | 2025-05-31 20:52:00.655481 | orchestrator | 2025-05-31 20:52:00.656069 | orchestrator | 2025-05-31 20:52:00.656877 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-31 20:52:00.657982 | orchestrator | Saturday 31 May 2025 20:52:00 +0000 (0:00:00.144) 0:01:09.939 ********** 2025-05-31 20:52:00.658982 | orchestrator | =============================================================================== 2025-05-31 20:52:00.660282 | orchestrator | Create block VGs -------------------------------------------------------- 6.19s 2025-05-31 20:52:00.661408 | orchestrator | Create block LVs -------------------------------------------------------- 3.87s 2025-05-31 20:52:00.662341 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.84s 2025-05-31 20:52:00.662946 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.51s 2025-05-31 20:52:00.663861 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.50s 2025-05-31 20:52:00.664317 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.47s 2025-05-31 20:52:00.664865 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.47s 2025-05-31 20:52:00.665548 | orchestrator | Add known partitions to the list of available block devices ------------- 1.40s 2025-05-31 20:52:00.666088 | orchestrator | Add known links to the list of available block devices ------------------ 1.20s 2025-05-31 20:52:00.666658 | orchestrator | Add known partitions to the list of available block devices ------------- 0.99s 2025-05-31 20:52:00.667460 | orchestrator | Print LVM report data --------------------------------------------------- 0.91s 2025-05-31 20:52:00.667947 | orchestrator | Add known partitions to the list of available block devices ------------- 0.80s 2025-05-31 20:52:00.668623 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.72s 2025-05-31 20:52:00.669066 | orchestrator | Add known links to the list of available block devices ------------------ 0.68s 2025-05-31 20:52:00.669513 | orchestrator | Get initial list of available block devices ----------------------------- 0.67s 2025-05-31 20:52:00.670172 | orchestrator | Create DB LVs for ceph_db_wal_devices ----------------------------------- 0.65s 2025-05-31 20:52:00.671021 | orchestrator | Count OSDs put on ceph_db_devices defined in lvm_volumes ---------------- 0.65s 2025-05-31 20:52:00.671236 | orchestrator | Fail if DB LV defined in lvm_volumes is missing ------------------------- 0.65s 2025-05-31 20:52:00.671647 | orchestrator | Add known partitions to the list of available block devices ------------- 0.64s 2025-05-31 20:52:00.672177 | orchestrator | Print 'Create WAL LVs for ceph_wal_devices' ----------------------------- 0.63s 2025-05-31 20:52:02.858685 | orchestrator | Registering Redlock._acquired_script 2025-05-31 20:52:02.858795 | orchestrator | Registering Redlock._extend_script 2025-05-31 
20:52:02.858811 | orchestrator | Registering Redlock._release_script 2025-05-31 20:52:02.922651 | orchestrator | 2025-05-31 20:52:02 | INFO  | Task b5e6a6ee-b76f-4191-8d5a-ad2ae2996962 (facts) was prepared for execution. 2025-05-31 20:52:02.922732 | orchestrator | 2025-05-31 20:52:02 | INFO  | It takes a moment until task b5e6a6ee-b76f-4191-8d5a-ad2ae2996962 (facts) has been started and output is visible here. 2025-05-31 20:52:06.714481 | orchestrator | 2025-05-31 20:52:06.715097 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-05-31 20:52:06.716289 | orchestrator | 2025-05-31 20:52:06.717677 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-05-31 20:52:06.718913 | orchestrator | Saturday 31 May 2025 20:52:06 +0000 (0:00:00.206) 0:00:00.206 ********** 2025-05-31 20:52:08.030959 | orchestrator | ok: [testbed-manager] 2025-05-31 20:52:08.033740 | orchestrator | ok: [testbed-node-3] 2025-05-31 20:52:08.033792 | orchestrator | ok: [testbed-node-4] 2025-05-31 20:52:08.034157 | orchestrator | ok: [testbed-node-1] 2025-05-31 20:52:08.035756 | orchestrator | ok: [testbed-node-0] 2025-05-31 20:52:08.036983 | orchestrator | ok: [testbed-node-2] 2025-05-31 20:52:08.037992 | orchestrator | ok: [testbed-node-5] 2025-05-31 20:52:08.039023 | orchestrator | 2025-05-31 20:52:08.041621 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-05-31 20:52:08.041663 | orchestrator | Saturday 31 May 2025 20:52:08 +0000 (0:00:01.315) 0:00:01.522 ********** 2025-05-31 20:52:08.175102 | orchestrator | skipping: [testbed-manager] 2025-05-31 20:52:08.246602 | orchestrator | skipping: [testbed-node-0] 2025-05-31 20:52:08.316866 | orchestrator | skipping: [testbed-node-1] 2025-05-31 20:52:08.387311 | orchestrator | skipping: [testbed-node-2] 2025-05-31 20:52:08.457564 | orchestrator | skipping: [testbed-node-3] 2025-05-31 20:52:09.103294 | orchestrator | skipping: [testbed-node-4] 2025-05-31 20:52:09.103395 | orchestrator | skipping: [testbed-node-5] 2025-05-31 20:52:09.105402 | orchestrator | 2025-05-31 20:52:09.106419 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-05-31 20:52:09.106785 | orchestrator | 2025-05-31 20:52:09.107658 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-05-31 20:52:09.108363 | orchestrator | Saturday 31 May 2025 20:52:09 +0000 (0:00:01.073) 0:00:02.596 ********** 2025-05-31 20:52:13.707801 | orchestrator | ok: [testbed-node-2] 2025-05-31 20:52:13.707977 | orchestrator | ok: [testbed-node-1] 2025-05-31 20:52:13.709511 | orchestrator | ok: [testbed-node-0] 2025-05-31 20:52:13.710961 | orchestrator | ok: [testbed-manager] 2025-05-31 20:52:13.712617 | orchestrator | ok: [testbed-node-3] 2025-05-31 20:52:13.713506 | orchestrator | ok: [testbed-node-4] 2025-05-31 20:52:13.714679 | orchestrator | ok: [testbed-node-5] 2025-05-31 20:52:13.715690 | orchestrator | 2025-05-31 20:52:13.716894 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-05-31 20:52:13.717537 | orchestrator | 2025-05-31 20:52:13.718378 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-05-31 20:52:13.719386 | orchestrator | Saturday 31 May 2025 20:52:13 +0000 (0:00:04.605) 0:00:07.201 ********** 2025-05-31 20:52:13.864544 | orchestrator | skipping: [testbed-manager] 
2025-05-31 20:52:13.942619 | orchestrator | skipping: [testbed-node-0] 2025-05-31 20:52:14.022427 | orchestrator | skipping: [testbed-node-1] 2025-05-31 20:52:14.100044 | orchestrator | skipping: [testbed-node-2] 2025-05-31 20:52:14.179383 | orchestrator | skipping: [testbed-node-3] 2025-05-31 20:52:14.222530 | orchestrator | skipping: [testbed-node-4] 2025-05-31 20:52:14.222618 | orchestrator | skipping: [testbed-node-5] 2025-05-31 20:52:14.222710 | orchestrator | 2025-05-31 20:52:14.223098 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-31 20:52:14.223389 | orchestrator | 2025-05-31 20:52:14 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-31 20:52:14.223765 | orchestrator | 2025-05-31 20:52:14 | INFO  | Please wait and do not abort execution. 2025-05-31 20:52:14.224547 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-31 20:52:14.224801 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-31 20:52:14.225389 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-31 20:52:14.225850 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-31 20:52:14.226449 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-31 20:52:14.227002 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-31 20:52:14.227430 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-31 20:52:14.227848 | orchestrator | 2025-05-31 20:52:14.228642 | orchestrator | 2025-05-31 20:52:14.228829 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-31 20:52:14.229142 | orchestrator | Saturday 31 May 2025 20:52:14 +0000 (0:00:00.516) 0:00:07.718 ********** 2025-05-31 20:52:14.229586 | orchestrator | =============================================================================== 2025-05-31 20:52:14.229970 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.61s 2025-05-31 20:52:14.230423 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.32s 2025-05-31 20:52:14.231125 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.07s 2025-05-31 20:52:14.231225 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.52s 2025-05-31 20:52:14.795839 | orchestrator | 2025-05-31 20:52:14.797513 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Sat May 31 20:52:14 UTC 2025 2025-05-31 20:52:14.797579 | orchestrator | 2025-05-31 20:52:16.423288 | orchestrator | 2025-05-31 20:52:16 | INFO  | Collection nutshell is prepared for execution 2025-05-31 20:52:16.423390 | orchestrator | 2025-05-31 20:52:16 | INFO  | D [0] - dotfiles 2025-05-31 20:52:16.428422 | orchestrator | Registering Redlock._acquired_script 2025-05-31 20:52:16.428515 | orchestrator | Registering Redlock._extend_script 2025-05-31 20:52:16.428529 | orchestrator | Registering Redlock._release_script 2025-05-31 20:52:16.435658 | orchestrator | 2025-05-31 20:52:16 | INFO  | D [0] - homer 2025-05-31 20:52:16.435719 | orchestrator | 2025-05-31 20:52:16 | INFO  | D [0] - 
netdata
2025-05-31 20:52:16.435731 | orchestrator | 2025-05-31 20:52:16 | INFO  | D [0] - openstackclient
2025-05-31 20:52:16.435743 | orchestrator | 2025-05-31 20:52:16 | INFO  | D [0] - phpmyadmin
2025-05-31 20:52:16.435754 | orchestrator | 2025-05-31 20:52:16 | INFO  | A [0] - common
2025-05-31 20:52:16.437870 | orchestrator | 2025-05-31 20:52:16 | INFO  | A [1] -- loadbalancer
2025-05-31 20:52:16.437924 | orchestrator | 2025-05-31 20:52:16 | INFO  | D [2] --- opensearch
2025-05-31 20:52:16.438521 | orchestrator | 2025-05-31 20:52:16 | INFO  | A [2] --- mariadb-ng
2025-05-31 20:52:16.438548 | orchestrator | 2025-05-31 20:52:16 | INFO  | D [3] ---- horizon
2025-05-31 20:52:16.438560 | orchestrator | 2025-05-31 20:52:16 | INFO  | A [3] ---- keystone
2025-05-31 20:52:16.438571 | orchestrator | 2025-05-31 20:52:16 | INFO  | A [4] ----- neutron
2025-05-31 20:52:16.438633 | orchestrator | 2025-05-31 20:52:16 | INFO  | D [5] ------ wait-for-nova
2025-05-31 20:52:16.438875 | orchestrator | 2025-05-31 20:52:16 | INFO  | A [5] ------ octavia
2025-05-31 20:52:16.439120 | orchestrator | 2025-05-31 20:52:16 | INFO  | D [4] ----- barbican
2025-05-31 20:52:16.439425 | orchestrator | 2025-05-31 20:52:16 | INFO  | D [4] ----- designate
2025-05-31 20:52:16.439447 | orchestrator | 2025-05-31 20:52:16 | INFO  | D [4] ----- ironic
2025-05-31 20:52:16.439711 | orchestrator | 2025-05-31 20:52:16 | INFO  | D [4] ----- placement
2025-05-31 20:52:16.439731 | orchestrator | 2025-05-31 20:52:16 | INFO  | D [4] ----- magnum
2025-05-31 20:52:16.440221 | orchestrator | 2025-05-31 20:52:16 | INFO  | A [1] -- openvswitch
2025-05-31 20:52:16.440647 | orchestrator | 2025-05-31 20:52:16 | INFO  | D [2] --- ovn
2025-05-31 20:52:16.440700 | orchestrator | 2025-05-31 20:52:16 | INFO  | D [1] -- memcached
2025-05-31 20:52:16.440809 | orchestrator | 2025-05-31 20:52:16 | INFO  | D [1] -- redis
2025-05-31 20:52:16.440826 | orchestrator | 2025-05-31 20:52:16 | INFO  | D [1] -- rabbitmq-ng
2025-05-31 20:52:16.440837 | orchestrator | 2025-05-31 20:52:16 | INFO  | A [0] - kubernetes
2025-05-31 20:52:16.442718 | orchestrator | 2025-05-31 20:52:16 | INFO  | D [1] -- kubeconfig
2025-05-31 20:52:16.442757 | orchestrator | 2025-05-31 20:52:16 | INFO  | A [1] -- copy-kubeconfig
2025-05-31 20:52:16.442834 | orchestrator | 2025-05-31 20:52:16 | INFO  | A [0] - ceph
2025-05-31 20:52:16.444437 | orchestrator | 2025-05-31 20:52:16 | INFO  | A [1] -- ceph-pools
2025-05-31 20:52:16.444548 | orchestrator | 2025-05-31 20:52:16 | INFO  | A [2] --- copy-ceph-keys
2025-05-31 20:52:16.444565 | orchestrator | 2025-05-31 20:52:16 | INFO  | A [3] ---- cephclient
2025-05-31 20:52:16.444576 | orchestrator | 2025-05-31 20:52:16 | INFO  | D [4] ----- ceph-bootstrap-dashboard
2025-05-31 20:52:16.444665 | orchestrator | 2025-05-31 20:52:16 | INFO  | A [4] ----- wait-for-keystone
2025-05-31 20:52:16.444682 | orchestrator | 2025-05-31 20:52:16 | INFO  | D [5] ------ kolla-ceph-rgw
2025-05-31 20:52:16.444693 | orchestrator | 2025-05-31 20:52:16 | INFO  | D [5] ------ glance
2025-05-31 20:52:16.444903 | orchestrator | 2025-05-31 20:52:16 | INFO  | D [5] ------ cinder
2025-05-31 20:52:16.444924 | orchestrator | 2025-05-31 20:52:16 | INFO  | D [5] ------ nova
2025-05-31 20:52:16.445143 | orchestrator | 2025-05-31 20:52:16 | INFO  | A [4] ----- prometheus
2025-05-31 20:52:16.445324 | orchestrator | 2025-05-31 20:52:16 | INFO  | D [5] ------ grafana
2025-05-31 20:52:16.636694 | orchestrator | 2025-05-31 20:52:16 | INFO  | All tasks of the collection nutshell are prepared for execution
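The block above is the dependency graph of the nutshell collection: the bracketed number is the nesting depth, the dashes repeat that depth visually, and an entry only starts once its parent has finished (keystone at depth 3 waits for mariadb-ng at depth 2, which waits for loadbalancer at depth 1, which waits for common at depth 0). The meaning of the leading A/D markers is not stated in the log, so they are left out of the following toy sketch, which only illustrates how such a depth-annotated tree print can be produced and is not the OSISM implementation:

  # Hypothetical adjacency list for a fragment of the graph above
  declare -A CHILDREN=(
    [common]="loadbalancer openvswitch memcached"
    [loadbalancer]="mariadb-ng"
    [mariadb-ng]="keystone"
  )
  print_tree() {  # $1 = task name, $2 = depth
    local dashes child
    dashes=$(printf -- '-%.0s' $(seq 1 $(( $2 + 1 ))))  # depth 0 -> "-", depth 1 -> "--", ...
    echo "[$2] ${dashes} $1"
    for child in ${CHILDREN[$1]:-}; do                  # unquoted on purpose: word-split the child list
      print_tree "$child" $(( $2 + 1 ))
    done
  }
  print_tree common 0  # prints "[0] - common", "[1] -- loadbalancer", "[2] --- mariadb-ng", ...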
2025-05-31 20:52:16.636788 | orchestrator | 2025-05-31 20:52:16 | INFO  | Tasks are running in the background
2025-05-31 20:52:19.177964 | orchestrator | 2025-05-31 20:52:19 | INFO  | No task IDs specified, wait for all currently running tasks
2025-05-31 20:52:21.318792 | orchestrator | 2025-05-31 20:52:21 | INFO  | Task de780906-fd56-4662-a3b4-6e7bcd0c6c91 is in state STARTED
2025-05-31 20:52:21.320773 | orchestrator | 2025-05-31 20:52:21 | INFO  | Task a79a1b73-0272-4f3e-bd51-e4a0c0574286 is in state STARTED
2025-05-31 20:52:21.322926 | orchestrator | 2025-05-31 20:52:21 | INFO  | Task a0bdcbf6-9613-4f1a-8dfe-6210c4f9a59a is in state STARTED
2025-05-31 20:52:21.328236 | orchestrator | 2025-05-31 20:52:21 | INFO  | Task 7ef79ff6-cae0-4999-a09d-2e03eaa0de98 is in state STARTED
2025-05-31 20:52:21.330245 | orchestrator | 2025-05-31 20:52:21 | INFO  | Task 71c06ffc-5872-4e70-bcb8-0731f25cc20a is in state STARTED
2025-05-31 20:52:21.331084 | orchestrator | 2025-05-31 20:52:21 | INFO  | Task 66d4f3d3-efcd-4308-b898-04305870110d is in state STARTED
2025-05-31 20:52:21.331966 | orchestrator | 2025-05-31 20:52:21 | INFO  | Task 548ac3b1-5047-417d-b1ba-517bdaea8bf1 is in state STARTED
2025-05-31 20:52:21.332105 | orchestrator | 2025-05-31 20:52:21 | INFO  | Wait 1 second(s) until the next check
2025-05-31 20:52:45.837016 | orchestrator | 2025-05-31 20:52:45 | INFO  | Task de780906-fd56-4662-a3b4-6e7bcd0c6c91 is in state STARTED
2025-05-31 20:52:45.837126 | orchestrator | 2025-05-31 20:52:45 | INFO  | Task a79a1b73-0272-4f3e-bd51-e4a0c0574286 is in state STARTED
2025-05-31 20:52:45.837141 | orchestrator | 2025-05-31 20:52:45 | INFO  | Task a0bdcbf6-9613-4f1a-8dfe-6210c4f9a59a is in state STARTED
2025-05-31 20:52:45.840731 | orchestrator |
2025-05-31 20:52:45.840839 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] *****************************************
2025-05-31 20:52:45.840861 | orchestrator |
2025-05-31 20:52:45.840917 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] ****
2025-05-31 20:52:45.840930 | orchestrator | Saturday 31 May 2025 20:52:27 +0000 (0:00:00.462) 0:00:00.462 **********
2025-05-31 20:52:45.840941 | orchestrator | changed: [testbed-node-1]
2025-05-31 20:52:45.840953 | orchestrator | changed: [testbed-node-3]
2025-05-31 20:52:45.840965 | orchestrator | changed: [testbed-manager]
2025-05-31 20:52:45.840976 | orchestrator | changed: [testbed-node-0]
2025-05-31 20:52:45.840987 | orchestrator | changed: [testbed-node-4]
2025-05-31 20:52:45.840998 | orchestrator | changed: [testbed-node-2]
2025-05-31 20:52:45.841008 | orchestrator | changed: [testbed-node-5]
2025-05-31 20:52:45.841019 | orchestrator |
2025-05-31 20:52:45.841031 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.]
******** 2025-05-31 20:52:45.841042 | orchestrator | Saturday 31 May 2025 20:52:31 +0000 (0:00:03.695) 0:00:04.157 ********** 2025-05-31 20:52:45.841053 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-05-31 20:52:45.841064 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-05-31 20:52:45.841075 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-05-31 20:52:45.841086 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-05-31 20:52:45.841096 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-05-31 20:52:45.841107 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-05-31 20:52:45.841139 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-05-31 20:52:45.841150 | orchestrator | 2025-05-31 20:52:45.841161 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] *** 2025-05-31 20:52:45.841172 | orchestrator | Saturday 31 May 2025 20:52:33 +0000 (0:00:02.472) 0:00:06.630 ********** 2025-05-31 20:52:45.841189 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-31 20:52:32.638908', 'end': '2025-05-31 20:52:32.643791', 'delta': '0:00:00.004883', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-31 20:52:45.841204 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-31 20:52:32.584496', 'end': '2025-05-31 20:52:32.593006', 'delta': '0:00:00.008510', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-31 20:52:45.841216 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-31 20:52:32.590037', 'end': '2025-05-31 20:52:32.600142', 'delta': '0:00:00.010105', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 
'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-05-31 20:52:45.841339 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-31 20:52:32.902672', 'end': '2025-05-31 20:52:32.913365', 'delta': '0:00:00.010693', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-05-31 20:52:45.841356 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-31 20:52:33.200253', 'end': '2025-05-31 20:52:33.208280', 'delta': '0:00:00.008027', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-05-31 20:52:45.841377 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-31 20:52:33.497223', 'end': '2025-05-31 20:52:33.504900', 'delta': '0:00:00.007677', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-05-31 20:52:45.841390 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-31 20:52:33.671768', 'end': '2025-05-31 20:52:33.680345', 'delta': '0:00:00.008577', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-05-31 20:52:45.841403 | orchestrator |
2025-05-31 20:52:45.841416 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] ****
2025-05-31 20:52:45.841428 | orchestrator | Saturday 31 May 2025 20:52:36 +0000 (0:00:02.705) 0:00:09.336 **********
2025-05-31 20:52:45.841440 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2025-05-31 20:52:45.841453 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2025-05-31 20:52:45.841465 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2025-05-31 20:52:45.841477 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2025-05-31 20:52:45.841489 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2025-05-31 20:52:45.841501 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2025-05-31 20:52:45.841513 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2025-05-31 20:52:45.841525 | orchestrator |
2025-05-31 20:52:45.841537 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ******************
2025-05-31 20:52:45.841549 | orchestrator | Saturday 31 May 2025 20:52:38 +0000 (0:00:02.287) 0:00:11.624 **********
2025-05-31 20:52:45.841562 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf)
2025-05-31 20:52:45.841574 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf)
2025-05-31 20:52:45.841587 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf)
2025-05-31 20:52:45.841607 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf)
2025-05-31 20:52:45.841625 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf)
2025-05-31 20:52:45.841643 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf)
2025-05-31 20:52:45.841662 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf)
2025-05-31 20:52:45.841680 | orchestrator |
2025-05-31 20:52:45.841699 | orchestrator | PLAY RECAP *********************************************************************
2025-05-31 20:52:45.841735 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-31 20:52:45.841765 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-31 20:52:45.841776 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-31 20:52:45.841787 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-31 20:52:45.841798 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-31 20:52:45.841809 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-31 20:52:45.841855 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-31 20:52:45.841869 | orchestrator |
2025-05-31 20:52:45.841880 | orchestrator |
2025-05-31 20:52:45.841891 | orchestrator | TASKS RECAP ********************************************************************
2025-05-31 20:52:45.841901 | orchestrator | Saturday 31 May 2025 20:52:42 +0000 (0:00:03.814) 0:00:15.439 **********
2025-05-31 20:52:45.841939 | orchestrator | ===============================================================================
2025-05-31 20:52:45.841950 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 3.81s
2025-05-31 20:52:45.841961 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 3.70s
2025-05-31 20:52:45.841971 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.71s
2025-05-31 20:52:45.841982 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 2.47s
2025-05-31 20:52:45.841992 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 2.29s
2025-05-31 20:52:45.842097 | orchestrator | 2025-05-31 20:52:45 | INFO  | Task 9e7110b6-7168-4ac4-a78a-010bf834459f is in state STARTED
2025-05-31 20:52:45.842115 | orchestrator | 2025-05-31 20:52:45 | INFO  | Task 7ef79ff6-cae0-4999-a09d-2e03eaa0de98 is in state SUCCESS
2025-05-31 20:52:45.842127 | orchestrator | 2025-05-31 20:52:45 | INFO  | Task 71c06ffc-5872-4e70-bcb8-0731f25cc20a is in state STARTED
2025-05-31 20:52:45.842244 | orchestrator | 2025-05-31 20:52:45 | INFO  | Task 66d4f3d3-efcd-4308-b898-04305870110d is in state STARTED
2025-05-31 20:52:45.842285 | orchestrator | 2025-05-31 20:52:45 | INFO  | Task 548ac3b1-5047-417d-b1ba-517bdaea8bf1 is in state STARTED
2025-05-31 20:52:45.842298 | orchestrator | 2025-05-31 20:52:45 | INFO  | Wait 1 second(s) until the next check
2025-05-31 20:52:48.889437 | orchestrator | 2025-05-31 20:52:48 | INFO  | Task de780906-fd56-4662-a3b4-6e7bcd0c6c91 is in state STARTED
2025-05-31 20:52:48.889995 | orchestrator | 2025-05-31 20:52:48 | INFO  | Task a79a1b73-0272-4f3e-bd51-e4a0c0574286 is in state STARTED
2025-05-31 20:52:48.891005 | orchestrator | 2025-05-31 20:52:48 | INFO  | Task a0bdcbf6-9613-4f1a-8dfe-6210c4f9a59a is in state STARTED
2025-05-31 20:52:48.892028 | orchestrator | 2025-05-31 20:52:48 | INFO  | Task 9e7110b6-7168-4ac4-a78a-010bf834459f is in state STARTED
2025-05-31 20:52:48.893850 | orchestrator | 2025-05-31 20:52:48 | INFO  | Task 71c06ffc-5872-4e70-bcb8-0731f25cc20a is in state STARTED
2025-05-31 20:52:48.897422 | orchestrator | 2025-05-31 20:52:48 | INFO  | Task 66d4f3d3-efcd-4308-b898-04305870110d is in state STARTED
2025-05-31 20:52:48.898891 | orchestrator | 2025-05-31 20:52:48 | INFO  | Task 548ac3b1-5047-417d-b1ba-517bdaea8bf1 is in state STARTED
2025-05-31 20:52:48.899006 | orchestrator | 2025-05-31 20:52:48 | INFO  | Wait 1 second(s) until the next check
2025-05-31 20:52:51.930329 | orchestrator | 2025-05-31 20:52:51 | INFO  | Task de780906-fd56-4662-a3b4-6e7bcd0c6c91 is in state STARTED
2025-05-31 20:52:51.930648 | orchestrator | 2025-05-31 20:52:51 | INFO  | Task a79a1b73-0272-4f3e-bd51-e4a0c0574286 is in state STARTED
2025-05-31 20:52:51.934826 | orchestrator | 2025-05-31 20:52:51 | INFO  | Task a0bdcbf6-9613-4f1a-8dfe-6210c4f9a59a is in state STARTED
2025-05-31 20:52:51.934908 | orchestrator | 2025-05-31 20:52:51 | INFO  | Task 9e7110b6-7168-4ac4-a78a-010bf834459f is in state STARTED
2025-05-31 20:52:51.934922 | orchestrator | 2025-05-31 20:52:51 | INFO  | Task 71c06ffc-5872-4e70-bcb8-0731f25cc20a is in state STARTED
2025-05-31 20:52:51.935485 | orchestrator | 2025-05-31 20:52:51 | INFO  | Task 66d4f3d3-efcd-4308-b898-04305870110d is in state STARTED
2025-05-31 20:52:51.937133 | orchestrator | 2025-05-31 20:52:51 | INFO  | Task 548ac3b1-5047-417d-b1ba-517bdaea8bf1 is in state STARTED
2025-05-31 20:52:51.937160 | orchestrator | 2025-05-31 20:52:51 | INFO  | Wait 1 second(s) until the next check
2025-05-31 20:52:55.012867 | orchestrator | 2025-05-31 20:52:55 | INFO  | Task de780906-fd56-4662-a3b4-6e7bcd0c6c91 is in state STARTED
2025-05-31 20:52:55.014419 | orchestrator | 2025-05-31 20:52:55 | INFO  | Task a79a1b73-0272-4f3e-bd51-e4a0c0574286 is in state STARTED
2025-05-31 20:52:55.015353 | orchestrator | 2025-05-31 20:52:55 | INFO  | Task a0bdcbf6-9613-4f1a-8dfe-6210c4f9a59a is in state STARTED
2025-05-31 20:52:55.016477 | orchestrator | 2025-05-31 20:52:55 | INFO  | Task 9e7110b6-7168-4ac4-a78a-010bf834459f is in state STARTED
2025-05-31 20:52:55.016996 | orchestrator | 2025-05-31 20:52:55 | INFO  | Task 71c06ffc-5872-4e70-bcb8-0731f25cc20a is in state STARTED
2025-05-31 20:52:55.018832 | orchestrator | 2025-05-31 20:52:55 | INFO  | Task 66d4f3d3-efcd-4308-b898-04305870110d is in state STARTED
2025-05-31 20:52:55.019917 | orchestrator | 2025-05-31 20:52:55 | INFO  | Task 548ac3b1-5047-417d-b1ba-517bdaea8bf1 is in state STARTED
2025-05-31 20:52:55.020013 | orchestrator | 2025-05-31 20:52:55 | INFO  | Wait 1 second(s) until the next check
2025-05-31 20:52:58.067600 | orchestrator | 2025-05-31 20:52:58 | INFO  | Task de780906-fd56-4662-a3b4-6e7bcd0c6c91 is in state STARTED
2025-05-31 20:52:58.069263 | orchestrator | 2025-05-31 20:52:58 | INFO  | Task a79a1b73-0272-4f3e-bd51-e4a0c0574286 is in state STARTED
2025-05-31 20:52:58.074833 | orchestrator | 2025-05-31 20:52:58 | INFO  | Task a0bdcbf6-9613-4f1a-8dfe-6210c4f9a59a is in state STARTED
2025-05-31 20:52:58.076913 | orchestrator | 2025-05-31 20:52:58 | INFO  | Task 9e7110b6-7168-4ac4-a78a-010bf834459f is in state STARTED
2025-05-31 20:52:58.080420 | orchestrator | 2025-05-31 20:52:58 | INFO  | Task 71c06ffc-5872-4e70-bcb8-0731f25cc20a is in state STARTED
2025-05-31 20:52:58.084449 | orchestrator | 2025-05-31 20:52:58 | INFO  | Task 66d4f3d3-efcd-4308-b898-04305870110d is in state STARTED
2025-05-31 20:52:58.087195 | orchestrator | 2025-05-31 20:52:58 | INFO  | Task 548ac3b1-5047-417d-b1ba-517bdaea8bf1 is in state STARTED
2025-05-31 20:52:58.087228 | orchestrator | 2025-05-31 20:52:58 | INFO  | Wait 1 second(s) until the next check
2025-05-31 20:53:01.131316 | orchestrator | 2025-05-31 20:53:01 | INFO  | Task de780906-fd56-4662-a3b4-6e7bcd0c6c91 is in state STARTED
2025-05-31 20:53:01.131467 | orchestrator | 2025-05-31 20:53:01 | INFO  | Task a79a1b73-0272-4f3e-bd51-e4a0c0574286 is in state STARTED
2025-05-31 20:53:01.131558 | orchestrator | 2025-05-31 20:53:01 | INFO  | Task a0bdcbf6-9613-4f1a-8dfe-6210c4f9a59a is in state STARTED
2025-05-31 20:53:01.133027 | orchestrator | 2025-05-31 20:53:01 | INFO  | Task 9e7110b6-7168-4ac4-a78a-010bf834459f is in state STARTED
2025-05-31 20:53:01.133071 | orchestrator | 2025-05-31 20:53:01 | INFO  | Task 71c06ffc-5872-4e70-bcb8-0731f25cc20a is in state STARTED
2025-05-31 20:53:01.133807 | orchestrator | 2025-05-31 20:53:01 | INFO  | Task 66d4f3d3-efcd-4308-b898-04305870110d is in state STARTED
2025-05-31 20:53:01.134624 | orchestrator | 2025-05-31 20:53:01 | INFO  | Task 548ac3b1-5047-417d-b1ba-517bdaea8bf1 is in state STARTED
2025-05-31 20:53:01.134654 | orchestrator | 2025-05-31 20:53:01 | INFO  | Wait 1 second(s) until the next check
2025-05-31 20:53:04.192758 | orchestrator | 2025-05-31 20:53:04 | INFO  | Task de780906-fd56-4662-a3b4-6e7bcd0c6c91 is in state STARTED
2025-05-31 20:53:04.192900 | orchestrator | 2025-05-31 20:53:04 | INFO  | Task a79a1b73-0272-4f3e-bd51-e4a0c0574286 is in state SUCCESS
2025-05-31 20:53:04.195355 | orchestrator | 2025-05-31 20:53:04 | INFO  | Task a0bdcbf6-9613-4f1a-8dfe-6210c4f9a59a is in state STARTED
2025-05-31 20:53:04.198838 | orchestrator | 2025-05-31 20:53:04 | INFO  | Task 9e7110b6-7168-4ac4-a78a-010bf834459f is in state STARTED
2025-05-31 20:53:04.202512 | orchestrator | 2025-05-31 20:53:04 | INFO  | Task 71c06ffc-5872-4e70-bcb8-0731f25cc20a is in state STARTED
2025-05-31 20:53:04.205382 | orchestrator | 2025-05-31 20:53:04 | INFO  | Task 66d4f3d3-efcd-4308-b898-04305870110d is in state STARTED
2025-05-31 20:53:04.207135 | orchestrator | 2025-05-31 20:53:04 | INFO  | Task 548ac3b1-5047-417d-b1ba-517bdaea8bf1 is in state STARTED
2025-05-31 20:53:04.207182 | orchestrator | 2025-05-31 20:53:04 | INFO  | Wait 1 second(s) until the next check
2025-05-31 20:53:07.249989 | orchestrator | 2025-05-31 20:53:07 | INFO  | Task de780906-fd56-4662-a3b4-6e7bcd0c6c91 is in state STARTED
2025-05-31 20:53:07.252220 | orchestrator | 2025-05-31 20:53:07 | INFO  | Task a0bdcbf6-9613-4f1a-8dfe-6210c4f9a59a is in state STARTED
2025-05-31 20:53:07.259471 | orchestrator | 2025-05-31 20:53:07 | INFO  | Task 9e7110b6-7168-4ac4-a78a-010bf834459f is in state STARTED
2025-05-31 20:53:07.268223 | orchestrator | 2025-05-31 20:53:07 | INFO  | Task 71c06ffc-5872-4e70-bcb8-0731f25cc20a is in state STARTED
2025-05-31 20:53:07.273294 | orchestrator | 2025-05-31 20:53:07 | INFO  | Task 66d4f3d3-efcd-4308-b898-04305870110d is in state STARTED
2025-05-31 20:53:07.278747 | orchestrator | 2025-05-31 20:53:07 | INFO  | Task 548ac3b1-5047-417d-b1ba-517bdaea8bf1 is in state STARTED
2025-05-31 20:53:07.278816 | orchestrator | 2025-05-31 20:53:07 | INFO  | Wait 1 second(s) until the next check
2025-05-31 20:53:10.337294 | orchestrator | 2025-05-31 20:53:10 | INFO  | Task de780906-fd56-4662-a3b4-6e7bcd0c6c91 is in state STARTED
2025-05-31 20:53:10.337450 | orchestrator | 2025-05-31 20:53:10 | INFO  | Task a0bdcbf6-9613-4f1a-8dfe-6210c4f9a59a is in state STARTED
2025-05-31 20:53:10.337622 | orchestrator | 2025-05-31 20:53:10 | INFO  | Task 9e7110b6-7168-4ac4-a78a-010bf834459f is in state STARTED
2025-05-31 20:53:10.337689 | orchestrator | 2025-05-31 20:53:10 | INFO  | Task 71c06ffc-5872-4e70-bcb8-0731f25cc20a is in state STARTED
2025-05-31 20:53:10.338629 | orchestrator | 2025-05-31 20:53:10 | INFO  | Task 66d4f3d3-efcd-4308-b898-04305870110d is in state STARTED
2025-05-31 20:53:10.342456 | orchestrator | 2025-05-31 20:53:10 | INFO  | Task 548ac3b1-5047-417d-b1ba-517bdaea8bf1 is in state STARTED
2025-05-31 20:53:10.342502 | orchestrator | 2025-05-31 20:53:10 | INFO  | Wait 1 second(s) until the next check
2025-05-31 20:53:13.411897 | orchestrator | 2025-05-31 20:53:13 | INFO  | Task de780906-fd56-4662-a3b4-6e7bcd0c6c91 is in state STARTED
2025-05-31 20:53:13.411989 | orchestrator | 2025-05-31 20:53:13 | INFO  | Task a0bdcbf6-9613-4f1a-8dfe-6210c4f9a59a is in state STARTED
2025-05-31 20:53:13.414636 | orchestrator | 2025-05-31 20:53:13 | INFO  | Task 9e7110b6-7168-4ac4-a78a-010bf834459f is in state STARTED
2025-05-31 20:53:13.415186 | orchestrator | 2025-05-31 20:53:13 | INFO  | Task 71c06ffc-5872-4e70-bcb8-0731f25cc20a is in state STARTED
2025-05-31 20:53:13.416065 | orchestrator | 2025-05-31 20:53:13 | INFO  | Task 66d4f3d3-efcd-4308-b898-04305870110d is in state STARTED
2025-05-31 20:53:13.416784 | orchestrator | 2025-05-31 20:53:13 | INFO  | Task 548ac3b1-5047-417d-b1ba-517bdaea8bf1 is in state STARTED
2025-05-31 20:53:13.416921 | orchestrator | 2025-05-31 20:53:13 | INFO  | Wait 1 second(s) until the next check
2025-05-31 20:53:16.467603 | orchestrator | 2025-05-31 20:53:16 | INFO  | Task de780906-fd56-4662-a3b4-6e7bcd0c6c91 is in state STARTED
2025-05-31 20:53:16.467712 | orchestrator | 2025-05-31 20:53:16 | INFO  | Task a0bdcbf6-9613-4f1a-8dfe-6210c4f9a59a is in state STARTED
2025-05-31 20:53:16.468392 | orchestrator | 2025-05-31 20:53:16 | INFO  | Task 9e7110b6-7168-4ac4-a78a-010bf834459f is in state STARTED
2025-05-31 20:53:16.471311 | orchestrator | 2025-05-31 20:53:16 | INFO  | Task 71c06ffc-5872-4e70-bcb8-0731f25cc20a is in state STARTED
2025-05-31 20:53:16.474191 | orchestrator | 2025-05-31 20:53:16 | INFO  | Task 66d4f3d3-efcd-4308-b898-04305870110d is in state SUCCESS
2025-05-31 20:53:16.477064 | orchestrator | 2025-05-31 20:53:16 | INFO  | Task 548ac3b1-5047-417d-b1ba-517bdaea8bf1 is in state STARTED
2025-05-31 20:53:16.477118 | orchestrator | 2025-05-31 20:53:16 | INFO  | Wait 1 second(s) until the next check
2025-05-31 20:53:19.532223 | orchestrator | 2025-05-31 20:53:19 | INFO  | Task de780906-fd56-4662-a3b4-6e7bcd0c6c91 is in state STARTED
2025-05-31 20:53:19.535647 | orchestrator | 2025-05-31 20:53:19 | INFO  | Task a0bdcbf6-9613-4f1a-8dfe-6210c4f9a59a is in state STARTED
2025-05-31 20:53:19.538603 | orchestrator | 2025-05-31 20:53:19 | INFO  | Task 9e7110b6-7168-4ac4-a78a-010bf834459f is in state STARTED
2025-05-31 20:53:19.538942 | orchestrator | 2025-05-31 20:53:19 | INFO  | Task 71c06ffc-5872-4e70-bcb8-0731f25cc20a is in state STARTED
2025-05-31 20:53:19.540384 | orchestrator | 2025-05-31 20:53:19 | INFO  | Task 548ac3b1-5047-417d-b1ba-517bdaea8bf1 is in state STARTED
2025-05-31 20:53:19.540410 | orchestrator | 2025-05-31 20:53:19 | INFO  | Wait 1 second(s) until the next check
2025-05-31 20:53:22.585549 | orchestrator | 2025-05-31 20:53:22 | INFO  | Task de780906-fd56-4662-a3b4-6e7bcd0c6c91 is in state STARTED
2025-05-31 20:53:22.586290 | orchestrator | 2025-05-31 20:53:22 | INFO  | Task a0bdcbf6-9613-4f1a-8dfe-6210c4f9a59a is in state STARTED
2025-05-31 20:53:22.587506 | orchestrator | 2025-05-31 20:53:22 | INFO  | Task 9e7110b6-7168-4ac4-a78a-010bf834459f is in state STARTED
2025-05-31 20:53:22.590345 | orchestrator | 2025-05-31 20:53:22 | INFO  | Task 71c06ffc-5872-4e70-bcb8-0731f25cc20a is in state STARTED
2025-05-31 20:53:22.590547 | orchestrator | 2025-05-31 20:53:22 | INFO  | Task 548ac3b1-5047-417d-b1ba-517bdaea8bf1 is in state STARTED
2025-05-31 20:53:22.591148 | orchestrator | 2025-05-31 20:53:22 | INFO  | Wait 1 second(s) until the next check
2025-05-31 20:53:25.628138 | orchestrator | 2025-05-31 20:53:25 | INFO  | Task de780906-fd56-4662-a3b4-6e7bcd0c6c91 is in state STARTED
2025-05-31 20:53:25.629544 | orchestrator | 2025-05-31 20:53:25 | INFO  | Task a0bdcbf6-9613-4f1a-8dfe-6210c4f9a59a is in state STARTED
2025-05-31 20:53:25.631037 | orchestrator | 2025-05-31 20:53:25 | INFO  | Task 9e7110b6-7168-4ac4-a78a-010bf834459f is in state STARTED
2025-05-31 20:53:25.632586 | orchestrator | 2025-05-31 20:53:25 | INFO  | Task 71c06ffc-5872-4e70-bcb8-0731f25cc20a is in state STARTED
2025-05-31 20:53:25.633234 | orchestrator | 2025-05-31 20:53:25 | INFO  | Task 548ac3b1-5047-417d-b1ba-517bdaea8bf1 is in state STARTED
2025-05-31 20:53:25.633248 | orchestrator | 2025-05-31 20:53:25 | INFO  | Wait 1 second(s) until the next check
2025-05-31 20:53:28.676847 | orchestrator | 2025-05-31 20:53:28 | INFO  | Task de780906-fd56-4662-a3b4-6e7bcd0c6c91 is in state STARTED
2025-05-31 20:53:28.677543 | orchestrator | 2025-05-31 20:53:28 | INFO  | Task a0bdcbf6-9613-4f1a-8dfe-6210c4f9a59a is in state STARTED
2025-05-31 20:53:28.678498 | orchestrator | 2025-05-31 20:53:28 | INFO  | Task 9e7110b6-7168-4ac4-a78a-010bf834459f is in state STARTED
2025-05-31 20:53:28.680914 | orchestrator | 2025-05-31 20:53:28 | INFO  | Task 71c06ffc-5872-4e70-bcb8-0731f25cc20a is in state STARTED
2025-05-31 20:53:28.681007 | orchestrator | 2025-05-31 20:53:28 | INFO  | Task 548ac3b1-5047-417d-b1ba-517bdaea8bf1 is in state STARTED
2025-05-31 20:53:28.681033 | orchestrator | 2025-05-31 20:53:28 | INFO  | Wait 1 second(s) until the next check
2025-05-31 20:53:31.721893 | orchestrator |
2025-05-31 20:53:31.721975 | orchestrator |
2025-05-31 20:53:31.721986 | orchestrator | PLAY [Apply role homer] ********************************************************
2025-05-31 20:53:31.721994 | orchestrator |
2025-05-31 20:53:31.722002 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] ***
2025-05-31 20:53:31.722011 | orchestrator | Saturday 31 May 2025 20:52:28 +0000 (0:00:01.000) 0:00:01.000 **********
2025-05-31 20:53:31.722083 | orchestrator | ok: [testbed-manager] => {
2025-05-31 20:53:31.722092 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter."
2025-05-31 20:53:31.722101 | orchestrator | }
2025-05-31 20:53:31.722108 | orchestrator |
2025-05-31 20:53:31.722114 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2025-05-31 20:53:31.722121 | orchestrator | Saturday 31 May 2025 20:52:29 +0000 (0:00:00.402) 0:00:01.402 **********
2025-05-31 20:53:31.722129 | orchestrator | ok: [testbed-manager]
2025-05-31 20:53:31.722137 | orchestrator |
2025-05-31 20:53:31.722144 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2025-05-31 20:53:31.722150 | orchestrator | Saturday 31 May 2025 20:52:30 +0000 (0:00:01.145) 0:00:02.547 **********
2025-05-31 20:53:31.722157 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2025-05-31 20:53:31.722164 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)
2025-05-31 20:53:31.722172 | orchestrator |
2025-05-31 20:53:31.722179 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2025-05-31 20:53:31.722187 | orchestrator | Saturday 31 May 2025 20:52:31 +0000 (0:00:01.435) 0:00:03.983 **********
2025-05-31 20:53:31.722196 | orchestrator | changed: [testbed-manager]
2025-05-31 20:53:31.722203 | orchestrator |
2025-05-31 20:53:31.722211 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] *********************
2025-05-31 20:53:31.722218 | orchestrator | Saturday 31 May 2025 20:52:34 +0000 (0:00:02.406) 0:00:06.389 **********
2025-05-31 20:53:31.722225 | orchestrator | changed: [testbed-manager]
2025-05-31 20:53:31.722233 | orchestrator |
2025-05-31 20:53:31.722240 | orchestrator | TASK [osism.services.homer : Manage homer service] *****************************
2025-05-31 20:53:31.722248 | orchestrator | Saturday 31 May 2025 20:52:36 +0000 (0:00:02.034) 0:00:08.423 **********
2025-05-31 20:53:31.722256 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
2025-05-31 20:53:31.722264 | orchestrator | ok: [testbed-manager] 2025-05-31 20:53:31.722271 | orchestrator | 2025-05-31 20:53:31.722279 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] ***************** 2025-05-31 20:53:31.722293 | orchestrator | Saturday 31 May 2025 20:53:00 +0000 (0:00:24.418) 0:00:32.841 ********** 2025-05-31 20:53:31.722301 | orchestrator | changed: [testbed-manager] 2025-05-31 20:53:31.722309 | orchestrator | 2025-05-31 20:53:31.722333 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-31 20:53:31.722342 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-31 20:53:31.722351 | orchestrator | 2025-05-31 20:53:31.722359 | orchestrator | 2025-05-31 20:53:31.722366 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-31 20:53:31.722399 | orchestrator | Saturday 31 May 2025 20:53:02 +0000 (0:00:02.063) 0:00:34.905 ********** 2025-05-31 20:53:31.722406 | orchestrator | =============================================================================== 2025-05-31 20:53:31.722411 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 24.42s 2025-05-31 20:53:31.722418 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.41s 2025-05-31 20:53:31.722424 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 2.06s 2025-05-31 20:53:31.722429 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 2.03s 2025-05-31 20:53:31.722435 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.44s 2025-05-31 20:53:31.722442 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.15s 2025-05-31 20:53:31.722447 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.40s 2025-05-31 20:53:31.722455 | orchestrator | 2025-05-31 20:53:31.722462 | orchestrator | 2025-05-31 20:53:31.722470 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2025-05-31 20:53:31.722478 | orchestrator | 2025-05-31 20:53:31.722487 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2025-05-31 20:53:31.722494 | orchestrator | Saturday 31 May 2025 20:52:27 +0000 (0:00:00.487) 0:00:00.487 ********** 2025-05-31 20:53:31.722504 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2025-05-31 20:53:31.722513 | orchestrator | 2025-05-31 20:53:31.722522 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2025-05-31 20:53:31.722530 | orchestrator | Saturday 31 May 2025 20:52:28 +0000 (0:00:00.443) 0:00:00.931 ********** 2025-05-31 20:53:31.722538 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2025-05-31 20:53:31.722546 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2025-05-31 20:53:31.722555 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2025-05-31 20:53:31.722564 | orchestrator | 2025-05-31 20:53:31.722574 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] *********** 2025-05-31 
20:53:31.722584 | orchestrator | Saturday 31 May 2025 20:52:30 +0000 (0:00:02.080) 0:00:03.011 ********** 2025-05-31 20:53:31.722594 | orchestrator | changed: [testbed-manager] 2025-05-31 20:53:31.722605 | orchestrator | 2025-05-31 20:53:31.722614 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2025-05-31 20:53:31.722622 | orchestrator | Saturday 31 May 2025 20:52:33 +0000 (0:00:02.695) 0:00:05.706 ********** 2025-05-31 20:53:31.722649 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2025-05-31 20:53:31.722661 | orchestrator | ok: [testbed-manager] 2025-05-31 20:53:31.722672 | orchestrator | 2025-05-31 20:53:31.722681 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2025-05-31 20:53:31.722692 | orchestrator | Saturday 31 May 2025 20:53:08 +0000 (0:00:35.212) 0:00:40.919 ********** 2025-05-31 20:53:31.722702 | orchestrator | changed: [testbed-manager] 2025-05-31 20:53:31.722712 | orchestrator | 2025-05-31 20:53:31.722723 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2025-05-31 20:53:31.722732 | orchestrator | Saturday 31 May 2025 20:53:09 +0000 (0:00:00.792) 0:00:41.711 ********** 2025-05-31 20:53:31.722743 | orchestrator | ok: [testbed-manager] 2025-05-31 20:53:31.722753 | orchestrator | 2025-05-31 20:53:31.722763 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2025-05-31 20:53:31.722781 | orchestrator | Saturday 31 May 2025 20:53:09 +0000 (0:00:00.883) 0:00:42.594 ********** 2025-05-31 20:53:31.722789 | orchestrator | changed: [testbed-manager] 2025-05-31 20:53:31.722797 | orchestrator | 2025-05-31 20:53:31.722804 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 2025-05-31 20:53:31.722813 | orchestrator | Saturday 31 May 2025 20:53:12 +0000 (0:00:02.150) 0:00:44.744 ********** 2025-05-31 20:53:31.722821 | orchestrator | changed: [testbed-manager] 2025-05-31 20:53:31.722828 | orchestrator | 2025-05-31 20:53:31.722835 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2025-05-31 20:53:31.722842 | orchestrator | Saturday 31 May 2025 20:53:13 +0000 (0:00:01.286) 0:00:46.031 ********** 2025-05-31 20:53:31.722849 | orchestrator | changed: [testbed-manager] 2025-05-31 20:53:31.722856 | orchestrator | 2025-05-31 20:53:31.722863 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2025-05-31 20:53:31.722870 | orchestrator | Saturday 31 May 2025 20:53:14 +0000 (0:00:00.758) 0:00:46.789 ********** 2025-05-31 20:53:31.722877 | orchestrator | ok: [testbed-manager] 2025-05-31 20:53:31.722884 | orchestrator | 2025-05-31 20:53:31.722891 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-31 20:53:31.722898 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-31 20:53:31.722906 | orchestrator | 2025-05-31 20:53:31.722913 | orchestrator | 2025-05-31 20:53:31.722920 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-31 20:53:31.722927 | orchestrator | Saturday 31 May 2025 20:53:14 +0000 (0:00:00.363) 0:00:47.152 ********** 2025-05-31 20:53:31.722938 | orchestrator | 
=============================================================================== 2025-05-31 20:53:31.722945 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 35.21s 2025-05-31 20:53:31.722951 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.70s 2025-05-31 20:53:31.722957 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 2.15s 2025-05-31 20:53:31.722963 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.08s 2025-05-31 20:53:31.722969 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.29s 2025-05-31 20:53:31.722975 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.88s 2025-05-31 20:53:31.722981 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 0.79s 2025-05-31 20:53:31.722988 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.76s 2025-05-31 20:53:31.722994 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.44s 2025-05-31 20:53:31.723000 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.36s 2025-05-31 20:53:31.723006 | orchestrator | 2025-05-31 20:53:31.723011 | orchestrator | 2025-05-31 20:53:31.723017 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-31 20:53:31.723022 | orchestrator | 2025-05-31 20:53:31.723028 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-31 20:53:31.723034 | orchestrator | Saturday 31 May 2025 20:52:27 +0000 (0:00:00.251) 0:00:00.251 ********** 2025-05-31 20:53:31.723039 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2025-05-31 20:53:31.723045 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2025-05-31 20:53:31.723051 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2025-05-31 20:53:31.723057 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 2025-05-31 20:53:31.723064 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2025-05-31 20:53:31.723070 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2025-05-31 20:53:31.723077 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2025-05-31 20:53:31.723090 | orchestrator | 2025-05-31 20:53:31.723096 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2025-05-31 20:53:31.723103 | orchestrator | 2025-05-31 20:53:31.723110 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2025-05-31 20:53:31.723116 | orchestrator | Saturday 31 May 2025 20:52:29 +0000 (0:00:02.000) 0:00:02.252 ********** 2025-05-31 20:53:31.723132 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-31 20:53:31.723140 | orchestrator | 2025-05-31 20:53:31.723146 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2025-05-31 20:53:31.723151 | orchestrator | Saturday 31 May 2025 20:52:31 +0000 (0:00:02.709) 0:00:04.961 ********** 2025-05-31 
20:53:31.723158 | orchestrator | ok: [testbed-manager] 2025-05-31 20:53:31.723164 | orchestrator | ok: [testbed-node-0] 2025-05-31 20:53:31.723171 | orchestrator | ok: [testbed-node-1] 2025-05-31 20:53:31.723177 | orchestrator | ok: [testbed-node-2] 2025-05-31 20:53:31.723184 | orchestrator | ok: [testbed-node-3] 2025-05-31 20:53:31.723199 | orchestrator | ok: [testbed-node-4] 2025-05-31 20:53:31.723206 | orchestrator | ok: [testbed-node-5] 2025-05-31 20:53:31.723212 | orchestrator | 2025-05-31 20:53:31.723219 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2025-05-31 20:53:31.723226 | orchestrator | Saturday 31 May 2025 20:52:34 +0000 (0:00:02.434) 0:00:07.395 ********** 2025-05-31 20:53:31.723233 | orchestrator | ok: [testbed-manager] 2025-05-31 20:53:31.723240 | orchestrator | ok: [testbed-node-1] 2025-05-31 20:53:31.723247 | orchestrator | ok: [testbed-node-2] 2025-05-31 20:53:31.723253 | orchestrator | ok: [testbed-node-0] 2025-05-31 20:53:31.723259 | orchestrator | ok: [testbed-node-3] 2025-05-31 20:53:31.723266 | orchestrator | ok: [testbed-node-4] 2025-05-31 20:53:31.723272 | orchestrator | ok: [testbed-node-5] 2025-05-31 20:53:31.723279 | orchestrator | 2025-05-31 20:53:31.723286 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2025-05-31 20:53:31.723292 | orchestrator | Saturday 31 May 2025 20:52:37 +0000 (0:00:03.600) 0:00:10.996 ********** 2025-05-31 20:53:31.723299 | orchestrator | changed: [testbed-manager] 2025-05-31 20:53:31.723305 | orchestrator | changed: [testbed-node-0] 2025-05-31 20:53:31.723312 | orchestrator | changed: [testbed-node-1] 2025-05-31 20:53:31.723318 | orchestrator | changed: [testbed-node-2] 2025-05-31 20:53:31.723325 | orchestrator | changed: [testbed-node-3] 2025-05-31 20:53:31.723331 | orchestrator | changed: [testbed-node-4] 2025-05-31 20:53:31.723337 | orchestrator | changed: [testbed-node-5] 2025-05-31 20:53:31.723344 | orchestrator | 2025-05-31 20:53:31.723350 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2025-05-31 20:53:31.723357 | orchestrator | Saturday 31 May 2025 20:52:40 +0000 (0:00:02.604) 0:00:13.600 ********** 2025-05-31 20:53:31.723363 | orchestrator | changed: [testbed-manager] 2025-05-31 20:53:31.723429 | orchestrator | changed: [testbed-node-1] 2025-05-31 20:53:31.723437 | orchestrator | changed: [testbed-node-3] 2025-05-31 20:53:31.723443 | orchestrator | changed: [testbed-node-2] 2025-05-31 20:53:31.723450 | orchestrator | changed: [testbed-node-4] 2025-05-31 20:53:31.723457 | orchestrator | changed: [testbed-node-0] 2025-05-31 20:53:31.723463 | orchestrator | changed: [testbed-node-5] 2025-05-31 20:53:31.723470 | orchestrator | 2025-05-31 20:53:31.723477 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2025-05-31 20:53:31.723484 | orchestrator | Saturday 31 May 2025 20:52:52 +0000 (0:00:11.838) 0:00:25.438 ********** 2025-05-31 20:53:31.723491 | orchestrator | changed: [testbed-node-5] 2025-05-31 20:53:31.723497 | orchestrator | changed: [testbed-node-4] 2025-05-31 20:53:31.723504 | orchestrator | changed: [testbed-node-3] 2025-05-31 20:53:31.723511 | orchestrator | changed: [testbed-manager] 2025-05-31 20:53:31.723518 | orchestrator | changed: [testbed-node-2] 2025-05-31 20:53:31.723524 | orchestrator | changed: [testbed-node-1] 2025-05-31 20:53:31.723539 | orchestrator | changed: [testbed-node-0] 2025-05-31 20:53:31.723545 | 
orchestrator | 2025-05-31 20:53:31.723560 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2025-05-31 20:53:31.723567 | orchestrator | Saturday 31 May 2025 20:53:08 +0000 (0:00:16.240) 0:00:41.679 ********** 2025-05-31 20:53:31.723574 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-31 20:53:31.723583 | orchestrator | 2025-05-31 20:53:31.723589 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2025-05-31 20:53:31.723595 | orchestrator | Saturday 31 May 2025 20:53:10 +0000 (0:00:01.879) 0:00:43.558 ********** 2025-05-31 20:53:31.723602 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2025-05-31 20:53:31.723608 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2025-05-31 20:53:31.723615 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2025-05-31 20:53:31.723622 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2025-05-31 20:53:31.723629 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2025-05-31 20:53:31.723635 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2025-05-31 20:53:31.723641 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2025-05-31 20:53:31.723648 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2025-05-31 20:53:31.723654 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2025-05-31 20:53:31.723661 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2025-05-31 20:53:31.723667 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2025-05-31 20:53:31.723674 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2025-05-31 20:53:31.723680 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2025-05-31 20:53:31.723687 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2025-05-31 20:53:31.723694 | orchestrator | 2025-05-31 20:53:31.723701 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2025-05-31 20:53:31.723709 | orchestrator | Saturday 31 May 2025 20:53:15 +0000 (0:00:04.946) 0:00:48.504 ********** 2025-05-31 20:53:31.723716 | orchestrator | ok: [testbed-manager] 2025-05-31 20:53:31.723722 | orchestrator | ok: [testbed-node-0] 2025-05-31 20:53:31.723729 | orchestrator | ok: [testbed-node-1] 2025-05-31 20:53:31.723735 | orchestrator | ok: [testbed-node-2] 2025-05-31 20:53:31.723741 | orchestrator | ok: [testbed-node-3] 2025-05-31 20:53:31.723747 | orchestrator | ok: [testbed-node-4] 2025-05-31 20:53:31.723754 | orchestrator | ok: [testbed-node-5] 2025-05-31 20:53:31.723760 | orchestrator | 2025-05-31 20:53:31.723767 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2025-05-31 20:53:31.723773 | orchestrator | Saturday 31 May 2025 20:53:16 +0000 (0:00:00.952) 0:00:49.457 ********** 2025-05-31 20:53:31.723780 | orchestrator | changed: [testbed-manager] 2025-05-31 20:53:31.723786 | orchestrator | changed: [testbed-node-0] 2025-05-31 20:53:31.723793 | orchestrator | changed: [testbed-node-1] 2025-05-31 20:53:31.723799 | orchestrator | changed: [testbed-node-2] 2025-05-31 20:53:31.723806 | orchestrator | changed: [testbed-node-3] 2025-05-31 20:53:31.723813 | orchestrator | 
changed: [testbed-node-4] 2025-05-31 20:53:31.723819 | orchestrator | changed: [testbed-node-5] 2025-05-31 20:53:31.723826 | orchestrator | 2025-05-31 20:53:31.723833 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] *************** 2025-05-31 20:53:31.723848 | orchestrator | Saturday 31 May 2025 20:53:18 +0000 (0:00:02.111) 0:00:51.569 ********** 2025-05-31 20:53:31.723854 | orchestrator | ok: [testbed-node-0] 2025-05-31 20:53:31.723861 | orchestrator | ok: [testbed-node-1] 2025-05-31 20:53:31.723867 | orchestrator | ok: [testbed-node-2] 2025-05-31 20:53:31.723874 | orchestrator | ok: [testbed-manager] 2025-05-31 20:53:31.723880 | orchestrator | ok: [testbed-node-3] 2025-05-31 20:53:31.723887 | orchestrator | ok: [testbed-node-4] 2025-05-31 20:53:31.723899 | orchestrator | ok: [testbed-node-5] 2025-05-31 20:53:31.723905 | orchestrator | 2025-05-31 20:53:31.723912 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2025-05-31 20:53:31.723918 | orchestrator | Saturday 31 May 2025 20:53:19 +0000 (0:00:01.367) 0:00:52.937 ********** 2025-05-31 20:53:31.723925 | orchestrator | ok: [testbed-manager] 2025-05-31 20:53:31.723931 | orchestrator | ok: [testbed-node-1] 2025-05-31 20:53:31.723938 | orchestrator | ok: [testbed-node-2] 2025-05-31 20:53:31.723944 | orchestrator | ok: [testbed-node-3] 2025-05-31 20:53:31.723951 | orchestrator | ok: [testbed-node-0] 2025-05-31 20:53:31.723957 | orchestrator | ok: [testbed-node-4] 2025-05-31 20:53:31.723964 | orchestrator | ok: [testbed-node-5] 2025-05-31 20:53:31.723971 | orchestrator | 2025-05-31 20:53:31.723978 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2025-05-31 20:53:31.723984 | orchestrator | Saturday 31 May 2025 20:53:22 +0000 (0:00:02.244) 0:00:55.182 ********** 2025-05-31 20:53:31.723991 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2025-05-31 20:53:31.723999 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-31 20:53:31.724006 | orchestrator | 2025-05-31 20:53:31.724012 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2025-05-31 20:53:31.724019 | orchestrator | Saturday 31 May 2025 20:53:23 +0000 (0:00:01.465) 0:00:56.647 ********** 2025-05-31 20:53:31.724026 | orchestrator | changed: [testbed-manager] 2025-05-31 20:53:31.724032 | orchestrator | 2025-05-31 20:53:31.724039 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2025-05-31 20:53:31.724045 | orchestrator | Saturday 31 May 2025 20:53:25 +0000 (0:00:02.203) 0:00:58.851 ********** 2025-05-31 20:53:31.724052 | orchestrator | changed: [testbed-manager] 2025-05-31 20:53:31.724059 | orchestrator | changed: [testbed-node-3] 2025-05-31 20:53:31.724066 | orchestrator | changed: [testbed-node-0] 2025-05-31 20:53:31.724072 | orchestrator | changed: [testbed-node-2] 2025-05-31 20:53:31.724079 | orchestrator | changed: [testbed-node-4] 2025-05-31 20:53:31.724085 | orchestrator | changed: [testbed-node-5] 2025-05-31 20:53:31.724091 | orchestrator | changed: [testbed-node-1] 2025-05-31 20:53:31.724098 | orchestrator | 2025-05-31 20:53:31.724104 | orchestrator | PLAY RECAP 
********************************************************************* 2025-05-31 20:53:31.724111 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-31 20:53:31.724143 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-31 20:53:31.724149 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-31 20:53:31.724155 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-31 20:53:31.724162 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-31 20:53:31.724168 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-31 20:53:31.724175 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-31 20:53:31.724181 | orchestrator | 2025-05-31 20:53:31.724188 | orchestrator | 2025-05-31 20:53:31.724194 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-31 20:53:31.724201 | orchestrator | Saturday 31 May 2025 20:53:28 +0000 (0:00:03.084) 0:01:01.935 ********** 2025-05-31 20:53:31.724213 | orchestrator | =============================================================================== 2025-05-31 20:53:31.724220 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 16.24s 2025-05-31 20:53:31.724226 | orchestrator | osism.services.netdata : Add repository -------------------------------- 11.84s 2025-05-31 20:53:31.724233 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 4.95s 2025-05-31 20:53:31.724239 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.60s 2025-05-31 20:53:31.724246 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.08s 2025-05-31 20:53:31.724253 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 2.71s 2025-05-31 20:53:31.724260 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.60s 2025-05-31 20:53:31.724267 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.43s 2025-05-31 20:53:31.724273 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.24s 2025-05-31 20:53:31.724280 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.20s 2025-05-31 20:53:31.724287 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 2.11s 2025-05-31 20:53:31.724299 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.00s 2025-05-31 20:53:31.724306 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.88s 2025-05-31 20:53:31.724313 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.47s 2025-05-31 20:53:31.724319 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.37s 2025-05-31 20:53:31.724325 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 0.95s 2025-05-31 20:53:31.724361 | orchestrator | 2025-05-31 20:53:31 | INFO  | Task 
de780906-fd56-4662-a3b4-6e7bcd0c6c91 is in state SUCCESS 2025-05-31 20:53:31.724368 | orchestrator | 2025-05-31 20:53:31 | INFO  | Task a0bdcbf6-9613-4f1a-8dfe-6210c4f9a59a is in state STARTED 2025-05-31 20:53:31.724525 | orchestrator | 2025-05-31 20:53:31 | INFO  | Task 9e7110b6-7168-4ac4-a78a-010bf834459f is in state STARTED 2025-05-31 20:53:31.725044 | orchestrator | 2025-05-31 20:53:31 | INFO  | Task 71c06ffc-5872-4e70-bcb8-0731f25cc20a is in state STARTED 2025-05-31 20:53:31.726311 | orchestrator | 2025-05-31 20:53:31 | INFO  | Task 548ac3b1-5047-417d-b1ba-517bdaea8bf1 is in state STARTED 2025-05-31 20:53:31.726448 | orchestrator | 2025-05-31 20:53:31 | INFO  | Wait 1 second(s) until the next check 2025-05-31 20:53:34.756759 | orchestrator | 2025-05-31 20:53:34 | INFO  | Task a0bdcbf6-9613-4f1a-8dfe-6210c4f9a59a is in state STARTED 2025-05-31 20:53:34.757130 | orchestrator | 2025-05-31 20:53:34 | INFO  | Task 9e7110b6-7168-4ac4-a78a-010bf834459f is in state STARTED 2025-05-31 20:53:34.759375 | orchestrator | 2025-05-31 20:53:34 | INFO  | Task 71c06ffc-5872-4e70-bcb8-0731f25cc20a is in state STARTED 2025-05-31 20:53:34.761947 | orchestrator | 2025-05-31 20:53:34 | INFO  | Task 548ac3b1-5047-417d-b1ba-517bdaea8bf1 is in state STARTED 2025-05-31 20:53:34.761977 | orchestrator | 2025-05-31 20:53:34 | INFO  | Wait 1 second(s) until the next check 2025-05-31 20:53:37.814232 | orchestrator | 2025-05-31 20:53:37 | INFO  | Task a0bdcbf6-9613-4f1a-8dfe-6210c4f9a59a is in state STARTED 2025-05-31 20:53:37.814340 | orchestrator | 2025-05-31 20:53:37 | INFO  | Task 9e7110b6-7168-4ac4-a78a-010bf834459f is in state STARTED 2025-05-31 20:53:37.817008 | orchestrator | 2025-05-31 20:53:37 | INFO  | Task 71c06ffc-5872-4e70-bcb8-0731f25cc20a is in state STARTED 2025-05-31 20:53:37.818302 | orchestrator | 2025-05-31 20:53:37 | INFO  | Task 548ac3b1-5047-417d-b1ba-517bdaea8bf1 is in state STARTED 2025-05-31 20:53:37.818358 | orchestrator | 2025-05-31 20:53:37 | INFO  | Wait 1 second(s) until the next check 2025-05-31 20:53:40.871309 | orchestrator | 2025-05-31 20:53:40 | INFO  | Task a0bdcbf6-9613-4f1a-8dfe-6210c4f9a59a is in state STARTED 2025-05-31 20:53:40.872864 | orchestrator | 2025-05-31 20:53:40 | INFO  | Task 9e7110b6-7168-4ac4-a78a-010bf834459f is in state STARTED 2025-05-31 20:53:40.876256 | orchestrator | 2025-05-31 20:53:40 | INFO  | Task 71c06ffc-5872-4e70-bcb8-0731f25cc20a is in state STARTED 2025-05-31 20:53:40.876334 | orchestrator | 2025-05-31 20:53:40 | INFO  | Task 548ac3b1-5047-417d-b1ba-517bdaea8bf1 is in state STARTED 2025-05-31 20:53:40.877460 | orchestrator | 2025-05-31 20:53:40 | INFO  | Wait 1 second(s) until the next check 2025-05-31 20:53:43.920240 | orchestrator | 2025-05-31 20:53:43 | INFO  | Task a0bdcbf6-9613-4f1a-8dfe-6210c4f9a59a is in state STARTED 2025-05-31 20:53:43.920728 | orchestrator | 2025-05-31 20:53:43 | INFO  | Task 9e7110b6-7168-4ac4-a78a-010bf834459f is in state STARTED 2025-05-31 20:53:43.923487 | orchestrator | 2025-05-31 20:53:43 | INFO  | Task 71c06ffc-5872-4e70-bcb8-0731f25cc20a is in state STARTED 2025-05-31 20:53:43.926699 | orchestrator | 2025-05-31 20:53:43 | INFO  | Task 548ac3b1-5047-417d-b1ba-517bdaea8bf1 is in state STARTED 2025-05-31 20:53:43.926757 | orchestrator | 2025-05-31 20:53:43 | INFO  | Wait 1 second(s) until the next check 2025-05-31 20:53:46.973224 | orchestrator | 2025-05-31 20:53:46 | INFO  | Task a0bdcbf6-9613-4f1a-8dfe-6210c4f9a59a is in state STARTED 2025-05-31 20:53:46.977003 | orchestrator | 2025-05-31 20:53:46 | INFO  | Task 
9e7110b6-7168-4ac4-a78a-010bf834459f is in state STARTED 2025-05-31 20:53:46.977059 | orchestrator | 2025-05-31 20:53:46 | INFO  | Task 71c06ffc-5872-4e70-bcb8-0731f25cc20a is in state STARTED 2025-05-31 20:53:46.978686 | orchestrator | 2025-05-31 20:53:46 | INFO  | Task 548ac3b1-5047-417d-b1ba-517bdaea8bf1 is in state STARTED 2025-05-31 20:53:46.978713 | orchestrator | 2025-05-31 20:53:46 | INFO  | Wait 1 second(s) until the next check 2025-05-31 20:53:50.044963 | orchestrator | 2025-05-31 20:53:50 | INFO  | Task a0bdcbf6-9613-4f1a-8dfe-6210c4f9a59a is in state STARTED 2025-05-31 20:53:50.046882 | orchestrator | 2025-05-31 20:53:50 | INFO  | Task 9e7110b6-7168-4ac4-a78a-010bf834459f is in state STARTED 2025-05-31 20:53:50.048343 | orchestrator | 2025-05-31 20:53:50 | INFO  | Task 71c06ffc-5872-4e70-bcb8-0731f25cc20a is in state STARTED 2025-05-31 20:53:50.051991 | orchestrator | 2025-05-31 20:53:50 | INFO  | Task 548ac3b1-5047-417d-b1ba-517bdaea8bf1 is in state STARTED 2025-05-31 20:53:50.052050 | orchestrator | 2025-05-31 20:53:50 | INFO  | Wait 1 second(s) until the next check 2025-05-31 20:53:53.109503 | orchestrator | 2025-05-31 20:53:53 | INFO  | Task a0bdcbf6-9613-4f1a-8dfe-6210c4f9a59a is in state STARTED 2025-05-31 20:53:53.110974 | orchestrator | 2025-05-31 20:53:53 | INFO  | Task 9e7110b6-7168-4ac4-a78a-010bf834459f is in state STARTED 2025-05-31 20:53:53.112418 | orchestrator | 2025-05-31 20:53:53 | INFO  | Task 71c06ffc-5872-4e70-bcb8-0731f25cc20a is in state STARTED 2025-05-31 20:53:53.114061 | orchestrator | 2025-05-31 20:53:53 | INFO  | Task 548ac3b1-5047-417d-b1ba-517bdaea8bf1 is in state STARTED 2025-05-31 20:53:53.114492 | orchestrator | 2025-05-31 20:53:53 | INFO  | Wait 1 second(s) until the next check 2025-05-31 20:53:56.173140 | orchestrator | 2025-05-31 20:53:56 | INFO  | Task a0bdcbf6-9613-4f1a-8dfe-6210c4f9a59a is in state STARTED 2025-05-31 20:53:56.175004 | orchestrator | 2025-05-31 20:53:56 | INFO  | Task 9e7110b6-7168-4ac4-a78a-010bf834459f is in state STARTED 2025-05-31 20:53:56.175729 | orchestrator | 2025-05-31 20:53:56 | INFO  | Task 71c06ffc-5872-4e70-bcb8-0731f25cc20a is in state STARTED 2025-05-31 20:53:56.177246 | orchestrator | 2025-05-31 20:53:56 | INFO  | Task 548ac3b1-5047-417d-b1ba-517bdaea8bf1 is in state STARTED 2025-05-31 20:53:56.177282 | orchestrator | 2025-05-31 20:53:56 | INFO  | Wait 1 second(s) until the next check 2025-05-31 20:53:59.221651 | orchestrator | 2025-05-31 20:53:59 | INFO  | Task a0bdcbf6-9613-4f1a-8dfe-6210c4f9a59a is in state STARTED 2025-05-31 20:53:59.224354 | orchestrator | 2025-05-31 20:53:59 | INFO  | Task 9e7110b6-7168-4ac4-a78a-010bf834459f is in state STARTED 2025-05-31 20:53:59.230232 | orchestrator | 2025-05-31 20:53:59 | INFO  | Task 71c06ffc-5872-4e70-bcb8-0731f25cc20a is in state STARTED 2025-05-31 20:53:59.238343 | orchestrator | 2025-05-31 20:53:59 | INFO  | Task 548ac3b1-5047-417d-b1ba-517bdaea8bf1 is in state STARTED 2025-05-31 20:53:59.238410 | orchestrator | 2025-05-31 20:53:59 | INFO  | Wait 1 second(s) until the next check 2025-05-31 20:54:02.290545 | orchestrator | 2025-05-31 20:54:02 | INFO  | Task a0bdcbf6-9613-4f1a-8dfe-6210c4f9a59a is in state STARTED 2025-05-31 20:54:02.291980 | orchestrator | 2025-05-31 20:54:02 | INFO  | Task 9e7110b6-7168-4ac4-a78a-010bf834459f is in state STARTED 2025-05-31 20:54:02.292947 | orchestrator | 2025-05-31 20:54:02 | INFO  | Task 71c06ffc-5872-4e70-bcb8-0731f25cc20a is in state STARTED 2025-05-31 20:54:02.296511 | orchestrator | 2025-05-31 20:54:02 | INFO  | Task 
548ac3b1-5047-417d-b1ba-517bdaea8bf1 is in state STARTED 2025-05-31 20:54:02.296556 | orchestrator | 2025-05-31 20:54:02 | INFO  | Wait 1 second(s) until the next check 2025-05-31 20:54:05.351362 | orchestrator | 2025-05-31 20:54:05 | INFO  | Task a0bdcbf6-9613-4f1a-8dfe-6210c4f9a59a is in state STARTED 2025-05-31 20:54:05.355353 | orchestrator | 2025-05-31 20:54:05 | INFO  | Task 9e7110b6-7168-4ac4-a78a-010bf834459f is in state STARTED 2025-05-31 20:54:05.357320 | orchestrator | 2025-05-31 20:54:05 | INFO  | Task 71c06ffc-5872-4e70-bcb8-0731f25cc20a is in state STARTED 2025-05-31 20:54:05.359964 | orchestrator | 2025-05-31 20:54:05 | INFO  | Task 548ac3b1-5047-417d-b1ba-517bdaea8bf1 is in state STARTED 2025-05-31 20:54:05.360009 | orchestrator | 2025-05-31 20:54:05 | INFO  | Wait 1 second(s) until the next check 2025-05-31 20:54:08.411030 | orchestrator | 2025-05-31 20:54:08 | INFO  | Task a0bdcbf6-9613-4f1a-8dfe-6210c4f9a59a is in state STARTED 2025-05-31 20:54:08.412569 | orchestrator | 2025-05-31 20:54:08 | INFO  | Task 9e7110b6-7168-4ac4-a78a-010bf834459f is in state STARTED 2025-05-31 20:54:08.415356 | orchestrator | 2025-05-31 20:54:08 | INFO  | Task 71c06ffc-5872-4e70-bcb8-0731f25cc20a is in state STARTED 2025-05-31 20:54:08.417757 | orchestrator | 2025-05-31 20:54:08 | INFO  | Task 548ac3b1-5047-417d-b1ba-517bdaea8bf1 is in state STARTED 2025-05-31 20:54:08.418135 | orchestrator | 2025-05-31 20:54:08 | INFO  | Wait 1 second(s) until the next check 2025-05-31 20:54:11.454438 | orchestrator | 2025-05-31 20:54:11 | INFO  | Task a0bdcbf6-9613-4f1a-8dfe-6210c4f9a59a is in state STARTED 2025-05-31 20:54:11.455093 | orchestrator | 2025-05-31 20:54:11 | INFO  | Task 9e7110b6-7168-4ac4-a78a-010bf834459f is in state STARTED 2025-05-31 20:54:11.456899 | orchestrator | 2025-05-31 20:54:11 | INFO  | Task 71c06ffc-5872-4e70-bcb8-0731f25cc20a is in state STARTED 2025-05-31 20:54:11.457976 | orchestrator | 2025-05-31 20:54:11 | INFO  | Task 548ac3b1-5047-417d-b1ba-517bdaea8bf1 is in state STARTED 2025-05-31 20:54:11.458152 | orchestrator | 2025-05-31 20:54:11 | INFO  | Wait 1 second(s) until the next check 2025-05-31 20:54:14.512421 | orchestrator | 2025-05-31 20:54:14 | INFO  | Task a0bdcbf6-9613-4f1a-8dfe-6210c4f9a59a is in state STARTED 2025-05-31 20:54:14.512583 | orchestrator | 2025-05-31 20:54:14 | INFO  | Task 9e7110b6-7168-4ac4-a78a-010bf834459f is in state STARTED 2025-05-31 20:54:14.513403 | orchestrator | 2025-05-31 20:54:14 | INFO  | Task 71c06ffc-5872-4e70-bcb8-0731f25cc20a is in state STARTED 2025-05-31 20:54:14.514271 | orchestrator | 2025-05-31 20:54:14 | INFO  | Task 548ac3b1-5047-417d-b1ba-517bdaea8bf1 is in state STARTED 2025-05-31 20:54:14.514301 | orchestrator | 2025-05-31 20:54:14 | INFO  | Wait 1 second(s) until the next check 2025-05-31 20:54:17.558369 | orchestrator | 2025-05-31 20:54:17 | INFO  | Task a0bdcbf6-9613-4f1a-8dfe-6210c4f9a59a is in state STARTED 2025-05-31 20:54:17.559804 | orchestrator | 2025-05-31 20:54:17 | INFO  | Task 9e7110b6-7168-4ac4-a78a-010bf834459f is in state SUCCESS 2025-05-31 20:54:17.561932 | orchestrator | 2025-05-31 20:54:17 | INFO  | Task 71c06ffc-5872-4e70-bcb8-0731f25cc20a is in state STARTED 2025-05-31 20:54:17.563037 | orchestrator | 2025-05-31 20:54:17 | INFO  | Task 548ac3b1-5047-417d-b1ba-517bdaea8bf1 is in state STARTED 2025-05-31 20:54:17.563061 | orchestrator | 2025-05-31 20:54:17 | INFO  | Wait 1 second(s) until the next check 2025-05-31 20:54:20.608644 | orchestrator | 2025-05-31 20:54:20 | INFO  | Task 
a0bdcbf6-9613-4f1a-8dfe-6210c4f9a59a is in state STARTED 2025-05-31 20:54:20.610465 | orchestrator | 2025-05-31 20:54:20 | INFO  | Task 71c06ffc-5872-4e70-bcb8-0731f25cc20a is in state STARTED 2025-05-31 20:54:20.612793 | orchestrator | 2025-05-31 20:54:20 | INFO  | Task 548ac3b1-5047-417d-b1ba-517bdaea8bf1 is in state STARTED 2025-05-31 20:54:20.612807 | orchestrator | 2025-05-31 20:54:20 | INFO  | Wait 1 second(s) until the next check 2025-05-31 20:54:23.652747 | orchestrator | 2025-05-31 20:54:23 | INFO  | Task a0bdcbf6-9613-4f1a-8dfe-6210c4f9a59a is in state STARTED 2025-05-31 20:54:23.654483 | orchestrator | 2025-05-31 20:54:23 | INFO  | Task 71c06ffc-5872-4e70-bcb8-0731f25cc20a is in state STARTED 2025-05-31 20:54:23.657698 | orchestrator | 2025-05-31 20:54:23 | INFO  | Task 548ac3b1-5047-417d-b1ba-517bdaea8bf1 is in state STARTED 2025-05-31 20:54:23.657749 | orchestrator | 2025-05-31 20:54:23 | INFO  | Wait 1 second(s) until the next check 2025-05-31 20:54:26.692776 | orchestrator | 2025-05-31 20:54:26 | INFO  | Task a0bdcbf6-9613-4f1a-8dfe-6210c4f9a59a is in state STARTED 2025-05-31 20:54:26.693650 | orchestrator | 2025-05-31 20:54:26 | INFO  | Task 71c06ffc-5872-4e70-bcb8-0731f25cc20a is in state STARTED 2025-05-31 20:54:26.694237 | orchestrator | 2025-05-31 20:54:26 | INFO  | Task 548ac3b1-5047-417d-b1ba-517bdaea8bf1 is in state STARTED 2025-05-31 20:54:26.694271 | orchestrator | 2025-05-31 20:54:26 | INFO  | Wait 1 second(s) until the next check 2025-05-31 20:54:29.738821 | orchestrator | 2025-05-31 20:54:29 | INFO  | Task a0bdcbf6-9613-4f1a-8dfe-6210c4f9a59a is in state STARTED 2025-05-31 20:54:29.741030 | orchestrator | 2025-05-31 20:54:29 | INFO  | Task 71c06ffc-5872-4e70-bcb8-0731f25cc20a is in state STARTED 2025-05-31 20:54:29.741087 | orchestrator | 2025-05-31 20:54:29 | INFO  | Task 548ac3b1-5047-417d-b1ba-517bdaea8bf1 is in state STARTED 2025-05-31 20:54:29.741101 | orchestrator | 2025-05-31 20:54:29 | INFO  | Wait 1 second(s) until the next check 2025-05-31 20:54:32.779656 | orchestrator | 2025-05-31 20:54:32 | INFO  | Task a0bdcbf6-9613-4f1a-8dfe-6210c4f9a59a is in state STARTED 2025-05-31 20:54:32.780026 | orchestrator | 2025-05-31 20:54:32 | INFO  | Task 71c06ffc-5872-4e70-bcb8-0731f25cc20a is in state STARTED 2025-05-31 20:54:32.781382 | orchestrator | 2025-05-31 20:54:32 | INFO  | Task 548ac3b1-5047-417d-b1ba-517bdaea8bf1 is in state STARTED 2025-05-31 20:54:32.781413 | orchestrator | 2025-05-31 20:54:32 | INFO  | Wait 1 second(s) until the next check 2025-05-31 20:54:35.842917 | orchestrator | 2025-05-31 20:54:35 | INFO  | Task a0bdcbf6-9613-4f1a-8dfe-6210c4f9a59a is in state STARTED 2025-05-31 20:54:35.845088 | orchestrator | 2025-05-31 20:54:35 | INFO  | Task 71c06ffc-5872-4e70-bcb8-0731f25cc20a is in state STARTED 2025-05-31 20:54:35.846470 | orchestrator | 2025-05-31 20:54:35 | INFO  | Task 548ac3b1-5047-417d-b1ba-517bdaea8bf1 is in state STARTED 2025-05-31 20:54:35.846571 | orchestrator | 2025-05-31 20:54:35 | INFO  | Wait 1 second(s) until the next check 2025-05-31 20:54:38.898296 | orchestrator | 2025-05-31 20:54:38 | INFO  | Task a0bdcbf6-9613-4f1a-8dfe-6210c4f9a59a is in state STARTED 2025-05-31 20:54:38.899926 | orchestrator | 2025-05-31 20:54:38 | INFO  | Task 71c06ffc-5872-4e70-bcb8-0731f25cc20a is in state STARTED 2025-05-31 20:54:38.901814 | orchestrator | 2025-05-31 20:54:38 | INFO  | Task 548ac3b1-5047-417d-b1ba-517bdaea8bf1 is in state STARTED 2025-05-31 20:54:38.901857 | orchestrator | 2025-05-31 20:54:38 | INFO  | Wait 1 second(s) until the next 
2025-05-31 20:54:41.956101 | orchestrator | 2025-05-31 20:54:41 | INFO  | Task a0bdcbf6-9613-4f1a-8dfe-6210c4f9a59a is in state STARTED
2025-05-31 20:54:41.956770 | orchestrator | 2025-05-31 20:54:41 | INFO  | Task 71c06ffc-5872-4e70-bcb8-0731f25cc20a is in state STARTED
2025-05-31 20:54:41.961965 | orchestrator | 2025-05-31 20:54:41 | INFO  | Task 548ac3b1-5047-417d-b1ba-517bdaea8bf1 is in state STARTED
2025-05-31 20:54:41.962164 | orchestrator | 2025-05-31 20:54:41 | INFO  | Wait 1 second(s) until the next check
2025-05-31 20:54:44.997313 | orchestrator | 2025-05-31 20:54:44 | INFO  | Task a0bdcbf6-9613-4f1a-8dfe-6210c4f9a59a is in state STARTED
2025-05-31 20:54:44.997845 | orchestrator | 2025-05-31 20:54:44 | INFO  | Task 71c06ffc-5872-4e70-bcb8-0731f25cc20a is in state STARTED
2025-05-31 20:54:44.999800 | orchestrator | 2025-05-31 20:54:44 | INFO  | Task 548ac3b1-5047-417d-b1ba-517bdaea8bf1 is in state STARTED
2025-05-31 20:54:45.000080 | orchestrator | 2025-05-31 20:54:44 | INFO  | Wait 1 second(s) until the next check
2025-05-31 20:54:48.048959 | orchestrator | 2025-05-31 20:54:48 | INFO  | Task a0bdcbf6-9613-4f1a-8dfe-6210c4f9a59a is in state STARTED
2025-05-31 20:54:48.049937 | orchestrator | 2025-05-31 20:54:48 | INFO  | Task 71c06ffc-5872-4e70-bcb8-0731f25cc20a is in state STARTED
2025-05-31 20:54:48.050178 | orchestrator | 2025-05-31 20:54:48 | INFO  | Task 548ac3b1-5047-417d-b1ba-517bdaea8bf1 is in state STARTED
2025-05-31 20:54:48.050319 | orchestrator | 2025-05-31 20:54:48 | INFO  | Wait 1 second(s) until the next check
2025-05-31 20:54:51.124758 | orchestrator | 2025-05-31 20:54:51 | INFO  | Task a0bdcbf6-9613-4f1a-8dfe-6210c4f9a59a is in state STARTED
2025-05-31 20:54:51.131763 | orchestrator | 2025-05-31 20:54:51 | INFO  | Task 71c06ffc-5872-4e70-bcb8-0731f25cc20a is in state SUCCESS
2025-05-31 20:54:51.135338 | orchestrator |
2025-05-31 20:54:51.135380 | orchestrator |
2025-05-31 20:54:51.135393 | orchestrator | PLAY [Apply role phpmyadmin] ***************************************************
2025-05-31 20:54:51.135406 | orchestrator |
2025-05-31 20:54:51.135417 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] *************
2025-05-31 20:54:51.135428 | orchestrator | Saturday 31 May 2025 20:52:49 +0000 (0:00:00.211) 0:00:00.211 **********
2025-05-31 20:54:51.135440 | orchestrator | ok: [testbed-manager]
2025-05-31 20:54:51.135453 | orchestrator |
2025-05-31 20:54:51.135464 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] *****************
2025-05-31 20:54:51.135476 | orchestrator | Saturday 31 May 2025 20:52:49 +0000 (0:00:00.795) 0:00:01.006 **********
2025-05-31 20:54:51.135487 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin)
2025-05-31 20:54:51.135498 | orchestrator |
2025-05-31 20:54:51.135508 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
2025-05-31 20:54:51.135545 | orchestrator | Saturday 31 May 2025 20:52:50 +0000 (0:00:00.599) 0:00:01.606 **********
2025-05-31 20:54:51.135579 | orchestrator | changed: [testbed-manager]
2025-05-31 20:54:51.135591 | orchestrator |
2025-05-31 20:54:51.135602 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
2025-05-31 20:54:51.135613 | orchestrator | Saturday 31 May 2025 20:52:51 +0000 (0:00:01.396) 0:00:03.003 **********
2025-05-31 20:54:51.135624 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
2025-05-31 20:54:51.135634 | orchestrator | ok: [testbed-manager]
2025-05-31 20:54:51.135645 | orchestrator |
2025-05-31 20:54:51.135656 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2025-05-31 20:54:51.135667 | orchestrator | Saturday 31 May 2025 20:54:01 +0000 (0:01:09.663) 0:01:12.666 **********
2025-05-31 20:54:51.135677 | orchestrator | changed: [testbed-manager]
2025-05-31 20:54:51.135688 | orchestrator |
2025-05-31 20:54:51.135699 | orchestrator | PLAY RECAP *********************************************************************
2025-05-31 20:54:51.135710 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-31 20:54:51.135723 | orchestrator |
2025-05-31 20:54:51.135734 | orchestrator |
2025-05-31 20:54:51.135745 | orchestrator | TASKS RECAP ********************************************************************
2025-05-31 20:54:51.135755 | orchestrator | Saturday 31 May 2025 20:54:15 +0000 (0:00:14.028) 0:01:26.695 **********
2025-05-31 20:54:51.135766 | orchestrator | ===============================================================================
2025-05-31 20:54:51.135776 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 69.66s
2025-05-31 20:54:51.135787 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ----------------- 14.03s
2025-05-31 20:54:51.135797 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.40s
2025-05-31 20:54:51.135808 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 0.80s
2025-05-31 20:54:51.135819 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.60s
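The recap above shows the shape of the osism.services.phpmyadmin role: create the external traefik network, write a docker-compose.yml under /opt/phpmyadmin, then bring the service up. "Manage phpmyadmin service" needed one retry and 69.66s, consistent with the image still being pulled on the first attempt. A minimal sketch of what such a compose file could contain; the service and network names follow the task output, while the image and settings are assumptions, not the role's actual template:

    services:
      phpmyadmin:
        image: phpmyadmin:latest        # assumed image/tag
        restart: unless-stopped
        environment:
          PMA_ARBITRARY: "1"            # assumed setting
        networks:
          - traefik
    networks:
      traefik:
        external: true                  # matches "Create traefik external network" above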
2025-05-31 20:54:51.135829 | orchestrator |
2025-05-31 20:54:51.135840 | orchestrator |
2025-05-31 20:54:51.135851 | orchestrator | PLAY [Apply role common] *******************************************************
2025-05-31 20:54:51.135862 | orchestrator |
2025-05-31 20:54:51.135873 | orchestrator | TASK [common : include_tasks] **************************************************
2025-05-31 20:54:51.135884 | orchestrator | Saturday 31 May 2025 20:52:21 +0000 (0:00:00.257) 0:00:00.257 **********
2025-05-31 20:54:51.135895 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-31 20:54:51.135908 | orchestrator |
2025-05-31 20:54:51.135919 | orchestrator | TASK [common : Ensuring config directories exist] ******************************
2025-05-31 20:54:51.135930 | orchestrator | Saturday 31 May 2025 20:52:22 +0000 (0:00:01.119) 0:00:01.376 **********
2025-05-31 20:54:51.135941 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron'])
2025-05-31 20:54:51.135954 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron'])
2025-05-31 20:54:51.135965 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd'])
2025-05-31 20:54:51.135977 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron'])
2025-05-31 20:54:51.135988 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron'])
2025-05-31 20:54:51.136237 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron'])
2025-05-31 20:54:51.136265 | orchestrator | changed:
[testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2025-05-31 20:54:51.136286 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-31 20:54:51.136307 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-31 20:54:51.136343 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-31 20:54:51.136355 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-31 20:54:51.136376 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2025-05-31 20:54:51.136387 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-31 20:54:51.136398 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-31 20:54:51.136408 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-31 20:54:51.136419 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-31 20:54:51.136446 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-31 20:54:51.136457 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-31 20:54:51.136467 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-31 20:54:51.136478 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-31 20:54:51.136489 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-31 20:54:51.136500 | orchestrator | 2025-05-31 20:54:51.136510 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-05-31 20:54:51.136521 | orchestrator | Saturday 31 May 2025 20:52:27 +0000 (0:00:04.380) 0:00:05.756 ********** 2025-05-31 20:54:51.136532 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-31 20:54:51.136545 | orchestrator | 2025-05-31 20:54:51.136591 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2025-05-31 20:54:51.136604 | orchestrator | Saturday 31 May 2025 20:52:28 +0000 (0:00:01.328) 0:00:07.085 ********** 2025-05-31 20:54:51.136620 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-31 20:54:51.136637 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-31 20:54:51.136649 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-31 20:54:51.136660 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-31 20:54:51.136679 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-31 20:54:51.136699 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 20:54:51.136712 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 20:54:51.136724 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 
'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 20:54:51.136743 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 20:54:51.136755 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-31 20:54:51.136766 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-31 20:54:51.136790 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 20:54:51.136808 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 20:54:51.136849 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 
20:54:51.136861 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 20:54:51.136873 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 20:54:51.136884 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 20:54:51.136896 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 20:54:51.136914 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 20:54:51.136925 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 20:54:51.136942 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
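Each loop item printed in this task is one entry of the common role's service map (fluentd, kolla-toolbox, cron). Rendered as YAML instead of a flattened Python dict, the fluentd entry shown above corresponds to:

    fluentd:
      container_name: fluentd
      group: fluentd
      enabled: true
      image: registry.osism.tech/kolla/fluentd:2024.2
      environment:
        KOLLA_CONFIG_STRATEGY: COPY_ALWAYS   # copy config files on every container start
      volumes:
        - "/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro"
        - "/etc/localtime:/etc/localtime:ro"
        - "/etc/timezone:/etc/timezone:ro"
        - "kolla_logs:/var/log/kolla/"
        - "fluentd_data:/var/lib/fluentd/data/"
        - "/var/log/journal:/var/log/journal:ro"
      dimensions: {}

The read-only mount of /etc/kolla/fluentd/ into /var/lib/kolla/config_files/ is what feeds the per-service config.json mechanism handled by the later "Copying over config.json files for services" task.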
2025-05-31 20:54:51.136953 | orchestrator | 2025-05-31 20:54:51.136964 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2025-05-31 20:54:51.136975 | orchestrator | Saturday 31 May 2025 20:52:33 +0000 (0:00:05.504) 0:00:12.589 ********** 2025-05-31 20:54:51.136993 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-31 20:54:51.137005 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 20:54:51.137017 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 20:54:51.137028 | orchestrator | skipping: [testbed-manager] 2025-05-31 20:54:51.137039 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-31 20:54:51.137051 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 20:54:51.137069 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 20:54:51.137080 | orchestrator | skipping: [testbed-node-0] 2025-05-31 20:54:51.137091 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-31 20:54:51.137112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 20:54:51.137158 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 20:54:51.137182 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-31 20:54:51.137200 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 20:54:51.137243 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 20:54:51.137274 | orchestrator | 
skipping: [testbed-node-1] 2025-05-31 20:54:51.137292 | orchestrator | skipping: [testbed-node-2] 2025-05-31 20:54:51.137311 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-31 20:54:51.137330 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 20:54:51.137351 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 20:54:51.137369 | orchestrator | skipping: [testbed-node-3] 2025-05-31 20:54:51.137438 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-31 20:54:51.137463 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 20:54:51.137475 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 20:54:51.137487 | orchestrator | skipping: [testbed-node-4] 
2025-05-31 20:54:51.137498 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-31 20:54:51.137519 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 20:54:51.137530 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 20:54:51.137541 | orchestrator | skipping: [testbed-node-5] 2025-05-31 20:54:51.137573 | orchestrator | 2025-05-31 20:54:51.137585 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2025-05-31 20:54:51.137596 | orchestrator | Saturday 31 May 2025 20:52:35 +0000 (0:00:01.803) 0:00:14.393 ********** 2025-05-31 20:54:51.137607 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-31 20:54:51.137624 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-31 20:54:51.137652 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 20:54:51.137664 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 20:54:51.137675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 20:54:51.137693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 20:54:51.137704 | orchestrator | skipping: [testbed-manager] 2025-05-31 20:54:51.137715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-31 20:54:51.137726 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 20:54:51.137737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 20:54:51.137748 | orchestrator | 
skipping: [testbed-node-0] 2025-05-31 20:54:51.137764 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-31 20:54:51.138326 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 20:54:51.138350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 20:54:51.138372 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-31 20:54:51.138384 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 20:54:51.138395 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 20:54:51.138406 | orchestrator | skipping: [testbed-node-1] 2025-05-31 20:54:51.138417 | orchestrator | skipping: [testbed-node-2] 2025-05-31 20:54:51.138427 | orchestrator | skipping: [testbed-node-3] 
2025-05-31 20:54:51.138438 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-31 20:54:51.138450 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 20:54:51.138469 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 20:54:51.138480 | orchestrator | skipping: [testbed-node-4] 2025-05-31 20:54:51.138491 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-31 20:54:51.138511 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 20:54:51.138522 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 20:54:51.138533 | orchestrator | skipping: [testbed-node-5] 2025-05-31 20:54:51.138543 | orchestrator | 2025-05-31 20:54:51.138625 | orchestrator | TASK [common : 
Copying over /run subdirectories conf] ************************** 2025-05-31 20:54:51.138638 | orchestrator | Saturday 31 May 2025 20:52:38 +0000 (0:00:03.108) 0:00:17.501 ********** 2025-05-31 20:54:51.138649 | orchestrator | skipping: [testbed-manager] 2025-05-31 20:54:51.138660 | orchestrator | skipping: [testbed-node-0] 2025-05-31 20:54:51.138671 | orchestrator | skipping: [testbed-node-1] 2025-05-31 20:54:51.138681 | orchestrator | skipping: [testbed-node-2] 2025-05-31 20:54:51.138692 | orchestrator | skipping: [testbed-node-3] 2025-05-31 20:54:51.138703 | orchestrator | skipping: [testbed-node-4] 2025-05-31 20:54:51.138713 | orchestrator | skipping: [testbed-node-5] 2025-05-31 20:54:51.138724 | orchestrator | 2025-05-31 20:54:51.138735 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2025-05-31 20:54:51.138746 | orchestrator | Saturday 31 May 2025 20:52:39 +0000 (0:00:00.979) 0:00:18.481 ********** 2025-05-31 20:54:51.138756 | orchestrator | skipping: [testbed-manager] 2025-05-31 20:54:51.138767 | orchestrator | skipping: [testbed-node-0] 2025-05-31 20:54:51.138778 | orchestrator | skipping: [testbed-node-1] 2025-05-31 20:54:51.138788 | orchestrator | skipping: [testbed-node-2] 2025-05-31 20:54:51.138799 | orchestrator | skipping: [testbed-node-3] 2025-05-31 20:54:51.138809 | orchestrator | skipping: [testbed-node-4] 2025-05-31 20:54:51.138820 | orchestrator | skipping: [testbed-node-5] 2025-05-31 20:54:51.138830 | orchestrator | 2025-05-31 20:54:51.138841 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2025-05-31 20:54:51.138852 | orchestrator | Saturday 31 May 2025 20:52:40 +0000 (0:00:00.933) 0:00:19.414 ********** 2025-05-31 20:54:51.138870 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-31 20:54:51.138887 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-31 20:54:51.138914 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-31 20:54:51.138928 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 
'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-31 20:54:51.138940 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 20:54:51.138953 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-31 20:54:51.138966 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 20:54:51.138978 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-31 20:54:51.138991 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 20:54:51.139022 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': 
{'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-31 20:54:51.139042 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 20:54:51.139055 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 20:54:51.139068 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 20:54:51.139080 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 20:54:51.139093 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 20:54:51.139105 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 
'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-31 20:54:51.139122 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-31 20:54:51.139148 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-31 20:54:51.139160 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-31 20:54:51.139171 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-31 20:54:51.139182 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-31 20:54:51.139193 | orchestrator |
2025-05-31 20:54:51.139204 | orchestrator | TASK [common : Find custom fluentd input config files] *************************
2025-05-31 20:54:51.139215 | orchestrator | Saturday 31 May 2025 20:52:46 +0000 (0:00:05.574) 0:00:24.989 **********
2025-05-31 20:54:51.139227 | orchestrator | [WARNING]: Skipped
2025-05-31 20:54:51.139238 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due
2025-05-31 20:54:51.139250 | orchestrator | to this access issue:
2025-05-31 20:54:51.139260 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a
2025-05-31 20:54:51.139271 | orchestrator | directory
2025-05-31 20:54:51.139282 | orchestrator | ok: [testbed-manager -> localhost]
2025-05-31 20:54:51.139292 | orchestrator |
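The WARNING above is benign: the task only collects operator-supplied overlay files, and this testbed configuration ships none, so the input overlay directory does not exist. The lookup amounts to an ansible.builtin.find over the overlay path, roughly as follows (the *.conf pattern is an assumption):

    - name: Find custom fluentd input config files (sketch)
      ansible.builtin.find:
        paths: /opt/configuration/environments/kolla/files/overlays/fluentd/input
        patterns: "*.conf"   # assumed filter
      delegate_to: localhost
      register: fluentd_input_files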
fluentd filter config files] ************************ 2025-05-31 20:54:51.139311 | orchestrator | Saturday 31 May 2025 20:52:47 +0000 (0:00:01.430) 0:00:26.419 ********** 2025-05-31 20:54:51.139320 | orchestrator | [WARNING]: Skipped 2025-05-31 20:54:51.139329 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2025-05-31 20:54:51.139339 | orchestrator | to this access issue: 2025-05-31 20:54:51.139348 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2025-05-31 20:54:51.139358 | orchestrator | directory 2025-05-31 20:54:51.139367 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-31 20:54:51.139377 | orchestrator | 2025-05-31 20:54:51.139386 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2025-05-31 20:54:51.139396 | orchestrator | Saturday 31 May 2025 20:52:48 +0000 (0:00:01.010) 0:00:27.430 ********** 2025-05-31 20:54:51.139405 | orchestrator | [WARNING]: Skipped 2025-05-31 20:54:51.139415 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2025-05-31 20:54:51.139424 | orchestrator | to this access issue: 2025-05-31 20:54:51.139443 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2025-05-31 20:54:51.139452 | orchestrator | directory 2025-05-31 20:54:51.139462 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-31 20:54:51.139471 | orchestrator | 2025-05-31 20:54:51.139480 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2025-05-31 20:54:51.139490 | orchestrator | Saturday 31 May 2025 20:52:49 +0000 (0:00:00.670) 0:00:28.101 ********** 2025-05-31 20:54:51.139499 | orchestrator | [WARNING]: Skipped 2025-05-31 20:54:51.139509 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2025-05-31 20:54:51.139518 | orchestrator | to this access issue: 2025-05-31 20:54:51.139527 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2025-05-31 20:54:51.139536 | orchestrator | directory 2025-05-31 20:54:51.139546 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-31 20:54:51.139576 | orchestrator | 2025-05-31 20:54:51.139586 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2025-05-31 20:54:51.139596 | orchestrator | Saturday 31 May 2025 20:52:50 +0000 (0:00:00.734) 0:00:28.835 ********** 2025-05-31 20:54:51.139605 | orchestrator | changed: [testbed-node-1] 2025-05-31 20:54:51.139615 | orchestrator | changed: [testbed-manager] 2025-05-31 20:54:51.139624 | orchestrator | changed: [testbed-node-0] 2025-05-31 20:54:51.139634 | orchestrator | changed: [testbed-node-3] 2025-05-31 20:54:51.139643 | orchestrator | changed: [testbed-node-2] 2025-05-31 20:54:51.139652 | orchestrator | changed: [testbed-node-4] 2025-05-31 20:54:51.139661 | orchestrator | changed: [testbed-node-5] 2025-05-31 20:54:51.139671 | orchestrator | 2025-05-31 20:54:51.139685 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2025-05-31 20:54:51.139694 | orchestrator | Saturday 31 May 2025 20:52:54 +0000 (0:00:04.465) 0:00:33.301 ********** 2025-05-31 20:54:51.139704 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-31 20:54:51.139714 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-31 20:54:51.139723 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-31 20:54:51.139738 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-31 20:54:51.139748 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-31 20:54:51.139757 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-31 20:54:51.139767 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-05-31 20:54:51.139776 | orchestrator | 2025-05-31 20:54:51.139786 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2025-05-31 20:54:51.139796 | orchestrator | Saturday 31 May 2025 20:52:57 +0000 (0:00:02.675) 0:00:35.977 ********** 2025-05-31 20:54:51.139805 | orchestrator | changed: [testbed-node-0] 2025-05-31 20:54:51.139815 | orchestrator | changed: [testbed-node-1] 2025-05-31 20:54:51.139824 | orchestrator | changed: [testbed-manager] 2025-05-31 20:54:51.139833 | orchestrator | changed: [testbed-node-2] 2025-05-31 20:54:51.139843 | orchestrator | changed: [testbed-node-3] 2025-05-31 20:54:51.139852 | orchestrator | changed: [testbed-node-4] 2025-05-31 20:54:51.139861 | orchestrator | changed: [testbed-node-5] 2025-05-31 20:54:51.139871 | orchestrator | 2025-05-31 20:54:51.139880 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2025-05-31 20:54:51.139889 | orchestrator | Saturday 31 May 2025 20:53:00 +0000 (0:00:02.953) 0:00:38.930 ********** 2025-05-31 20:54:51.139899 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-31 20:54:51.139916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 20:54:51.139926 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-31 20:54:51.139937 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 20:54:51.139951 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 20:54:51.139971 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 20:54:51.139982 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-31 20:54:51.139992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 20:54:51.140008 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-31 20:54:51.140018 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 20:54:51.140027 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 20:54:51.140037 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 20:54:51.140058 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-31 20:54:51.140079 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 20:54:51.140089 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 20:54:51.140099 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-31 
20:54:51.140115 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 20:54:51.140125 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-31 20:54:51.140135 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 20:54:51.140145 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 20:54:51.140159 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 20:54:51.140169 | orchestrator | 2025-05-31 20:54:51.140179 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2025-05-31 20:54:51.140188 | orchestrator | Saturday 31 May 2025 20:53:03 +0000 (0:00:03.124) 0:00:42.054 ********** 2025-05-31 20:54:51.140198 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-31 20:54:51.140208 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-31 20:54:51.140217 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-31 20:54:51.140234 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-31 
20:54:51.140244 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-31 20:54:51.140253 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-31 20:54:51.140263 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-05-31 20:54:51.140278 | orchestrator | 2025-05-31 20:54:51.140288 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2025-05-31 20:54:51.140297 | orchestrator | Saturday 31 May 2025 20:53:06 +0000 (0:00:02.683) 0:00:44.738 ********** 2025-05-31 20:54:51.140307 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-31 20:54:51.140316 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-31 20:54:51.140326 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-31 20:54:51.140335 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-31 20:54:51.140345 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-31 20:54:51.140354 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-31 20:54:51.140363 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-05-31 20:54:51.140373 | orchestrator | 2025-05-31 20:54:51.140383 | orchestrator | TASK [common : Check common containers] **************************************** 2025-05-31 20:54:51.140392 | orchestrator | Saturday 31 May 2025 20:53:09 +0000 (0:00:02.997) 0:00:47.735 ********** 2025-05-31 20:54:51.140402 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-31 20:54:51.140412 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-31 20:54:51.140422 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-31 20:54:51.140432 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-31 20:54:51.140446 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 20:54:51.140468 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 20:54:51.140478 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 20:54:51.140488 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-31 20:54:51.140498 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 20:54:51.140509 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': 
{'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 20:54:51.140518 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-31 20:54:51.140528 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 20:54:51.140569 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 20:54:51.140585 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-31 20:54:51.140596 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 20:54:51.140606 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 20:54:51.140616 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 20:54:51.140626 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 20:54:51.140636 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 20:54:51.140646 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 20:54:51.140666 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 20:54:51.140676 | orchestrator | 2025-05-31 20:54:51.140691 | orchestrator | TASK [common : Creating log volume] ******************************************** 2025-05-31 20:54:51.140701 | orchestrator | Saturday 31 May 2025 20:53:12 +0000 (0:00:03.708) 0:00:51.444 ********** 2025-05-31 20:54:51.140710 | orchestrator | changed: [testbed-manager] 2025-05-31 20:54:51.140720 | orchestrator | changed: [testbed-node-0] 2025-05-31 20:54:51.140729 | orchestrator | changed: [testbed-node-1] 2025-05-31 20:54:51.140739 | orchestrator | changed: [testbed-node-2] 2025-05-31 20:54:51.140748 | orchestrator | changed: [testbed-node-3] 2025-05-31 20:54:51.140757 | orchestrator | changed: [testbed-node-4] 2025-05-31 20:54:51.140766 | orchestrator | changed: [testbed-node-5] 2025-05-31 20:54:51.140776 | orchestrator 
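Each loop item printed above has the same {'key': ..., 'value': {...}} shape because the common role iterates a dictionary of service definitions, and Ansible's dict2items filter wraps every entry as a key/value pair. A minimal Python sketch of that transformation, with the mapping trimmed to fields copied from the log (illustrative only; the variable name common_services is an assumption, not kolla-ansible's actual layout):

# Sketch: how a service map becomes the {'key': ..., 'value': ...} loop
# items seen in the task output. Field values are copied from the log
# above; the name common_services is assumed.
common_services = {
    "fluentd": {
        "container_name": "fluentd",
        "group": "fluentd",
        "enabled": True,
        "image": "registry.osism.tech/kolla/fluentd:2024.2",
        "environment": {"KOLLA_CONFIG_STRATEGY": "COPY_ALWAYS"},
        "volumes": [
            "/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro",
            "kolla_logs:/var/log/kolla/",
        ],
        "dimensions": {},
    },
    "cron": {
        "container_name": "cron",
        "group": "cron",
        "enabled": True,
        "image": "registry.osism.tech/kolla/cron:2024.2",
        "environment": {"KOLLA_LOGROTATE_SCHEDULE": "daily"},
        "volumes": ["/etc/kolla/cron/:/var/lib/kolla/config_files/:ro"],
        "dimensions": {},
    },
}

def dict2items(mapping):
    # Same result as Ansible's dict2items filter.
    return [{"key": k, "value": v} for k, v in mapping.items()]

for item in dict2items(common_services):
    if item["value"]["enabled"]:
        print(item["key"], "->", item["value"]["image"])

Tasks such as "Check common containers" run once per enabled entry, which is why every host logs one result line per service.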
| 2025-05-31 20:54:51.140786 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2025-05-31 20:54:51.140795 | orchestrator | Saturday 31 May 2025 20:53:14 +0000 (0:00:01.626) 0:00:53.070 ********** 2025-05-31 20:54:51.140805 | orchestrator | changed: [testbed-manager] 2025-05-31 20:54:51.140814 | orchestrator | changed: [testbed-node-0] 2025-05-31 20:54:51.140823 | orchestrator | changed: [testbed-node-1] 2025-05-31 20:54:51.140833 | orchestrator | changed: [testbed-node-2] 2025-05-31 20:54:51.140842 | orchestrator | changed: [testbed-node-3] 2025-05-31 20:54:51.140851 | orchestrator | changed: [testbed-node-4] 2025-05-31 20:54:51.140860 | orchestrator | changed: [testbed-node-5] 2025-05-31 20:54:51.140870 | orchestrator | 2025-05-31 20:54:51.140879 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-31 20:54:51.140889 | orchestrator | Saturday 31 May 2025 20:53:15 +0000 (0:00:01.132) 0:00:54.202 ********** 2025-05-31 20:54:51.140898 | orchestrator | 2025-05-31 20:54:51.140907 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-31 20:54:51.140917 | orchestrator | Saturday 31 May 2025 20:53:15 +0000 (0:00:00.062) 0:00:54.264 ********** 2025-05-31 20:54:51.140926 | orchestrator | 2025-05-31 20:54:51.140936 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-31 20:54:51.140945 | orchestrator | Saturday 31 May 2025 20:53:15 +0000 (0:00:00.076) 0:00:54.341 ********** 2025-05-31 20:54:51.140955 | orchestrator | 2025-05-31 20:54:51.140964 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-31 20:54:51.140974 | orchestrator | Saturday 31 May 2025 20:53:15 +0000 (0:00:00.062) 0:00:54.403 ********** 2025-05-31 20:54:51.140983 | orchestrator | 2025-05-31 20:54:51.140992 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-31 20:54:51.141001 | orchestrator | Saturday 31 May 2025 20:53:15 +0000 (0:00:00.060) 0:00:54.464 ********** 2025-05-31 20:54:51.141011 | orchestrator | 2025-05-31 20:54:51.141020 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-31 20:54:51.141030 | orchestrator | Saturday 31 May 2025 20:53:15 +0000 (0:00:00.182) 0:00:54.646 ********** 2025-05-31 20:54:51.141039 | orchestrator | 2025-05-31 20:54:51.141048 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-31 20:54:51.141058 | orchestrator | Saturday 31 May 2025 20:53:16 +0000 (0:00:00.067) 0:00:54.714 ********** 2025-05-31 20:54:51.141067 | orchestrator | 2025-05-31 20:54:51.141077 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2025-05-31 20:54:51.141086 | orchestrator | Saturday 31 May 2025 20:53:16 +0000 (0:00:00.073) 0:00:54.788 ********** 2025-05-31 20:54:51.141096 | orchestrator | changed: [testbed-node-0] 2025-05-31 20:54:51.141111 | orchestrator | changed: [testbed-manager] 2025-05-31 20:54:51.141120 | orchestrator | changed: [testbed-node-3] 2025-05-31 20:54:51.141129 | orchestrator | changed: [testbed-node-1] 2025-05-31 20:54:51.141139 | orchestrator | changed: [testbed-node-2] 2025-05-31 20:54:51.141148 | orchestrator | changed: [testbed-node-4] 2025-05-31 20:54:51.141157 | orchestrator | changed: [testbed-node-5] 2025-05-31 20:54:51.141167 | 
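The log-volume and fluentd-restart steps above correspond to plain Docker operations: "Creating log volume" makes a named kolla_logs volume that every service container mounts, and the restart handler recreates the fluentd container so the kolla entrypoint re-copies its configuration on start (the KOLLA_CONFIG_STRATEGY=COPY_ALWAYS behavior). A rough equivalent using the Docker SDK for Python, purely as an illustration of the pattern (kolla-ansible drives this through its own container module; the restart policy below is an assumption):

import docker

client = docker.from_env()

# "Creating log volume": a named volume shared by all kolla containers.
client.volumes.create(name="kolla_logs")

# "Restart fluentd container": remove and re-run the container so the
# entrypoint copies /var/lib/kolla/config_files into place again.
try:
    client.containers.get("fluentd").remove(force=True)
except docker.errors.NotFound:
    pass

client.containers.run(
    "registry.osism.tech/kolla/fluentd:2024.2",
    name="fluentd",
    detach=True,
    restart_policy={"Name": "unless-stopped"},  # assumed policy
    environment={"KOLLA_CONFIG_STRATEGY": "COPY_ALWAYS"},
    volumes=[
        "/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro",
        "/etc/localtime:/etc/localtime:ro",
        "kolla_logs:/var/log/kolla/",
        "fluentd_data:/var/lib/fluentd/data/",
    ],
)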
orchestrator | 2025-05-31 20:54:51.141176 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2025-05-31 20:54:51.141185 | orchestrator | Saturday 31 May 2025 20:53:58 +0000 (0:00:42.563) 0:01:37.351 ********** 2025-05-31 20:54:51.141195 | orchestrator | changed: [testbed-node-0] 2025-05-31 20:54:51.141204 | orchestrator | changed: [testbed-manager] 2025-05-31 20:54:51.141213 | orchestrator | changed: [testbed-node-5] 2025-05-31 20:54:51.141223 | orchestrator | changed: [testbed-node-3] 2025-05-31 20:54:51.141232 | orchestrator | changed: [testbed-node-2] 2025-05-31 20:54:51.141241 | orchestrator | changed: [testbed-node-4] 2025-05-31 20:54:51.141250 | orchestrator | changed: [testbed-node-1] 2025-05-31 20:54:51.141260 | orchestrator | 2025-05-31 20:54:51.141269 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2025-05-31 20:54:51.141279 | orchestrator | Saturday 31 May 2025 20:54:39 +0000 (0:00:40.339) 0:02:17.691 ********** 2025-05-31 20:54:51.141288 | orchestrator | ok: [testbed-manager] 2025-05-31 20:54:51.141298 | orchestrator | ok: [testbed-node-1] 2025-05-31 20:54:51.141307 | orchestrator | ok: [testbed-node-0] 2025-05-31 20:54:51.141316 | orchestrator | ok: [testbed-node-2] 2025-05-31 20:54:51.141325 | orchestrator | ok: [testbed-node-3] 2025-05-31 20:54:51.141335 | orchestrator | ok: [testbed-node-4] 2025-05-31 20:54:51.141344 | orchestrator | ok: [testbed-node-5] 2025-05-31 20:54:51.141355 | orchestrator | 2025-05-31 20:54:51.141372 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2025-05-31 20:54:51.141388 | orchestrator | Saturday 31 May 2025 20:54:41 +0000 (0:00:02.029) 0:02:19.721 ********** 2025-05-31 20:54:51.141410 | orchestrator | changed: [testbed-manager] 2025-05-31 20:54:51.141432 | orchestrator | changed: [testbed-node-1] 2025-05-31 20:54:51.141447 | orchestrator | changed: [testbed-node-0] 2025-05-31 20:54:51.141463 | orchestrator | changed: [testbed-node-4] 2025-05-31 20:54:51.141478 | orchestrator | changed: [testbed-node-2] 2025-05-31 20:54:51.141493 | orchestrator | changed: [testbed-node-3] 2025-05-31 20:54:51.141507 | orchestrator | changed: [testbed-node-5] 2025-05-31 20:54:51.141523 | orchestrator | 2025-05-31 20:54:51.141546 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-31 20:54:51.141596 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-05-31 20:54:51.141611 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-05-31 20:54:51.141636 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-05-31 20:54:51.141652 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-05-31 20:54:51.141668 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-05-31 20:54:51.141684 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-05-31 20:54:51.141700 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-05-31 20:54:51.141717 | orchestrator | 2025-05-31 20:54:51.141733 | orchestrator | 2025-05-31 20:54:51.141749 | orchestrator | TASKS RECAP 
******************************************************************** 2025-05-31 20:54:51.141774 | orchestrator | Saturday 31 May 2025 20:54:49 +0000 (0:00:08.916) 0:02:28.637 ********** 2025-05-31 20:54:51.141784 | orchestrator | =============================================================================== 2025-05-31 20:54:51.141793 | orchestrator | common : Restart fluentd container ------------------------------------- 42.56s 2025-05-31 20:54:51.141803 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 40.34s 2025-05-31 20:54:51.141812 | orchestrator | common : Restart cron container ----------------------------------------- 8.92s 2025-05-31 20:54:51.141821 | orchestrator | common : Copying over config.json files for services -------------------- 5.57s 2025-05-31 20:54:51.141831 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 5.50s 2025-05-31 20:54:51.141840 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 4.47s 2025-05-31 20:54:51.141849 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.38s 2025-05-31 20:54:51.141858 | orchestrator | common : Check common containers ---------------------------------------- 3.71s 2025-05-31 20:54:51.141868 | orchestrator | common : Ensuring config directories have correct owner and permission --- 3.12s 2025-05-31 20:54:51.141877 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.11s 2025-05-31 20:54:51.141886 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 3.00s 2025-05-31 20:54:51.141896 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.95s 2025-05-31 20:54:51.141905 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.68s 2025-05-31 20:54:51.141915 | orchestrator | common : Copying over cron logrotate config file ------------------------ 2.68s 2025-05-31 20:54:51.141924 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.03s 2025-05-31 20:54:51.141934 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.80s 2025-05-31 20:54:51.141943 | orchestrator | common : Creating log volume -------------------------------------------- 1.63s 2025-05-31 20:54:51.141952 | orchestrator | common : Find custom fluentd input config files ------------------------- 1.43s 2025-05-31 20:54:51.141962 | orchestrator | common : include_tasks -------------------------------------------------- 1.33s 2025-05-31 20:54:51.141971 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.13s 2025-05-31 20:54:51.142012 | orchestrator | 2025-05-31 20:54:51 | INFO  | Task 548ac3b1-5047-417d-b1ba-517bdaea8bf1 is in state STARTED 2025-05-31 20:54:51.142169 | orchestrator | 2025-05-31 20:54:51 | INFO  | Wait 1 second(s) until the next check 2025-05-31 20:54:54.207147 | orchestrator | 2025-05-31 20:54:54 | INFO  | Task fae633a2-e69d-4ae6-90b8-348d3b4747ca is in state STARTED 2025-05-31 20:54:54.207437 | orchestrator | 2025-05-31 20:54:54 | INFO  | Task d5252281-8371-4ac0-9448-8bb41d7ec9f6 is in state STARTED 2025-05-31 20:54:54.208636 | orchestrator | 2025-05-31 20:54:54 | INFO  | Task a0bdcbf6-9613-4f1a-8dfe-6210c4f9a59a is in state STARTED 2025-05-31 20:54:54.210619 | orchestrator | 2025-05-31 20:54:54 | INFO  | Task 
93219aaa-8acd-4180-9edf-e960d4570954 is in state STARTED 2025-05-31 20:54:54.211519 | orchestrator | 2025-05-31 20:54:54 | INFO  | Task 552f438d-fa99-48b4-8a3b-5fe63fef980f is in state STARTED 2025-05-31 20:54:54.212734 | orchestrator | 2025-05-31 20:54:54 | INFO  | Task 548ac3b1-5047-417d-b1ba-517bdaea8bf1 is in state STARTED 2025-05-31 20:54:54.212874 | orchestrator | 2025-05-31 20:54:54 | INFO  | Wait 1 second(s) until the next check
2025-05-31 20:55:12.484711 | orchestrator | 2025-05-31 20:55:12 | INFO  | Task fd314325-f586-41be-8318-ba939bda8e1c is in state STARTED 2025-05-31 20:55:12.485233 | orchestrator | 2025-05-31 20:55:12 | INFO  | Task fae633a2-e69d-4ae6-90b8-348d3b4747ca is in state STARTED 2025-05-31 20:55:12.486171 | orchestrator | 2025-05-31 20:55:12 | INFO  | Task d5252281-8371-4ac0-9448-8bb41d7ec9f6 is in state STARTED 2025-05-31 20:55:12.487466 | orchestrator | 2025-05-31 20:55:12 | INFO  | Task a0bdcbf6-9613-4f1a-8dfe-6210c4f9a59a is in state STARTED 2025-05-31 20:55:12.490234 | orchestrator | 2025-05-31 20:55:12 | INFO  | Task 93219aaa-8acd-4180-9edf-e960d4570954 is in state STARTED 2025-05-31 20:55:12.491160 | orchestrator | 2025-05-31 20:55:12 | INFO  | Task 552f438d-fa99-48b4-8a3b-5fe63fef980f is in state SUCCESS 2025-05-31 20:55:12.493460 | orchestrator | 2025-05-31 20:55:12 | INFO  | Task 548ac3b1-5047-417d-b1ba-517bdaea8bf1 is in state STARTED 2025-05-31 20:55:12.493555 | orchestrator | 2025-05-31 20:55:12 | INFO  | Wait 1 second(s) until the next check
2025-05-31 20:55:27.704832 | orchestrator | 2025-05-31 20:55:27 | INFO  | Task fd314325-f586-41be-8318-ba939bda8e1c is in state STARTED 2025-05-31 20:55:27.705604 | orchestrator | 2025-05-31 20:55:27 | INFO  | Task fae633a2-e69d-4ae6-90b8-348d3b4747ca is in state STARTED 2025-05-31 20:55:27.706122 | orchestrator | 2025-05-31 20:55:27 | INFO  | Task d5252281-8371-4ac0-9448-8bb41d7ec9f6 is in state STARTED 2025-05-31 20:55:27.707185 | orchestrator | 2025-05-31 20:55:27 | INFO  | Task a0bdcbf6-9613-4f1a-8dfe-6210c4f9a59a is in state STARTED 2025-05-31 20:55:27.708524 | orchestrator | 2025-05-31 20:55:27 | INFO  | Task 93219aaa-8acd-4180-9edf-e960d4570954 is in state SUCCESS 2025-05-31 20:55:27.709412 | orchestrator | 2025-05-31 20:55:27.709445 | orchestrator | 2025-05-31 20:55:27.709457 |
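The interleaved INFO lines come from the deployment driver polling its background tasks: every known task UUID is checked once per round, a task drops out of the set once it reports SUCCESS, and the loop announces a one-second wait between rounds. The STARTED/SUCCESS strings match Celery's task-state vocabulary. A minimal sketch of such a wait loop; get_task_state is a hypothetical stand-in for the real status query:

import time

def wait_for_tasks(task_ids, get_task_state, interval=1):
    # Poll every task each round, as in the log above; a task leaves
    # the pending set once it reaches a terminal state.
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval} second(s) until the next check")
            time.sleep(interval)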
orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-31 20:55:27.709469 | orchestrator | 2025-05-31 20:55:27.709480 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-31 20:55:27.709492 | orchestrator | Saturday 31 May 2025 20:54:57 +0000 (0:00:00.317) 0:00:00.317 ********** 2025-05-31 20:55:27.709503 | orchestrator | ok: [testbed-node-0] 2025-05-31 20:55:27.709516 | orchestrator | ok: [testbed-node-1] 2025-05-31 20:55:27.709528 | orchestrator | ok: [testbed-node-2] 2025-05-31 20:55:27.709540 | orchestrator | 2025-05-31 20:55:27.709550 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-31 20:55:27.709561 | orchestrator | Saturday 31 May 2025 20:54:58 +0000 (0:00:00.484) 0:00:00.801 ********** 2025-05-31 20:55:27.709574 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2025-05-31 20:55:27.709585 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2025-05-31 20:55:27.709596 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2025-05-31 20:55:27.709607 | orchestrator | 2025-05-31 20:55:27.709617 | orchestrator | PLAY [Apply role memcached] **************************************************** 2025-05-31 20:55:27.709664 | orchestrator | 2025-05-31 20:55:27.709676 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2025-05-31 20:55:27.709687 | orchestrator | Saturday 31 May 2025 20:54:59 +0000 (0:00:00.842) 0:00:01.644 ********** 2025-05-31 20:55:27.709697 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 20:55:27.710200 | orchestrator | 2025-05-31 20:55:27.710300 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2025-05-31 20:55:27.710318 | orchestrator | Saturday 31 May 2025 20:55:00 +0000 (0:00:01.013) 0:00:02.657 ********** 2025-05-31 20:55:27.710332 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-05-31 20:55:27.710344 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-05-31 20:55:27.710355 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-05-31 20:55:27.710366 | orchestrator | 2025-05-31 20:55:27.710377 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2025-05-31 20:55:27.710388 | orchestrator | Saturday 31 May 2025 20:55:01 +0000 (0:00:00.969) 0:00:03.626 ********** 2025-05-31 20:55:27.710399 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-05-31 20:55:27.710410 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-05-31 20:55:27.710421 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-05-31 20:55:27.710431 | orchestrator | 2025-05-31 20:55:27.710442 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2025-05-31 20:55:27.710453 | orchestrator | Saturday 31 May 2025 20:55:03 +0000 (0:00:02.683) 0:00:06.309 ********** 2025-05-31 20:55:27.710463 | orchestrator | changed: [testbed-node-2] 2025-05-31 20:55:27.710476 | orchestrator | changed: [testbed-node-0] 2025-05-31 20:55:27.710486 | orchestrator | changed: [testbed-node-1] 2025-05-31 20:55:27.710497 | orchestrator | 2025-05-31 20:55:27.710508 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2025-05-31 
20:55:27.710518 | orchestrator | Saturday 31 May 2025 20:55:06 +0000 (0:00:02.516) 0:00:08.826 ********** 2025-05-31 20:55:27.710529 | orchestrator | changed: [testbed-node-0] 2025-05-31 20:55:27.710539 | orchestrator | changed: [testbed-node-1] 2025-05-31 20:55:27.710550 | orchestrator | changed: [testbed-node-2] 2025-05-31 20:55:27.710561 | orchestrator | 2025-05-31 20:55:27.710571 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-31 20:55:27.710582 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-31 20:55:27.710595 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-31 20:55:27.710606 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-31 20:55:27.710617 | orchestrator | 2025-05-31 20:55:27.710663 | orchestrator | 2025-05-31 20:55:27.710674 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-31 20:55:27.710685 | orchestrator | Saturday 31 May 2025 20:55:09 +0000 (0:00:02.671) 0:00:11.497 ********** 2025-05-31 20:55:27.710696 | orchestrator | =============================================================================== 2025-05-31 20:55:27.710707 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.68s 2025-05-31 20:55:27.710717 | orchestrator | memcached : Restart memcached container --------------------------------- 2.68s 2025-05-31 20:55:27.710728 | orchestrator | memcached : Check memcached container ----------------------------------- 2.52s 2025-05-31 20:55:27.710739 | orchestrator | memcached : include_tasks ----------------------------------------------- 1.01s 2025-05-31 20:55:27.710767 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.97s 2025-05-31 20:55:27.710780 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.84s 2025-05-31 20:55:27.710790 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.48s 2025-05-31 20:55:27.710801 | orchestrator | 2025-05-31 20:55:27.710812 | orchestrator | 2025-05-31 20:55:27.710831 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-31 20:55:27.710849 | orchestrator | 2025-05-31 20:55:27.710869 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-31 20:55:27.710901 | orchestrator | Saturday 31 May 2025 20:54:58 +0000 (0:00:00.560) 0:00:00.560 ********** 2025-05-31 20:55:27.710913 | orchestrator | ok: [testbed-node-0] 2025-05-31 20:55:27.710931 | orchestrator | ok: [testbed-node-1] 2025-05-31 20:55:27.710950 | orchestrator | ok: [testbed-node-2] 2025-05-31 20:55:27.710969 | orchestrator | 2025-05-31 20:55:27.710980 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-31 20:55:27.711031 | orchestrator | Saturday 31 May 2025 20:54:58 +0000 (0:00:00.484) 0:00:01.044 ********** 2025-05-31 20:55:27.711043 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2025-05-31 20:55:27.711054 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2025-05-31 20:55:27.711065 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2025-05-31 20:55:27.711076 | orchestrator | 2025-05-31 20:55:27.711086 | orchestrator | PLAY 
[Apply role redis] ******************************************************** 2025-05-31 20:55:27.711097 | orchestrator | 2025-05-31 20:55:27.711108 | orchestrator | TASK [redis : include_tasks] *************************************************** 2025-05-31 20:55:27.711118 | orchestrator | Saturday 31 May 2025 20:54:59 +0000 (0:00:00.984) 0:00:02.028 ********** 2025-05-31 20:55:27.711129 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 20:55:27.711140 | orchestrator | 2025-05-31 20:55:27.711151 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2025-05-31 20:55:27.711161 | orchestrator | Saturday 31 May 2025 20:55:00 +0000 (0:00:00.933) 0:00:02.962 ********** 2025-05-31 20:55:27.711175 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-31 20:55:27.711194 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-31 20:55:27.711205 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-31 20:55:27.711217 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-31 20:55:27.711257 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': 
'/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-31 20:55:27.711279 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-31 20:55:27.711291 | orchestrator | 2025-05-31 20:55:27.711303 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2025-05-31 20:55:27.711314 | orchestrator | Saturday 31 May 2025 20:55:02 +0000 (0:00:01.920) 0:00:04.882 ********** 2025-05-31 20:55:27.711325 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-31 20:55:27.711337 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-31 20:55:27.711349 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-31 20:55:27.711360 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 
'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-31 20:55:27.711384 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-31 20:55:27.711403 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-31 20:55:27.711415 | orchestrator | 2025-05-31 20:55:27.711426 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2025-05-31 20:55:27.711437 | orchestrator | Saturday 31 May 2025 20:55:06 +0000 (0:00:03.388) 0:00:08.271 ********** 2025-05-31 20:55:27.711448 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-31 20:55:27.711460 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-31 20:55:27.711471 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': 
['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-31 20:55:27.711483 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-31 20:55:27.711502 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-31 20:55:27.711514 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-31 20:55:27.711525 | orchestrator | 2025-05-31 20:55:27.711542 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2025-05-31 20:55:27.711553 | orchestrator | Saturday 31 May 2025 20:55:09 +0000 (0:00:03.472) 0:00:11.743 ********** 2025-05-31 20:55:27.711572 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-31 20:55:27.711584 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 
'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-31 20:55:27.711596 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-31 20:55:27.711607 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-31 20:55:27.711794 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-31 20:55:27.711858 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-31 20:55:27.711870 | orchestrator | 2025-05-31 20:55:27.711882 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-05-31 20:55:27.711893 | orchestrator | Saturday 31 May 2025 20:55:11 +0000 (0:00:01.810) 0:00:13.554 ********** 2025-05-31 20:55:27.711904 | orchestrator | 2025-05-31 20:55:27.711929 | 
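
Each loop item printed above is one entry from the role's service map. Reconstructed as YAML from the values shown in this run (the shape is grounded in the log output; defaults may differ between releases), the redis entry looks like:

    redis:
      container_name: redis
      group: redis
      enabled: true
      image: registry.osism.tech/kolla/redis:2024.2
      volumes:
        - /etc/kolla/redis/:/var/lib/kolla/config_files/:ro
        - /etc/localtime:/etc/localtime:ro
        - /etc/timezone:/etc/timezone:ro
        - redis:/var/lib/redis/
        - kolla_logs:/var/log/kolla/
      dimensions: {}
      healthcheck:
        interval: "30"
        retries: "3"
        start_period: "5"
        test: ["CMD-SHELL", "healthcheck_listen redis-server 6379"]
        timeout: "30"

The redis-sentinel entry has the same shape but adds an environment block (REDIS_CONF, REDIS_GEN_CONF) and health-checks port 26379 instead of 6379.
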
orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-05-31 20:55:27.711953 | orchestrator | Saturday 31 May 2025 20:55:11 +0000 (0:00:00.154) 0:00:13.708 ********** 2025-05-31 20:55:27.711964 | orchestrator | 2025-05-31 20:55:27.711975 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-05-31 20:55:27.711986 | orchestrator | Saturday 31 May 2025 20:55:11 +0000 (0:00:00.186) 0:00:13.894 ********** 2025-05-31 20:55:27.711997 | orchestrator | 2025-05-31 20:55:27.712008 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2025-05-31 20:55:27.712019 | orchestrator | Saturday 31 May 2025 20:55:12 +0000 (0:00:00.255) 0:00:14.150 ********** 2025-05-31 20:55:27.712029 | orchestrator | changed: [testbed-node-0] 2025-05-31 20:55:27.712040 | orchestrator | changed: [testbed-node-2] 2025-05-31 20:55:27.712051 | orchestrator | changed: [testbed-node-1] 2025-05-31 20:55:27.712062 | orchestrator | 2025-05-31 20:55:27.712086 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2025-05-31 20:55:27.712097 | orchestrator | Saturday 31 May 2025 20:55:21 +0000 (0:00:09.442) 0:00:23.593 ********** 2025-05-31 20:55:27.712107 | orchestrator | changed: [testbed-node-0] 2025-05-31 20:55:27.712118 | orchestrator | changed: [testbed-node-1] 2025-05-31 20:55:27.712129 | orchestrator | changed: [testbed-node-2] 2025-05-31 20:55:27.712140 | orchestrator | 2025-05-31 20:55:27.712150 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-31 20:55:27.712161 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-31 20:55:27.712173 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-31 20:55:27.712184 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-31 20:55:27.712195 | orchestrator | 2025-05-31 20:55:27.712206 | orchestrator | 2025-05-31 20:55:27.712216 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-31 20:55:27.712236 | orchestrator | Saturday 31 May 2025 20:55:26 +0000 (0:00:04.637) 0:00:28.230 ********** 2025-05-31 20:55:27.712250 | orchestrator | =============================================================================== 2025-05-31 20:55:27.712274 | orchestrator | redis : Restart redis container ----------------------------------------- 9.44s 2025-05-31 20:55:27.712300 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 4.64s 2025-05-31 20:55:27.712317 | orchestrator | redis : Copying over redis config files --------------------------------- 3.47s 2025-05-31 20:55:27.712363 | orchestrator | redis : Copying over default config.json files -------------------------- 3.39s 2025-05-31 20:55:27.712380 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.92s 2025-05-31 20:55:27.712397 | orchestrator | redis : Check redis containers ------------------------------------------ 1.81s 2025-05-31 20:55:27.712413 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.98s 2025-05-31 20:55:27.712429 | orchestrator | redis : include_tasks --------------------------------------------------- 0.93s 2025-05-31 20:55:27.712444 | orchestrator | redis : 
Flush handlers -------------------------------------------------- 0.60s 2025-05-31 20:55:27.712462 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.48s 2025-05-31 20:55:27.712480 | orchestrator | 2025-05-31 20:55:27 | INFO  | Task 548ac3b1-5047-417d-b1ba-517bdaea8bf1 is in state STARTED 2025-05-31 20:55:27.712497 | orchestrator | 2025-05-31 20:55:27 | INFO  | Wait 1 second(s) until the next check 2025-05-31 20:55:30.738418 | orchestrator | 2025-05-31 20:55:30 | INFO  | Task fd314325-f586-41be-8318-ba939bda8e1c is in state STARTED 2025-05-31 20:55:30.741702 | orchestrator | 2025-05-31 20:55:30 | INFO  | Task fae633a2-e69d-4ae6-90b8-348d3b4747ca is in state STARTED 2025-05-31 20:55:30.744050 | orchestrator | 2025-05-31 20:55:30 | INFO  | Task d5252281-8371-4ac0-9448-8bb41d7ec9f6 is in state STARTED 2025-05-31 20:55:30.744106 | orchestrator | 2025-05-31 20:55:30 | INFO  | Task a0bdcbf6-9613-4f1a-8dfe-6210c4f9a59a is in state STARTED 2025-05-31 20:55:30.744120 | orchestrator | 2025-05-31 20:55:30 | INFO  | Task 548ac3b1-5047-417d-b1ba-517bdaea8bf1 is in state STARTED 2025-05-31 20:55:30.744132 | orchestrator | 2025-05-31 20:55:30 | INFO  | Wait 1 second(s) until the next check 2025-05-31 20:55:55.141629 | orchestrator | 2025-05-31 20:55:55 | INFO  | Task fd314325-f586-41be-8318-ba939bda8e1c is in state STARTED 2025-05-31 20:55:55.143424 | orchestrator | 2025-05-31 20:55:55 | INFO  | Task fae633a2-e69d-4ae6-90b8-348d3b4747ca is in state STARTED 2025-05-31 20:55:55.143455 | orchestrator |
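
The INFO lines in this stretch are the deploy wrapper polling the state of the asynchronous OSISM tasks once per second until each one leaves STARTED. The same wait-until pattern can be sketched as an Ansible task; the task-state helper below is a hypothetical placeholder for whatever command reports the task state, not the actual OSISM client call:

    - name: Wait for a task to reach SUCCESS (sketch; helper is hypothetical)
      ansible.builtin.command: /usr/local/bin/task-state {{ task_id }}  # hypothetical helper that prints the state
      register: task_state
      until: task_state.stdout == "SUCCESS"
      retries: 600   # poll for up to ten minutes
      delay: 1       # matches the one-second wait in the log
      changed_when: false
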
20:55:55 | INFO  | Task d5252281-8371-4ac0-9448-8bb41d7ec9f6 is in state STARTED 2025-05-31 20:55:55.144278 | orchestrator | 2025-05-31 20:55:55 | INFO  | Task a0bdcbf6-9613-4f1a-8dfe-6210c4f9a59a is in state STARTED 2025-05-31 20:55:55.145522 | orchestrator | 2025-05-31 20:55:55 | INFO  | Task 548ac3b1-5047-417d-b1ba-517bdaea8bf1 is in state STARTED 2025-05-31 20:55:55.145562 | orchestrator | 2025-05-31 20:55:55 | INFO  | Wait 1 second(s) until the next check 2025-05-31 20:55:58.174381 | orchestrator | 2025-05-31 20:55:58 | INFO  | Task fd314325-f586-41be-8318-ba939bda8e1c is in state STARTED 2025-05-31 20:55:58.177035 | orchestrator | 2025-05-31 20:55:58 | INFO  | Task fae633a2-e69d-4ae6-90b8-348d3b4747ca is in state STARTED 2025-05-31 20:55:58.182538 | orchestrator | 2025-05-31 20:55:58 | INFO  | Task d5252281-8371-4ac0-9448-8bb41d7ec9f6 is in state STARTED 2025-05-31 20:55:58.182567 | orchestrator | 2025-05-31 20:55:58 | INFO  | Task a0bdcbf6-9613-4f1a-8dfe-6210c4f9a59a is in state STARTED 2025-05-31 20:55:58.184714 | orchestrator | 2025-05-31 20:55:58 | INFO  | Task 548ac3b1-5047-417d-b1ba-517bdaea8bf1 is in state STARTED 2025-05-31 20:55:58.185079 | orchestrator | 2025-05-31 20:55:58 | INFO  | Wait 1 second(s) until the next check 2025-05-31 20:56:01.223041 | orchestrator | 2025-05-31 20:56:01 | INFO  | Task fd314325-f586-41be-8318-ba939bda8e1c is in state STARTED 2025-05-31 20:56:01.224795 | orchestrator | 2025-05-31 20:56:01 | INFO  | Task fae633a2-e69d-4ae6-90b8-348d3b4747ca is in state STARTED 2025-05-31 20:56:01.226656 | orchestrator | 2025-05-31 20:56:01 | INFO  | Task d5252281-8371-4ac0-9448-8bb41d7ec9f6 is in state STARTED 2025-05-31 20:56:01.227645 | orchestrator | 2025-05-31 20:56:01 | INFO  | Task a0bdcbf6-9613-4f1a-8dfe-6210c4f9a59a is in state STARTED 2025-05-31 20:56:01.229419 | orchestrator | 2025-05-31 20:56:01 | INFO  | Task 548ac3b1-5047-417d-b1ba-517bdaea8bf1 is in state STARTED 2025-05-31 20:56:01.229456 | orchestrator | 2025-05-31 20:56:01 | INFO  | Wait 1 second(s) until the next check 2025-05-31 20:56:04.277201 | orchestrator | 2025-05-31 20:56:04 | INFO  | Task fd314325-f586-41be-8318-ba939bda8e1c is in state STARTED 2025-05-31 20:56:04.279493 | orchestrator | 2025-05-31 20:56:04 | INFO  | Task fae633a2-e69d-4ae6-90b8-348d3b4747ca is in state STARTED 2025-05-31 20:56:04.282095 | orchestrator | 2025-05-31 20:56:04 | INFO  | Task d5252281-8371-4ac0-9448-8bb41d7ec9f6 is in state SUCCESS 2025-05-31 20:56:04.285128 | orchestrator | 2025-05-31 20:56:04.285202 | orchestrator | 2025-05-31 20:56:04.285219 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-31 20:56:04.285232 | orchestrator | 2025-05-31 20:56:04.285243 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-31 20:56:04.285255 | orchestrator | Saturday 31 May 2025 20:54:58 +0000 (0:00:00.399) 0:00:00.399 ********** 2025-05-31 20:56:04.285266 | orchestrator | ok: [testbed-node-0] 2025-05-31 20:56:04.285278 | orchestrator | ok: [testbed-node-1] 2025-05-31 20:56:04.285289 | orchestrator | ok: [testbed-node-2] 2025-05-31 20:56:04.285299 | orchestrator | ok: [testbed-node-3] 2025-05-31 20:56:04.285337 | orchestrator | ok: [testbed-node-4] 2025-05-31 20:56:04.285355 | orchestrator | ok: [testbed-node-5] 2025-05-31 20:56:04.285374 | orchestrator | 2025-05-31 20:56:04.285392 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-31 20:56:04.285411 | 
orchestrator | Saturday 31 May 2025 20:54:59 +0000 (0:00:01.306) 0:00:01.706 ********** 2025-05-31 20:56:04.285429 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-05-31 20:56:04.285447 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-05-31 20:56:04.285464 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-05-31 20:56:04.285482 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-05-31 20:56:04.285501 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-05-31 20:56:04.285531 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-05-31 20:56:04.285543 | orchestrator | 2025-05-31 20:56:04.285554 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2025-05-31 20:56:04.285565 | orchestrator | 2025-05-31 20:56:04.285576 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2025-05-31 20:56:04.285587 | orchestrator | Saturday 31 May 2025 20:55:00 +0000 (0:00:00.821) 0:00:02.527 ********** 2025-05-31 20:56:04.285599 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-31 20:56:04.285612 | orchestrator | 2025-05-31 20:56:04.285623 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-05-31 20:56:04.285635 | orchestrator | Saturday 31 May 2025 20:55:02 +0000 (0:00:02.134) 0:00:04.661 ********** 2025-05-31 20:56:04.285647 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-05-31 20:56:04.285658 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-05-31 20:56:04.285669 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-05-31 20:56:04.285680 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-05-31 20:56:04.285733 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-05-31 20:56:04.285752 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-05-31 20:56:04.285770 | orchestrator | 2025-05-31 20:56:04.285787 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-05-31 20:56:04.285803 | orchestrator | Saturday 31 May 2025 20:55:04 +0000 (0:00:01.880) 0:00:06.542 ********** 2025-05-31 20:56:04.285814 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-05-31 20:56:04.285824 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-05-31 20:56:04.285835 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-05-31 20:56:04.285846 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-05-31 20:56:04.285856 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-05-31 20:56:04.285867 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-05-31 20:56:04.285877 | orchestrator | 2025-05-31 20:56:04.285888 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-05-31 20:56:04.285899 | orchestrator | Saturday 31 May 2025 20:55:06 +0000 (0:00:01.904) 0:00:08.447 ********** 2025-05-31 20:56:04.285910 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2025-05-31 20:56:04.285921 | 
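
Before the Open vSwitch containers are started, the module-load tasks above load the openvswitch kernel module and persist it across reboots via modules-load.d. In outline (a sketch of the pattern, not the verbatim role source):

    - name: Load modules
      community.general.modprobe:
        name: openvswitch
        state: present

    - name: Persist modules via modules-load.d
      ansible.builtin.copy:
        dest: /etc/modules-load.d/openvswitch.conf
        content: "openvswitch\n"
        mode: "0644"

The "Drop module persistence" task is the inverse cleanup path; it is skipped on all six nodes here because persistence is wanted.
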
orchestrator | skipping: [testbed-node-0] 2025-05-31 20:56:04.285932 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2025-05-31 20:56:04.285943 | orchestrator | skipping: [testbed-node-1] 2025-05-31 20:56:04.285953 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2025-05-31 20:56:04.285964 | orchestrator | skipping: [testbed-node-2] 2025-05-31 20:56:04.285974 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2025-05-31 20:56:04.285985 | orchestrator | skipping: [testbed-node-3] 2025-05-31 20:56:04.286005 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2025-05-31 20:56:04.286082 | orchestrator | skipping: [testbed-node-4] 2025-05-31 20:56:04.286097 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2025-05-31 20:56:04.286108 | orchestrator | skipping: [testbed-node-5] 2025-05-31 20:56:04.286119 | orchestrator | 2025-05-31 20:56:04.286130 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2025-05-31 20:56:04.286140 | orchestrator | Saturday 31 May 2025 20:55:07 +0000 (0:00:01.767) 0:00:10.215 ********** 2025-05-31 20:56:04.286151 | orchestrator | skipping: [testbed-node-0] 2025-05-31 20:56:04.286162 | orchestrator | skipping: [testbed-node-1] 2025-05-31 20:56:04.286172 | orchestrator | skipping: [testbed-node-2] 2025-05-31 20:56:04.286183 | orchestrator | skipping: [testbed-node-3] 2025-05-31 20:56:04.286194 | orchestrator | skipping: [testbed-node-4] 2025-05-31 20:56:04.286204 | orchestrator | skipping: [testbed-node-5] 2025-05-31 20:56:04.286215 | orchestrator | 2025-05-31 20:56:04.286225 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2025-05-31 20:56:04.286236 | orchestrator | Saturday 31 May 2025 20:55:08 +0000 (0:00:00.798) 0:00:11.013 ********** 2025-05-31 20:56:04.286273 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-31 20:56:04.286294 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-31 20:56:04.286306 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-31 20:56:04.286318 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-31 20:56:04.286337 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-31 20:56:04.286348 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-31 20:56:04.286368 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-31 20:56:04.286391 | orchestrator | changed: 
[testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-31 20:56:04.286410 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-31 20:56:04.286431 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-31 20:56:04.286461 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-31 20:56:04.286491 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-31 20:56:04.286511 | orchestrator | 2025-05-31 20:56:04.286530 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2025-05-31 20:56:04.286550 | orchestrator | Saturday 31 May 2025 20:55:10 +0000 (0:00:02.226) 0:00:13.239 ********** 2025-05-31 20:56:04.286570 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-31 20:56:04.286589 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-31 20:56:04.286600 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-31 20:56:04.286619 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-31 20:56:04.286630 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 
'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-31 20:56:04.286663 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-31 20:56:04.286675 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-31 20:56:04.286724 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-31 20:56:04.286737 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-31 20:56:04.286755 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 
'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-31 20:56:04.286766 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-31 20:56:04.286785 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-31 20:56:04.286797 | orchestrator | 2025-05-31 20:56:04.286808 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2025-05-31 20:56:04.286819 | orchestrator | Saturday 31 May 2025 20:55:15 +0000 (0:00:04.731) 0:00:17.971 ********** 2025-05-31 20:56:04.286830 | orchestrator | skipping: [testbed-node-0] 2025-05-31 20:56:04.286841 | orchestrator | skipping: [testbed-node-1] 2025-05-31 20:56:04.286852 | orchestrator | skipping: [testbed-node-2] 2025-05-31 20:56:04.286863 | orchestrator | skipping: [testbed-node-3] 2025-05-31 20:56:04.286873 | orchestrator | skipping: [testbed-node-4] 2025-05-31 20:56:04.286884 | orchestrator | skipping: [testbed-node-5] 2025-05-31 20:56:04.286895 | orchestrator | 2025-05-31 20:56:04.286905 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2025-05-31 20:56:04.286916 | orchestrator | Saturday 31 May 2025 20:55:17 +0000 (0:00:01.541) 0:00:19.513 ********** 2025-05-31 20:56:04.286932 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-31 20:56:04.286950 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-31 20:56:04.286962 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-31 20:56:04.286973 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-31 20:56:04.286991 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-31 20:56:04.287003 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 
'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-31 20:56:04.287014 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-31 20:56:04.287032 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-31 20:56:04.287043 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-31 20:56:04.287066 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-31 20:56:04.287085 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 
'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-31 20:56:04.287102 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-31 20:56:04.287119 | orchestrator | 2025-05-31 20:56:04.287130 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-31 20:56:04.287141 | orchestrator | Saturday 31 May 2025 20:55:19 +0000 (0:00:02.650) 0:00:22.163 ********** 2025-05-31 20:56:04.287152 | orchestrator | 2025-05-31 20:56:04.287163 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-31 20:56:04.287173 | orchestrator | Saturday 31 May 2025 20:55:19 +0000 (0:00:00.142) 0:00:22.305 ********** 2025-05-31 20:56:04.287184 | orchestrator | 2025-05-31 20:56:04.287195 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-31 20:56:04.287205 | orchestrator | Saturday 31 May 2025 20:55:20 +0000 (0:00:00.200) 0:00:22.506 ********** 2025-05-31 20:56:04.287216 | orchestrator | 2025-05-31 20:56:04.287227 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-31 20:56:04.287237 | orchestrator | Saturday 31 May 2025 20:55:20 +0000 (0:00:00.145) 0:00:22.651 ********** 2025-05-31 20:56:04.287248 | orchestrator | 2025-05-31 20:56:04.287258 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-31 20:56:04.287269 | orchestrator | Saturday 31 May 2025 20:55:20 +0000 (0:00:00.306) 0:00:22.958 ********** 2025-05-31 20:56:04.287280 | orchestrator | 2025-05-31 20:56:04.287290 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-31 20:56:04.287301 | orchestrator | Saturday 31 May 2025 20:55:20 +0000 (0:00:00.249) 0:00:23.207 ********** 2025-05-31 20:56:04.287311 | orchestrator | 2025-05-31 20:56:04.287322 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2025-05-31 20:56:04.287333 | orchestrator | Saturday 31 May 2025 20:55:21 +0000 (0:00:00.791) 0:00:23.999 ********** 2025-05-31 20:56:04.287343 | orchestrator | changed: [testbed-node-0] 2025-05-31 20:56:04.287354 | orchestrator | changed: [testbed-node-3] 2025-05-31 20:56:04.287364 | orchestrator | changed: [testbed-node-2] 2025-05-31 20:56:04.287375 | orchestrator | changed: 
[testbed-node-4] 2025-05-31 20:56:04.287386 | orchestrator | changed: [testbed-node-5] 2025-05-31 20:56:04.287396 | orchestrator | changed: [testbed-node-1] 2025-05-31 20:56:04.287407 | orchestrator | 2025-05-31 20:56:04.287417 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2025-05-31 20:56:04.287428 | orchestrator | Saturday 31 May 2025 20:55:28 +0000 (0:00:06.773) 0:00:30.772 ********** 2025-05-31 20:56:04.287439 | orchestrator | ok: [testbed-node-0] 2025-05-31 20:56:04.287450 | orchestrator | ok: [testbed-node-1] 2025-05-31 20:56:04.287460 | orchestrator | ok: [testbed-node-2] 2025-05-31 20:56:04.287471 | orchestrator | ok: [testbed-node-3] 2025-05-31 20:56:04.287481 | orchestrator | ok: [testbed-node-4] 2025-05-31 20:56:04.287492 | orchestrator | ok: [testbed-node-5] 2025-05-31 20:56:04.287502 | orchestrator | 2025-05-31 20:56:04.287513 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-05-31 20:56:04.287524 | orchestrator | Saturday 31 May 2025 20:55:30 +0000 (0:00:01.713) 0:00:32.486 ********** 2025-05-31 20:56:04.287534 | orchestrator | changed: [testbed-node-1] 2025-05-31 20:56:04.287545 | orchestrator | changed: [testbed-node-5] 2025-05-31 20:56:04.287556 | orchestrator | changed: [testbed-node-0] 2025-05-31 20:56:04.287566 | orchestrator | changed: [testbed-node-4] 2025-05-31 20:56:04.287577 | orchestrator | changed: [testbed-node-3] 2025-05-31 20:56:04.287588 | orchestrator | changed: [testbed-node-2] 2025-05-31 20:56:04.287598 | orchestrator | 2025-05-31 20:56:04.287609 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2025-05-31 20:56:04.287619 | orchestrator | Saturday 31 May 2025 20:55:40 +0000 (0:00:09.902) 0:00:42.388 ********** 2025-05-31 20:56:04.287630 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2025-05-31 20:56:04.287641 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2025-05-31 20:56:04.287652 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2025-05-31 20:56:04.287669 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2025-05-31 20:56:04.287680 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2025-05-31 20:56:04.287752 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2025-05-31 20:56:04.287764 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2025-05-31 20:56:04.287774 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2025-05-31 20:56:04.287785 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2025-05-31 20:56:04.287796 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2025-05-31 20:56:04.287807 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2025-05-31 20:56:04.287816 | orchestrator | changed: 
[testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2025-05-31 20:56:04.287826 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-31 20:56:04.287835 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-31 20:56:04.287849 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-31 20:56:04.287859 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-31 20:56:04.287868 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-31 20:56:04.287877 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-31 20:56:04.287887 | orchestrator | 2025-05-31 20:56:04.287896 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2025-05-31 20:56:04.287906 | orchestrator | Saturday 31 May 2025 20:55:47 +0000 (0:00:07.710) 0:00:50.099 ********** 2025-05-31 20:56:04.287916 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2025-05-31 20:56:04.287925 | orchestrator | skipping: [testbed-node-3] 2025-05-31 20:56:04.287935 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2025-05-31 20:56:04.287944 | orchestrator | skipping: [testbed-node-4] 2025-05-31 20:56:04.287953 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2025-05-31 20:56:04.287963 | orchestrator | skipping: [testbed-node-5] 2025-05-31 20:56:04.287972 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2025-05-31 20:56:04.287982 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2025-05-31 20:56:04.287992 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2025-05-31 20:56:04.288001 | orchestrator | 2025-05-31 20:56:04.288011 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2025-05-31 20:56:04.288020 | orchestrator | Saturday 31 May 2025 20:55:50 +0000 (0:00:02.318) 0:00:52.417 ********** 2025-05-31 20:56:04.288030 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2025-05-31 20:56:04.288039 | orchestrator | skipping: [testbed-node-3] 2025-05-31 20:56:04.288048 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2025-05-31 20:56:04.288058 | orchestrator | skipping: [testbed-node-4] 2025-05-31 20:56:04.288067 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2025-05-31 20:56:04.288077 | orchestrator | skipping: [testbed-node-5] 2025-05-31 20:56:04.288086 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2025-05-31 20:56:04.288095 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2025-05-31 20:56:04.288112 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2025-05-31 20:56:04.288122 | orchestrator | 2025-05-31 20:56:04.288131 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-05-31 20:56:04.288140 | orchestrator | Saturday 31 May 2025 20:55:53 +0000 (0:00:03.710) 0:00:56.128 ********** 2025-05-31 20:56:04.288150 | orchestrator | changed: [testbed-node-0] 2025-05-31 20:56:04.288159 | orchestrator | 
changed: [testbed-node-1] 2025-05-31 20:56:04.288168 | orchestrator | changed: [testbed-node-3] 2025-05-31 20:56:04.288178 | orchestrator | changed: [testbed-node-2] 2025-05-31 20:56:04.288187 | orchestrator | changed: [testbed-node-4] 2025-05-31 20:56:04.288197 | orchestrator | changed: [testbed-node-5] 2025-05-31 20:56:04.288206 | orchestrator | 2025-05-31 20:56:04.288215 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-31 20:56:04.288225 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-31 20:56:04.288235 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-31 20:56:04.288245 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-31 20:56:04.288255 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-31 20:56:04.288264 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-31 20:56:04.288279 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-31 20:56:04.288289 | orchestrator | 2025-05-31 20:56:04.288299 | orchestrator | 2025-05-31 20:56:04.288308 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-31 20:56:04.288318 | orchestrator | Saturday 31 May 2025 20:56:02 +0000 (0:00:08.424) 0:01:04.553 ********** 2025-05-31 20:56:04.288328 | orchestrator | =============================================================================== 2025-05-31 20:56:04.288337 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 18.32s 2025-05-31 20:56:04.288346 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.71s 2025-05-31 20:56:04.288356 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 6.77s 2025-05-31 20:56:04.288365 | orchestrator | openvswitch : Copying over config.json files for services --------------- 4.73s 2025-05-31 20:56:04.288374 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.71s 2025-05-31 20:56:04.288383 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 2.65s 2025-05-31 20:56:04.288393 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.32s 2025-05-31 20:56:04.288402 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 2.23s 2025-05-31 20:56:04.288415 | orchestrator | openvswitch : include_tasks --------------------------------------------- 2.13s 2025-05-31 20:56:04.288425 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.90s 2025-05-31 20:56:04.288434 | orchestrator | module-load : Load modules ---------------------------------------------- 1.88s 2025-05-31 20:56:04.288444 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.84s 2025-05-31 20:56:04.288453 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.77s 2025-05-31 20:56:04.288462 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.72s 2025-05-31 20:56:04.288471 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper 
---------------------------- 1.54s
2025-05-31 20:56:04.288488 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.31s
2025-05-31 20:56:04.288497 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.82s
2025-05-31 20:56:04.288507 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.80s
2025-05-31 20:56:04.288516 | orchestrator | 2025-05-31 20:56:04 | INFO  | Task b964d5a7-5bef-4a3e-97e3-ff548ef05d1f is in state STARTED
2025-05-31 20:56:04.288526 | orchestrator | 2025-05-31 20:56:04 | INFO  | Task a0bdcbf6-9613-4f1a-8dfe-6210c4f9a59a is in state STARTED
2025-05-31 20:56:04.288535 | orchestrator | 2025-05-31 20:56:04 | INFO  | Task 548ac3b1-5047-417d-b1ba-517bdaea8bf1 is in state STARTED
2025-05-31 20:56:04.288545 | orchestrator | 2025-05-31 20:56:04 | INFO  | Wait 1 second(s) until the next check
2025-05-31 20:56:07.334646 | orchestrator | 2025-05-31 20:56:07 | INFO  | Task fd314325-f586-41be-8318-ba939bda8e1c is in state STARTED
2025-05-31 20:56:07.334903 | orchestrator | 2025-05-31 20:56:07 | INFO  | Task fae633a2-e69d-4ae6-90b8-348d3b4747ca is in state STARTED
2025-05-31 20:56:07.334928 | orchestrator | 2025-05-31 20:56:07 | INFO  | Task b964d5a7-5bef-4a3e-97e3-ff548ef05d1f is in state STARTED
2025-05-31 20:56:07.334941 | orchestrator | 2025-05-31 20:56:07 | INFO  | Task a0bdcbf6-9613-4f1a-8dfe-6210c4f9a59a is in state STARTED
2025-05-31 20:56:07.335046 | orchestrator | 2025-05-31 20:56:07 | INFO  | Task 548ac3b1-5047-417d-b1ba-517bdaea8bf1 is in state STARTED
2025-05-31 20:56:07.335062 | orchestrator | 2025-05-31 20:56:07 | INFO  | Wait 1 second(s) until the next check
2025-05-31 20:56:40.844057 | orchestrator | 2025-05-31 20:56:40 | INFO  | Task fd314325-f586-41be-8318-ba939bda8e1c is in state STARTED
2025-05-31 20:56:40.844237 | orchestrator | 2025-05-31 20:56:40 | INFO  | Task fae633a2-e69d-4ae6-90b8-348d3b4747ca is in state STARTED
2025-05-31 20:56:40.846171 | orchestrator | 2025-05-31 20:56:40 | INFO  | Task b964d5a7-5bef-4a3e-97e3-ff548ef05d1f is in state STARTED
2025-05-31 20:56:40.847571 | orchestrator | 2025-05-31 20:56:40 | INFO  | Task a0bdcbf6-9613-4f1a-8dfe-6210c4f9a59a is in state STARTED
2025-05-31 20:56:40.848256 | orchestrator | 2025-05-31 20:56:40 | INFO  | Task 8f5deb58-d4bf-4622-9d9c-940b52794a00 is in state STARTED
2025-05-31 20:56:40.851110 | orchestrator | 2025-05-31 20:56:40 | INFO  | Task 548ac3b1-5047-417d-b1ba-517bdaea8bf1 is in state SUCCESS
2025-05-31 20:56:40.852490 | orchestrator |
2025-05-31 20:56:40.852520 | orchestrator |
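The openvswitch play above converges each node on two kolla containers, openvswitch_db and openvswitch_vswitchd, whose container healthchecks run the probes visible in the task items (ovsdb-client list-dbs and ovs-appctl version, 30-second interval, 3 retries), writes system-id and hostname into the Open_vSwitch table's external_ids, and creates the br-ex bridge with a vxlan0 port on testbed-node-0..2 only. A minimal sketch of checking the converged state by hand, assuming a Docker runtime and the container names taken from the log:

    # run the same probes the kolla healthchecks use
    docker exec openvswitch_db ovsdb-client list-dbs
    docker exec openvswitch_vswitchd ovs-appctl version
    # identity keys written by "Set system-id, hostname and hw-offload"
    docker exec openvswitch_vswitchd ovs-vsctl get Open_vSwitch . external_ids:system-id
    # br-ex with port vxlan0 should appear on the network nodes only
    docker exec openvswitch_vswitchd ovs-vsctl show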
2025-05-31 20:56:40.852532 | orchestrator | PLAY [Prepare all k3s nodes] ***************************************************
2025-05-31 20:56:40.852544 | orchestrator |
2025-05-31 20:56:40.852556 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] ***
2025-05-31 20:56:40.852568 | orchestrator | Saturday 31 May 2025 20:52:21 +0000 (0:00:00.239) 0:00:00.239 **********
2025-05-31 20:56:40.852579 | orchestrator | ok: [testbed-node-3]
2025-05-31 20:56:40.852591 | orchestrator | ok: [testbed-node-4]
2025-05-31 20:56:40.852602 | orchestrator | ok: [testbed-node-5]
2025-05-31 20:56:40.852613 | orchestrator | ok: [testbed-node-0]
2025-05-31 20:56:40.852624 | orchestrator | ok: [testbed-node-1]
2025-05-31 20:56:40.852634 | orchestrator | ok: [testbed-node-2]
2025-05-31 20:56:40.852645 | orchestrator |
2025-05-31 20:56:40.852656 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] **************************
2025-05-31 20:56:40.852692 | orchestrator | Saturday 31 May 2025 20:52:22 +0000 (0:00:00.908) 0:00:01.147 **********
2025-05-31 20:56:40.852703 | orchestrator | skipping: [testbed-node-3]
2025-05-31 20:56:40.852714 | orchestrator | skipping: [testbed-node-4]
2025-05-31 20:56:40.852725 | orchestrator | skipping: [testbed-node-5]
2025-05-31 20:56:40.852736 | orchestrator | skipping: [testbed-node-0]
2025-05-31 20:56:40.852813 | orchestrator | skipping: [testbed-node-1]
2025-05-31 20:56:40.852825 | orchestrator | skipping: [testbed-node-2]
2025-05-31 20:56:40.852836 | orchestrator |
2025-05-31 20:56:40.852847 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ******************************
2025-05-31 20:56:40.852859 | orchestrator | Saturday 31 May 2025 20:52:23 +0000 (0:00:00.858) 0:00:02.006 **********
2025-05-31 20:56:40.852870 | orchestrator | skipping: [testbed-node-3]
2025-05-31 20:56:40.852880 | orchestrator | skipping: [testbed-node-4]
2025-05-31 20:56:40.852891 | orchestrator | skipping: [testbed-node-5]
2025-05-31 20:56:40.852902 | orchestrator | skipping: [testbed-node-0]
2025-05-31 20:56:40.852913 | orchestrator | skipping: [testbed-node-1]
2025-05-31 20:56:40.852923 | orchestrator | skipping: [testbed-node-2]
2025-05-31 20:56:40.852934 | orchestrator |
2025-05-31 20:56:40.852981 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] *************************************
2025-05-31 20:56:40.852994 | orchestrator | Saturday 31 May 2025 20:52:24 +0000 (0:00:00.916) 0:00:02.923 **********
2025-05-31 20:56:40.853050 | orchestrator | changed: [testbed-node-3]
2025-05-31 20:56:40.853064 | orchestrator | changed: [testbed-node-5]
2025-05-31 20:56:40.853074 | orchestrator | changed: [testbed-node-4]
2025-05-31 20:56:40.853086 | orchestrator | changed: [testbed-node-1]
2025-05-31 20:56:40.853097 | orchestrator | changed: [testbed-node-2]
2025-05-31 20:56:40.853107 | orchestrator | changed: [testbed-node-0]
2025-05-31 20:56:40.853118 | orchestrator |
2025-05-31 20:56:40.853129 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] *************************************
2025-05-31 20:56:40.853140 | orchestrator | Saturday 31 May 2025 20:52:27 +0000 (0:00:02.681) 0:00:05.604 **********
2025-05-31 20:56:40.853150 | orchestrator | changed: [testbed-node-3]
2025-05-31 20:56:40.853161 | orchestrator | changed: [testbed-node-4]
2025-05-31 20:56:40.853171 | orchestrator | changed: [testbed-node-5]
2025-05-31 20:56:40.853181 | orchestrator | changed: [testbed-node-0]
2025-05-31 20:56:40.853192 | orchestrator | changed: [testbed-node-1]
2025-05-31 20:56:40.853202 | orchestrator | changed: [testbed-node-2]
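The forwarding prerequisites above are plain sysctl toggles. A hand-run equivalent, assuming the standard kernel keys implied by the task titles (the persistence file path is illustrative, not taken from the role):

    # what the two "Enable ... forwarding" tasks converge on
    sysctl -w net.ipv4.ip_forward=1
    sysctl -w net.ipv6.conf.all.forwarding=1
    # persist across reboots
    printf 'net.ipv4.ip_forward = 1\nnet.ipv6.conf.all.forwarding = 1\n' \
        > /etc/sysctl.d/99-k3s.conf
    sysctl --system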
2025-05-31 20:56:40.853213 | orchestrator |
2025-05-31 20:56:40.853223 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] **************************
2025-05-31 20:56:40.853234 | orchestrator | Saturday 31 May 2025 20:52:28 +0000 (0:00:01.111) 0:00:06.715 **********
2025-05-31 20:56:40.853245 | orchestrator | changed: [testbed-node-3]
2025-05-31 20:56:40.853255 | orchestrator | changed: [testbed-node-4]
2025-05-31 20:56:40.853265 | orchestrator | changed: [testbed-node-5]
2025-05-31 20:56:40.853276 | orchestrator | changed: [testbed-node-0]
2025-05-31 20:56:40.853286 | orchestrator | changed: [testbed-node-1]
2025-05-31 20:56:40.853297 | orchestrator | changed: [testbed-node-2]
2025-05-31 20:56:40.853307 | orchestrator |
2025-05-31 20:56:40.853318 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] *******************
2025-05-31 20:56:40.853328 | orchestrator | Saturday 31 May 2025 20:52:29 +0000 (0:00:01.154) 0:00:07.870 **********
2025-05-31 20:56:40.853339 | orchestrator | skipping: [testbed-node-3]
2025-05-31 20:56:40.853350 | orchestrator | skipping: [testbed-node-4]
2025-05-31 20:56:40.853360 | orchestrator | skipping: [testbed-node-5]
2025-05-31 20:56:40.853385 | orchestrator | skipping: [testbed-node-0]
2025-05-31 20:56:40.853395 | orchestrator | skipping: [testbed-node-1]
2025-05-31 20:56:40.853406 | orchestrator | skipping: [testbed-node-2]
2025-05-31 20:56:40.853416 | orchestrator |
2025-05-31 20:56:40.853427 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ******************************************
2025-05-31 20:56:40.853438 | orchestrator | Saturday 31 May 2025 20:52:30 +0000 (0:00:00.860) 0:00:08.731 **********
2025-05-31 20:56:40.853448 | orchestrator | skipping: [testbed-node-3]
2025-05-31 20:56:40.853459 | orchestrator | skipping: [testbed-node-4]
2025-05-31 20:56:40.853478 | orchestrator | skipping: [testbed-node-5]
2025-05-31 20:56:40.853489 | orchestrator | skipping: [testbed-node-0]
2025-05-31 20:56:40.853499 | orchestrator | skipping: [testbed-node-1]
2025-05-31 20:56:40.853510 | orchestrator | skipping: [testbed-node-2]
2025-05-31 20:56:40.853520 | orchestrator |
2025-05-31 20:56:40.853531 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] **************
2025-05-31 20:56:40.853541 | orchestrator | Saturday 31 May 2025 20:52:31 +0000 (0:00:01.024) 0:00:09.471 **********
2025-05-31 20:56:40.853552 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2025-05-31 20:56:40.853562 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-05-31 20:56:40.853573 | orchestrator | skipping: [testbed-node-3]
2025-05-31 20:56:40.853584 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2025-05-31 20:56:40.853594 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-05-31 20:56:40.853605 | orchestrator | skipping: [testbed-node-4]
2025-05-31 20:56:40.853615 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2025-05-31 20:56:40.853626 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-05-31 20:56:40.853637 | orchestrator | skipping: [testbed-node-5]
2025-05-31 20:56:40.853647 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2025-05-31 20:56:40.853672 | orchestrator | skipping: [testbed-node-0] =>
(item=net.bridge.bridge-nf-call-ip6tables)  2025-05-31 20:56:40.853683 | orchestrator | skipping: [testbed-node-0] 2025-05-31 20:56:40.853694 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-05-31 20:56:40.853704 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-05-31 20:56:40.853715 | orchestrator | skipping: [testbed-node-1] 2025-05-31 20:56:40.853726 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-05-31 20:56:40.853736 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-05-31 20:56:40.853765 | orchestrator | skipping: [testbed-node-2] 2025-05-31 20:56:40.853776 | orchestrator | 2025-05-31 20:56:40.853787 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2025-05-31 20:56:40.853798 | orchestrator | Saturday 31 May 2025 20:52:32 +0000 (0:00:01.024) 0:00:10.495 ********** 2025-05-31 20:56:40.853808 | orchestrator | skipping: [testbed-node-3] 2025-05-31 20:56:40.853819 | orchestrator | skipping: [testbed-node-4] 2025-05-31 20:56:40.853830 | orchestrator | skipping: [testbed-node-5] 2025-05-31 20:56:40.853840 | orchestrator | skipping: [testbed-node-0] 2025-05-31 20:56:40.853851 | orchestrator | skipping: [testbed-node-1] 2025-05-31 20:56:40.853862 | orchestrator | skipping: [testbed-node-2] 2025-05-31 20:56:40.853872 | orchestrator | 2025-05-31 20:56:40.853883 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2025-05-31 20:56:40.853894 | orchestrator | Saturday 31 May 2025 20:52:33 +0000 (0:00:01.319) 0:00:11.815 ********** 2025-05-31 20:56:40.853905 | orchestrator | ok: [testbed-node-3] 2025-05-31 20:56:40.853916 | orchestrator | ok: [testbed-node-4] 2025-05-31 20:56:40.853926 | orchestrator | ok: [testbed-node-5] 2025-05-31 20:56:40.853936 | orchestrator | ok: [testbed-node-0] 2025-05-31 20:56:40.853947 | orchestrator | ok: [testbed-node-1] 2025-05-31 20:56:40.853957 | orchestrator | ok: [testbed-node-2] 2025-05-31 20:56:40.853968 | orchestrator | 2025-05-31 20:56:40.853978 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2025-05-31 20:56:40.853989 | orchestrator | Saturday 31 May 2025 20:52:33 +0000 (0:00:00.551) 0:00:12.367 ********** 2025-05-31 20:56:40.854000 | orchestrator | changed: [testbed-node-5] 2025-05-31 20:56:40.854010 | orchestrator | changed: [testbed-node-2] 2025-05-31 20:56:40.854067 | orchestrator | changed: [testbed-node-1] 2025-05-31 20:56:40.854079 | orchestrator | changed: [testbed-node-3] 2025-05-31 20:56:40.854097 | orchestrator | changed: [testbed-node-0] 2025-05-31 20:56:40.854107 | orchestrator | changed: [testbed-node-4] 2025-05-31 20:56:40.854118 | orchestrator | 2025-05-31 20:56:40.854129 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2025-05-31 20:56:40.854140 | orchestrator | Saturday 31 May 2025 20:52:40 +0000 (0:00:06.545) 0:00:18.913 ********** 2025-05-31 20:56:40.854150 | orchestrator | skipping: [testbed-node-3] 2025-05-31 20:56:40.854161 | orchestrator | skipping: [testbed-node-4] 2025-05-31 20:56:40.854172 | orchestrator | skipping: [testbed-node-5] 2025-05-31 20:56:40.854182 | orchestrator | skipping: [testbed-node-0] 2025-05-31 20:56:40.854193 | orchestrator | skipping: [testbed-node-1] 2025-05-31 20:56:40.854204 | orchestrator | skipping: 
[testbed-node-2] 2025-05-31 20:56:40.854214 | orchestrator | 2025-05-31 20:56:40.854225 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2025-05-31 20:56:40.854236 | orchestrator | Saturday 31 May 2025 20:52:41 +0000 (0:00:01.080) 0:00:19.993 ********** 2025-05-31 20:56:40.854246 | orchestrator | skipping: [testbed-node-3] 2025-05-31 20:56:40.854257 | orchestrator | skipping: [testbed-node-4] 2025-05-31 20:56:40.854268 | orchestrator | skipping: [testbed-node-5] 2025-05-31 20:56:40.854279 | orchestrator | skipping: [testbed-node-0] 2025-05-31 20:56:40.854292 | orchestrator | skipping: [testbed-node-1] 2025-05-31 20:56:40.854311 | orchestrator | skipping: [testbed-node-2] 2025-05-31 20:56:40.854332 | orchestrator | 2025-05-31 20:56:40.854352 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2025-05-31 20:56:40.854372 | orchestrator | Saturday 31 May 2025 20:52:43 +0000 (0:00:01.983) 0:00:21.977 ********** 2025-05-31 20:56:40.854390 | orchestrator | skipping: [testbed-node-3] 2025-05-31 20:56:40.854415 | orchestrator | skipping: [testbed-node-4] 2025-05-31 20:56:40.854434 | orchestrator | skipping: [testbed-node-5] 2025-05-31 20:56:40.854455 | orchestrator | skipping: [testbed-node-0] 2025-05-31 20:56:40.854471 | orchestrator | skipping: [testbed-node-1] 2025-05-31 20:56:40.854482 | orchestrator | skipping: [testbed-node-2] 2025-05-31 20:56:40.854492 | orchestrator | 2025-05-31 20:56:40.854503 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2025-05-31 20:56:40.854514 | orchestrator | Saturday 31 May 2025 20:52:44 +0000 (0:00:01.010) 0:00:22.988 ********** 2025-05-31 20:56:40.854525 | orchestrator | skipping: [testbed-node-3] => (item=rancher)  2025-05-31 20:56:40.854536 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)  2025-05-31 20:56:40.854546 | orchestrator | skipping: [testbed-node-3] 2025-05-31 20:56:40.854557 | orchestrator | skipping: [testbed-node-4] => (item=rancher)  2025-05-31 20:56:40.854567 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)  2025-05-31 20:56:40.854578 | orchestrator | skipping: [testbed-node-4] 2025-05-31 20:56:40.854589 | orchestrator | skipping: [testbed-node-5] => (item=rancher)  2025-05-31 20:56:40.854600 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)  2025-05-31 20:56:40.854610 | orchestrator | skipping: [testbed-node-5] 2025-05-31 20:56:40.854621 | orchestrator | skipping: [testbed-node-0] => (item=rancher)  2025-05-31 20:56:40.854631 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)  2025-05-31 20:56:40.854642 | orchestrator | skipping: [testbed-node-0] 2025-05-31 20:56:40.854653 | orchestrator | skipping: [testbed-node-1] => (item=rancher)  2025-05-31 20:56:40.854664 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)  2025-05-31 20:56:40.854674 | orchestrator | skipping: [testbed-node-1] 2025-05-31 20:56:40.854685 | orchestrator | skipping: [testbed-node-2] => (item=rancher)  2025-05-31 20:56:40.854695 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)  2025-05-31 20:56:40.854706 | orchestrator | skipping: [testbed-node-2] 2025-05-31 20:56:40.854716 | orchestrator | 2025-05-31 20:56:40.854727 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2025-05-31 20:56:40.854769 | orchestrator | Saturday 31 May 2025 
20:52:45 +0000 (0:00:01.071) 0:00:24.059 ********** 2025-05-31 20:56:40.854790 | orchestrator | skipping: [testbed-node-3] 2025-05-31 20:56:40.854800 | orchestrator | skipping: [testbed-node-4] 2025-05-31 20:56:40.854811 | orchestrator | skipping: [testbed-node-5] 2025-05-31 20:56:40.854821 | orchestrator | skipping: [testbed-node-0] 2025-05-31 20:56:40.854832 | orchestrator | skipping: [testbed-node-1] 2025-05-31 20:56:40.854842 | orchestrator | skipping: [testbed-node-2] 2025-05-31 20:56:40.854853 | orchestrator | 2025-05-31 20:56:40.854863 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2025-05-31 20:56:40.854874 | orchestrator | 2025-05-31 20:56:40.854908 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2025-05-31 20:56:40.854920 | orchestrator | Saturday 31 May 2025 20:52:46 +0000 (0:00:01.234) 0:00:25.294 ********** 2025-05-31 20:56:40.854930 | orchestrator | ok: [testbed-node-0] 2025-05-31 20:56:40.854941 | orchestrator | ok: [testbed-node-2] 2025-05-31 20:56:40.854951 | orchestrator | ok: [testbed-node-1] 2025-05-31 20:56:40.854962 | orchestrator | 2025-05-31 20:56:40.854972 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2025-05-31 20:56:40.854983 | orchestrator | Saturday 31 May 2025 20:52:48 +0000 (0:00:01.346) 0:00:26.640 ********** 2025-05-31 20:56:40.854994 | orchestrator | ok: [testbed-node-2] 2025-05-31 20:56:40.855004 | orchestrator | ok: [testbed-node-0] 2025-05-31 20:56:40.855015 | orchestrator | ok: [testbed-node-1] 2025-05-31 20:56:40.855025 | orchestrator | 2025-05-31 20:56:40.855036 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2025-05-31 20:56:40.855046 | orchestrator | Saturday 31 May 2025 20:52:49 +0000 (0:00:01.074) 0:00:27.715 ********** 2025-05-31 20:56:40.855057 | orchestrator | ok: [testbed-node-0] 2025-05-31 20:56:40.855067 | orchestrator | ok: [testbed-node-1] 2025-05-31 20:56:40.855078 | orchestrator | ok: [testbed-node-2] 2025-05-31 20:56:40.855088 | orchestrator | 2025-05-31 20:56:40.855099 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2025-05-31 20:56:40.855109 | orchestrator | Saturday 31 May 2025 20:52:50 +0000 (0:00:01.059) 0:00:28.775 ********** 2025-05-31 20:56:40.855120 | orchestrator | ok: [testbed-node-2] 2025-05-31 20:56:40.855131 | orchestrator | ok: [testbed-node-1] 2025-05-31 20:56:40.855141 | orchestrator | ok: [testbed-node-0] 2025-05-31 20:56:40.855152 | orchestrator | 2025-05-31 20:56:40.855162 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2025-05-31 20:56:40.855173 | orchestrator | Saturday 31 May 2025 20:52:51 +0000 (0:00:00.811) 0:00:29.586 ********** 2025-05-31 20:56:40.855184 | orchestrator | skipping: [testbed-node-0] 2025-05-31 20:56:40.855194 | orchestrator | skipping: [testbed-node-1] 2025-05-31 20:56:40.855205 | orchestrator | skipping: [testbed-node-2] 2025-05-31 20:56:40.855215 | orchestrator | 2025-05-31 20:56:40.855226 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2025-05-31 20:56:40.855237 | orchestrator | Saturday 31 May 2025 20:52:51 +0000 (0:00:00.511) 0:00:30.097 ********** 2025-05-31 20:56:40.855247 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 20:56:40.855258 | orchestrator | 
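The vip.yml tasks that follow stage kube-vip through k3s's auto-deploy mechanism: any manifest placed under /var/lib/rancher/k3s/server/manifests on a server node is applied by k3s itself at startup, which is why the role only creates the directory and copies files on the first master instead of calling kubectl. A rough manual equivalent, assuming the path from the log (the manifest URL is a placeholder for whatever version the role actually pins):

    # stage a manifest for k3s to auto-apply on startup (first master only)
    mkdir -p /var/lib/rancher/k3s/server/manifests
    curl -fsSL -o /var/lib/rancher/k3s/server/manifests/vip-rbac.yaml \
        https://kube-vip.io/manifests/rbac.yaml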
2025-05-31 20:56:40.855269 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2025-05-31 20:56:40.855279 | orchestrator | Saturday 31 May 2025 20:52:52 +0000 (0:00:00.851) 0:00:30.948 ********** 2025-05-31 20:56:40.855290 | orchestrator | ok: [testbed-node-0] 2025-05-31 20:56:40.855301 | orchestrator | ok: [testbed-node-2] 2025-05-31 20:56:40.855311 | orchestrator | ok: [testbed-node-1] 2025-05-31 20:56:40.855322 | orchestrator | 2025-05-31 20:56:40.855332 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2025-05-31 20:56:40.855343 | orchestrator | Saturday 31 May 2025 20:52:55 +0000 (0:00:02.666) 0:00:33.615 ********** 2025-05-31 20:56:40.855353 | orchestrator | skipping: [testbed-node-1] 2025-05-31 20:56:40.855364 | orchestrator | skipping: [testbed-node-2] 2025-05-31 20:56:40.855374 | orchestrator | changed: [testbed-node-0] 2025-05-31 20:56:40.855385 | orchestrator | 2025-05-31 20:56:40.855403 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2025-05-31 20:56:40.855413 | orchestrator | Saturday 31 May 2025 20:52:56 +0000 (0:00:00.887) 0:00:34.502 ********** 2025-05-31 20:56:40.855424 | orchestrator | skipping: [testbed-node-1] 2025-05-31 20:56:40.855440 | orchestrator | skipping: [testbed-node-2] 2025-05-31 20:56:40.855452 | orchestrator | changed: [testbed-node-0] 2025-05-31 20:56:40.855462 | orchestrator | 2025-05-31 20:56:40.855473 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2025-05-31 20:56:40.855484 | orchestrator | Saturday 31 May 2025 20:52:57 +0000 (0:00:01.052) 0:00:35.554 ********** 2025-05-31 20:56:40.855494 | orchestrator | skipping: [testbed-node-1] 2025-05-31 20:56:40.855505 | orchestrator | skipping: [testbed-node-2] 2025-05-31 20:56:40.855515 | orchestrator | changed: [testbed-node-0] 2025-05-31 20:56:40.855526 | orchestrator | 2025-05-31 20:56:40.855536 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2025-05-31 20:56:40.855547 | orchestrator | Saturday 31 May 2025 20:52:59 +0000 (0:00:02.551) 0:00:38.105 ********** 2025-05-31 20:56:40.855558 | orchestrator | skipping: [testbed-node-0] 2025-05-31 20:56:40.855569 | orchestrator | skipping: [testbed-node-1] 2025-05-31 20:56:40.855579 | orchestrator | skipping: [testbed-node-2] 2025-05-31 20:56:40.855590 | orchestrator | 2025-05-31 20:56:40.855600 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2025-05-31 20:56:40.855611 | orchestrator | Saturday 31 May 2025 20:53:00 +0000 (0:00:00.542) 0:00:38.648 ********** 2025-05-31 20:56:40.855622 | orchestrator | skipping: [testbed-node-0] 2025-05-31 20:56:40.855632 | orchestrator | skipping: [testbed-node-1] 2025-05-31 20:56:40.855643 | orchestrator | skipping: [testbed-node-2] 2025-05-31 20:56:40.855653 | orchestrator | 2025-05-31 20:56:40.855664 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2025-05-31 20:56:40.855674 | orchestrator | Saturday 31 May 2025 20:53:00 +0000 (0:00:00.475) 0:00:39.124 ********** 2025-05-31 20:56:40.855685 | orchestrator | changed: [testbed-node-0] 2025-05-31 20:56:40.855695 | orchestrator | changed: [testbed-node-1] 2025-05-31 20:56:40.855706 | orchestrator | changed: [testbed-node-2] 2025-05-31 20:56:40.855717 | orchestrator | 2025-05-31 20:56:40.855727 | orchestrator | TASK [k3s_server : Verify that all nodes 
actually joined (check k3s-init.service if this fails)] *** 2025-05-31 20:56:40.855738 | orchestrator | Saturday 31 May 2025 20:53:03 +0000 (0:00:02.374) 0:00:41.499 ********** 2025-05-31 20:56:40.855773 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-05-31 20:56:40.855784 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-05-31 20:56:40.855795 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2025-05-31 20:56:40.855806 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-05-31 20:56:40.855817 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-05-31 20:56:40.855828 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2025-05-31 20:56:40.855838 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-05-31 20:56:40.855849 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-05-31 20:56:40.855860 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2025-05-31 20:56:40.855870 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-05-31 20:56:40.855888 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-05-31 20:56:40.855899 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2025-05-31 20:56:40.855909 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2025-05-31 20:56:40.855921 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 2025-05-31 20:56:40.855931 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left). 
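The retry loop above simply polls until every master appears in the cluster; the task name already points at the transient k3s-init unit as the place to debug a stalled join. A hand-run equivalent on one of the masters, assuming the three node names from this inventory (the role's actual check may phrase the kubectl call differently):

    # the bootstrap runs in a transient systemd unit, so its output is in the journal
    journalctl -u k3s-init.service --no-pager | tail -n 50
    # the verification amounts to: do all three masters show up as nodes?
    until [ "$(k3s kubectl get nodes -o name | wc -l)" -ge 3 ]; do sleep 5; done
    k3s kubectl get nodes -o name   # expect node/testbed-node-0 .. node/testbed-node-2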
2025-05-31 20:56:40.855942 | orchestrator | ok: [testbed-node-0] 2025-05-31 20:56:40.855953 | orchestrator | ok: [testbed-node-1] 2025-05-31 20:56:40.855963 | orchestrator | ok: [testbed-node-2] 2025-05-31 20:56:40.855974 | orchestrator | 2025-05-31 20:56:40.855985 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2025-05-31 20:56:40.855996 | orchestrator | Saturday 31 May 2025 20:53:59 +0000 (0:00:56.222) 0:01:37.722 ********** 2025-05-31 20:56:40.856007 | orchestrator | skipping: [testbed-node-0] 2025-05-31 20:56:40.856017 | orchestrator | skipping: [testbed-node-1] 2025-05-31 20:56:40.856028 | orchestrator | skipping: [testbed-node-2] 2025-05-31 20:56:40.856039 | orchestrator | 2025-05-31 20:56:40.856049 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2025-05-31 20:56:40.856060 | orchestrator | Saturday 31 May 2025 20:53:59 +0000 (0:00:00.368) 0:01:38.090 ********** 2025-05-31 20:56:40.856071 | orchestrator | changed: [testbed-node-0] 2025-05-31 20:56:40.856086 | orchestrator | changed: [testbed-node-1] 2025-05-31 20:56:40.856098 | orchestrator | changed: [testbed-node-2] 2025-05-31 20:56:40.856108 | orchestrator | 2025-05-31 20:56:40.856119 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2025-05-31 20:56:40.856130 | orchestrator | Saturday 31 May 2025 20:54:00 +0000 (0:00:01.058) 0:01:39.148 ********** 2025-05-31 20:56:40.856141 | orchestrator | changed: [testbed-node-0] 2025-05-31 20:56:40.856151 | orchestrator | changed: [testbed-node-1] 2025-05-31 20:56:40.856162 | orchestrator | changed: [testbed-node-2] 2025-05-31 20:56:40.856172 | orchestrator | 2025-05-31 20:56:40.856183 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2025-05-31 20:56:40.856193 | orchestrator | Saturday 31 May 2025 20:54:02 +0000 (0:00:01.664) 0:01:40.813 ********** 2025-05-31 20:56:40.856204 | orchestrator | changed: [testbed-node-2] 2025-05-31 20:56:40.856215 | orchestrator | changed: [testbed-node-0] 2025-05-31 20:56:40.856225 | orchestrator | changed: [testbed-node-1] 2025-05-31 20:56:40.856236 | orchestrator | 2025-05-31 20:56:40.856246 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2025-05-31 20:56:40.856257 | orchestrator | Saturday 31 May 2025 20:54:18 +0000 (0:00:15.872) 0:01:56.686 ********** 2025-05-31 20:56:40.856268 | orchestrator | ok: [testbed-node-1] 2025-05-31 20:56:40.856278 | orchestrator | ok: [testbed-node-0] 2025-05-31 20:56:40.856289 | orchestrator | ok: [testbed-node-2] 2025-05-31 20:56:40.856299 | orchestrator | 2025-05-31 20:56:40.856310 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2025-05-31 20:56:40.856321 | orchestrator | Saturday 31 May 2025 20:54:18 +0000 (0:00:00.616) 0:01:57.303 ********** 2025-05-31 20:56:40.856331 | orchestrator | ok: [testbed-node-0] 2025-05-31 20:56:40.856342 | orchestrator | ok: [testbed-node-1] 2025-05-31 20:56:40.856352 | orchestrator | ok: [testbed-node-2] 2025-05-31 20:56:40.856363 | orchestrator | 2025-05-31 20:56:40.856374 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2025-05-31 20:56:40.856384 | orchestrator | Saturday 31 May 2025 20:54:19 +0000 (0:00:00.620) 0:01:57.923 ********** 2025-05-31 20:56:40.856395 | orchestrator | changed: [testbed-node-0] 2025-05-31 20:56:40.856412 | orchestrator | changed: 
[testbed-node-1]
2025-05-31 20:56:40.856423 | orchestrator | changed: [testbed-node-2]
2025-05-31 20:56:40.856434 | orchestrator |
2025-05-31 20:56:40.856451 | orchestrator | TASK [k3s_server : Read node-token from master] ********************************
2025-05-31 20:56:40.856462 | orchestrator | Saturday 31 May 2025 20:54:20 +0000 (0:00:00.609) 0:01:58.533 **********
2025-05-31 20:56:40.856472 | orchestrator | ok: [testbed-node-1]
2025-05-31 20:56:40.856484 | orchestrator | ok: [testbed-node-0]
2025-05-31 20:56:40.856494 | orchestrator | ok: [testbed-node-2]
2025-05-31 20:56:40.856505 | orchestrator |
2025-05-31 20:56:40.856516 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2025-05-31 20:56:40.856526 | orchestrator | Saturday 31 May 2025 20:54:21 +0000 (0:00:00.908) 0:01:59.442 **********
2025-05-31 20:56:40.856537 | orchestrator | ok: [testbed-node-0]
2025-05-31 20:56:40.856548 | orchestrator | ok: [testbed-node-1]
2025-05-31 20:56:40.856559 | orchestrator | ok: [testbed-node-2]
2025-05-31 20:56:40.856569 | orchestrator |
2025-05-31 20:56:40.856580 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2025-05-31 20:56:40.856591 | orchestrator | Saturday 31 May 2025 20:54:21 +0000 (0:00:00.276) 0:01:59.719 **********
2025-05-31 20:56:40.856602 | orchestrator | changed: [testbed-node-0]
2025-05-31 20:56:40.856613 | orchestrator | changed: [testbed-node-1]
2025-05-31 20:56:40.856624 | orchestrator | changed: [testbed-node-2]
2025-05-31 20:56:40.856634 | orchestrator |
2025-05-31 20:56:40.856646 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2025-05-31 20:56:40.856657 | orchestrator | Saturday 31 May 2025 20:54:21 +0000 (0:00:00.588) 0:02:00.307 **********
2025-05-31 20:56:40.856668 | orchestrator | changed: [testbed-node-0]
2025-05-31 20:56:40.856679 | orchestrator | changed: [testbed-node-1]
2025-05-31 20:56:40.856689 | orchestrator | changed: [testbed-node-2]
2025-05-31 20:56:40.856700 | orchestrator |
2025-05-31 20:56:40.856710 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2025-05-31 20:56:40.856721 | orchestrator | Saturday 31 May 2025 20:54:22 +0000 (0:00:00.611) 0:02:00.919 **********
2025-05-31 20:56:40.856732 | orchestrator | changed: [testbed-node-0]
2025-05-31 20:56:40.856757 | orchestrator | changed: [testbed-node-1]
2025-05-31 20:56:40.856768 | orchestrator | changed: [testbed-node-2]
2025-05-31 20:56:40.856778 | orchestrator |
2025-05-31 20:56:40.856789 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2025-05-31 20:56:40.856800 | orchestrator | Saturday 31 May 2025 20:54:23 +0000 (0:00:01.111) 0:02:02.030 **********
2025-05-31 20:56:40.856810 | orchestrator | changed: [testbed-node-0]
2025-05-31 20:56:40.856821 | orchestrator | changed: [testbed-node-1]
2025-05-31 20:56:40.856832 | orchestrator | changed: [testbed-node-2]
2025-05-31 20:56:40.856843 | orchestrator |
2025-05-31 20:56:40.856854 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2025-05-31 20:56:40.856864 | orchestrator | Saturday 31 May 2025 20:54:24 +0000 (0:00:00.788) 0:02:02.819 **********
2025-05-31 20:56:40.856875 | orchestrator | skipping: [testbed-node-0]
2025-05-31 20:56:40.856886 | orchestrator | skipping: [testbed-node-1]
2025-05-31 20:56:40.856896 | orchestrator | skipping: [testbed-node-2]
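The node-token sequence above is the join-credential handling: the file mode of the token is recorded, temporarily loosened so the token can be read into a fact for the other nodes, then restored. The kubeconfig tasks afterwards point kubectl at the VIP named in the task title. A sketch of the same steps done by hand, assuming the default k3s paths (192.168.16.8 is the VIP from the log):

    # the credential servers and agents use to join this cluster
    cat /var/lib/rancher/k3s/server/node-token
    # give the login user a kubeconfig and aim it at the VIP instead of localhost
    mkdir -p ~/.kube && cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
    kubectl config set-cluster default --server=https://192.168.16.8:6443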
2025-05-31 20:56:40.856907 | orchestrator |
2025-05-31 20:56:40.856918 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2025-05-31 20:56:40.856928 | orchestrator | Saturday 31 May 2025 20:54:24 +0000 (0:00:00.340) 0:02:03.160 **********
2025-05-31 20:56:40.856939 | orchestrator | skipping: [testbed-node-0]
2025-05-31 20:56:40.856950 | orchestrator | skipping: [testbed-node-1]
2025-05-31 20:56:40.856960 | orchestrator | skipping: [testbed-node-2]
2025-05-31 20:56:40.856971 | orchestrator |
2025-05-31 20:56:40.856982 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2025-05-31 20:56:40.856992 | orchestrator | Saturday 31 May 2025 20:54:25 +0000 (0:00:00.280) 0:02:03.440 **********
2025-05-31 20:56:40.857003 | orchestrator | ok: [testbed-node-1]
2025-05-31 20:56:40.857014 | orchestrator | ok: [testbed-node-0]
2025-05-31 20:56:40.857031 | orchestrator | ok: [testbed-node-2]
2025-05-31 20:56:40.857042 | orchestrator |
2025-05-31 20:56:40.857053 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2025-05-31 20:56:40.857063 | orchestrator | Saturday 31 May 2025 20:54:25 +0000 (0:00:00.929) 0:02:04.370 **********
2025-05-31 20:56:40.857074 | orchestrator | ok: [testbed-node-0]
2025-05-31 20:56:40.857084 | orchestrator | ok: [testbed-node-1]
2025-05-31 20:56:40.857095 | orchestrator | ok: [testbed-node-2]
2025-05-31 20:56:40.857105 | orchestrator |
2025-05-31 20:56:40.857122 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2025-05-31 20:56:40.857133 | orchestrator | Saturday 31 May 2025 20:54:26 +0000 (0:00:00.585) 0:02:04.956 **********
2025-05-31 20:56:40.857144 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2025-05-31 20:56:40.857155 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2025-05-31 20:56:40.857165 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2025-05-31 20:56:40.857176 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2025-05-31 20:56:40.857187 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2025-05-31 20:56:40.857197 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2025-05-31 20:56:40.857208 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2025-05-31 20:56:40.857219 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2025-05-31 20:56:40.857229 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2025-05-31 20:56:40.857240 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2025-05-31 20:56:40.857251 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2025-05-31 20:56:40.857262 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2025-05-31 20:56:40.857279 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2025-05-31 20:56:40.857290 | orchestrator
| changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2025-05-31 20:56:40.857300 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-05-31 20:56:40.857312 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-05-31 20:56:40.857323 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2025-05-31 20:56:40.857333 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-05-31 20:56:40.857344 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2025-05-31 20:56:40.857355 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-05-31 20:56:40.857366 | orchestrator | 2025-05-31 20:56:40.857376 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2025-05-31 20:56:40.857387 | orchestrator | 2025-05-31 20:56:40.857398 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2025-05-31 20:56:40.857408 | orchestrator | Saturday 31 May 2025 20:54:29 +0000 (0:00:03.121) 0:02:08.077 ********** 2025-05-31 20:56:40.857419 | orchestrator | ok: [testbed-node-3] 2025-05-31 20:56:40.857430 | orchestrator | ok: [testbed-node-4] 2025-05-31 20:56:40.857440 | orchestrator | ok: [testbed-node-5] 2025-05-31 20:56:40.857451 | orchestrator | 2025-05-31 20:56:40.857462 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2025-05-31 20:56:40.857478 | orchestrator | Saturday 31 May 2025 20:54:30 +0000 (0:00:00.555) 0:02:08.633 ********** 2025-05-31 20:56:40.857489 | orchestrator | ok: [testbed-node-3] 2025-05-31 20:56:40.857499 | orchestrator | ok: [testbed-node-4] 2025-05-31 20:56:40.857510 | orchestrator | ok: [testbed-node-5] 2025-05-31 20:56:40.857520 | orchestrator | 2025-05-31 20:56:40.857531 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2025-05-31 20:56:40.857542 | orchestrator | Saturday 31 May 2025 20:54:30 +0000 (0:00:00.642) 0:02:09.276 ********** 2025-05-31 20:56:40.857552 | orchestrator | ok: [testbed-node-3] 2025-05-31 20:56:40.857563 | orchestrator | ok: [testbed-node-4] 2025-05-31 20:56:40.857573 | orchestrator | ok: [testbed-node-5] 2025-05-31 20:56:40.857584 | orchestrator | 2025-05-31 20:56:40.857594 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2025-05-31 20:56:40.857605 | orchestrator | Saturday 31 May 2025 20:54:31 +0000 (0:00:00.333) 0:02:09.609 ********** 2025-05-31 20:56:40.857616 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-31 20:56:40.857627 | orchestrator | 2025-05-31 20:56:40.857640 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2025-05-31 20:56:40.857659 | orchestrator | Saturday 31 May 2025 20:54:31 +0000 (0:00:00.636) 0:02:10.246 ********** 2025-05-31 20:56:40.857676 | orchestrator | skipping: [testbed-node-3] 2025-05-31 20:56:40.857696 | orchestrator | skipping: [testbed-node-4] 2025-05-31 20:56:40.857714 | orchestrator | skipping: [testbed-node-5] 2025-05-31 20:56:40.857734 | orchestrator | 2025-05-31 20:56:40.857785 | orchestrator | TASK [k3s_agent : Copy K3s 
http_proxy conf file] ******************************* 2025-05-31 20:56:40.857797 | orchestrator | Saturday 31 May 2025 20:54:32 +0000 (0:00:00.295) 0:02:10.541 ********** 2025-05-31 20:56:40.857808 | orchestrator | skipping: [testbed-node-3] 2025-05-31 20:56:40.857818 | orchestrator | skipping: [testbed-node-4] 2025-05-31 20:56:40.857829 | orchestrator | skipping: [testbed-node-5] 2025-05-31 20:56:40.857840 | orchestrator | 2025-05-31 20:56:40.857850 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2025-05-31 20:56:40.857861 | orchestrator | Saturday 31 May 2025 20:54:32 +0000 (0:00:00.318) 0:02:10.860 ********** 2025-05-31 20:56:40.857883 | orchestrator | skipping: [testbed-node-3] 2025-05-31 20:56:40.857894 | orchestrator | skipping: [testbed-node-4] 2025-05-31 20:56:40.857905 | orchestrator | skipping: [testbed-node-5] 2025-05-31 20:56:40.857916 | orchestrator | 2025-05-31 20:56:40.857927 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2025-05-31 20:56:40.857937 | orchestrator | Saturday 31 May 2025 20:54:32 +0000 (0:00:00.333) 0:02:11.194 ********** 2025-05-31 20:56:40.857948 | orchestrator | changed: [testbed-node-3] 2025-05-31 20:56:40.857959 | orchestrator | changed: [testbed-node-4] 2025-05-31 20:56:40.857969 | orchestrator | changed: [testbed-node-5] 2025-05-31 20:56:40.857980 | orchestrator | 2025-05-31 20:56:40.857990 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2025-05-31 20:56:40.858001 | orchestrator | Saturday 31 May 2025 20:54:34 +0000 (0:00:01.623) 0:02:12.817 ********** 2025-05-31 20:56:40.858012 | orchestrator | changed: [testbed-node-3] 2025-05-31 20:56:40.858052 | orchestrator | changed: [testbed-node-5] 2025-05-31 20:56:40.858063 | orchestrator | changed: [testbed-node-4] 2025-05-31 20:56:40.858074 | orchestrator | 2025-05-31 20:56:40.858084 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-05-31 20:56:40.858095 | orchestrator | 2025-05-31 20:56:40.858106 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-05-31 20:56:40.858116 | orchestrator | Saturday 31 May 2025 20:54:43 +0000 (0:00:09.457) 0:02:22.275 ********** 2025-05-31 20:56:40.858127 | orchestrator | ok: [testbed-manager] 2025-05-31 20:56:40.858137 | orchestrator | 2025-05-31 20:56:40.858148 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-05-31 20:56:40.858158 | orchestrator | Saturday 31 May 2025 20:54:44 +0000 (0:00:00.689) 0:02:22.964 ********** 2025-05-31 20:56:40.858169 | orchestrator | changed: [testbed-manager] 2025-05-31 20:56:40.858187 | orchestrator | 2025-05-31 20:56:40.858198 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-05-31 20:56:40.858208 | orchestrator | Saturday 31 May 2025 20:54:44 +0000 (0:00:00.378) 0:02:23.342 ********** 2025-05-31 20:56:40.858219 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-05-31 20:56:40.858230 | orchestrator | 2025-05-31 20:56:40.858249 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-05-31 20:56:40.858260 | orchestrator | Saturday 31 May 2025 20:54:45 +0000 (0:00:00.943) 0:02:24.286 ********** 2025-05-31 20:56:40.858271 | orchestrator | changed: [testbed-manager] 2025-05-31 20:56:40.858281 | orchestrator | 2025-05-31 
20:56:40.858292 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-05-31 20:56:40.858302 | orchestrator | Saturday 31 May 2025 20:54:46 +0000 (0:00:00.842) 0:02:25.129 ********** 2025-05-31 20:56:40.858313 | orchestrator | changed: [testbed-manager] 2025-05-31 20:56:40.858324 | orchestrator | 2025-05-31 20:56:40.858335 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-05-31 20:56:40.858345 | orchestrator | Saturday 31 May 2025 20:54:47 +0000 (0:00:00.620) 0:02:25.749 ********** 2025-05-31 20:56:40.858356 | orchestrator | changed: [testbed-manager -> localhost] 2025-05-31 20:56:40.858367 | orchestrator | 2025-05-31 20:56:40.858378 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-05-31 20:56:40.858388 | orchestrator | Saturday 31 May 2025 20:54:48 +0000 (0:00:01.485) 0:02:27.235 ********** 2025-05-31 20:56:40.858399 | orchestrator | changed: [testbed-manager -> localhost] 2025-05-31 20:56:40.858409 | orchestrator | 2025-05-31 20:56:40.858420 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-05-31 20:56:40.858430 | orchestrator | Saturday 31 May 2025 20:54:49 +0000 (0:00:00.812) 0:02:28.047 ********** 2025-05-31 20:56:40.858441 | orchestrator | changed: [testbed-manager] 2025-05-31 20:56:40.858452 | orchestrator | 2025-05-31 20:56:40.858463 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-05-31 20:56:40.858473 | orchestrator | Saturday 31 May 2025 20:54:50 +0000 (0:00:00.390) 0:02:28.438 ********** 2025-05-31 20:56:40.858483 | orchestrator | changed: [testbed-manager] 2025-05-31 20:56:40.858494 | orchestrator | 2025-05-31 20:56:40.858504 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2025-05-31 20:56:40.858515 | orchestrator | 2025-05-31 20:56:40.858526 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2025-05-31 20:56:40.858536 | orchestrator | Saturday 31 May 2025 20:54:50 +0000 (0:00:00.441) 0:02:28.880 ********** 2025-05-31 20:56:40.858547 | orchestrator | ok: [testbed-manager] 2025-05-31 20:56:40.858557 | orchestrator | 2025-05-31 20:56:40.858568 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2025-05-31 20:56:40.858579 | orchestrator | Saturday 31 May 2025 20:54:50 +0000 (0:00:00.198) 0:02:29.079 ********** 2025-05-31 20:56:40.858589 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2025-05-31 20:56:40.858600 | orchestrator | 2025-05-31 20:56:40.858610 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ****************** 2025-05-31 20:56:40.858621 | orchestrator | Saturday 31 May 2025 20:54:50 +0000 (0:00:00.248) 0:02:29.327 ********** 2025-05-31 20:56:40.858631 | orchestrator | ok: [testbed-manager] 2025-05-31 20:56:40.858642 | orchestrator | 2025-05-31 20:56:40.858653 | orchestrator | TASK [kubectl : Install apt-transport-https package] *************************** 2025-05-31 20:56:40.858663 | orchestrator | Saturday 31 May 2025 20:54:52 +0000 (0:00:01.615) 0:02:30.943 ********** 2025-05-31 20:56:40.858674 | orchestrator | ok: [testbed-manager] 2025-05-31 20:56:40.858684 | orchestrator | 2025-05-31 20:56:40.858695 | orchestrator | TASK [kubectl : Add repository gpg key] 
**************************************** 2025-05-31 20:56:40.858706 | orchestrator | Saturday 31 May 2025 20:54:54 +0000 (0:00:01.655) 0:02:32.598 ********** 2025-05-31 20:56:40.858716 | orchestrator | changed: [testbed-manager] 2025-05-31 20:56:40.858734 | orchestrator | 2025-05-31 20:56:40.858796 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2025-05-31 20:56:40.858809 | orchestrator | Saturday 31 May 2025 20:54:54 +0000 (0:00:00.761) 0:02:33.360 ********** 2025-05-31 20:56:40.858821 | orchestrator | ok: [testbed-manager] 2025-05-31 20:56:40.858832 | orchestrator | 2025-05-31 20:56:40.858843 | orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 2025-05-31 20:56:40.858853 | orchestrator | Saturday 31 May 2025 20:54:55 +0000 (0:00:00.431) 0:02:33.791 ********** 2025-05-31 20:56:40.858864 | orchestrator | changed: [testbed-manager] 2025-05-31 20:56:40.858875 | orchestrator | 2025-05-31 20:56:40.858886 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2025-05-31 20:56:40.858897 | orchestrator | Saturday 31 May 2025 20:55:00 +0000 (0:00:05.115) 0:02:38.907 ********** 2025-05-31 20:56:40.858907 | orchestrator | changed: [testbed-manager] 2025-05-31 20:56:40.858918 | orchestrator | 2025-05-31 20:56:40.858929 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2025-05-31 20:56:40.858939 | orchestrator | Saturday 31 May 2025 20:55:10 +0000 (0:00:09.793) 0:02:48.701 ********** 2025-05-31 20:56:40.858950 | orchestrator | ok: [testbed-manager] 2025-05-31 20:56:40.858961 | orchestrator | 2025-05-31 20:56:40.858971 | orchestrator | PLAY [Run post actions on master nodes] **************************************** 2025-05-31 20:56:40.858982 | orchestrator | 2025-05-31 20:56:40.858994 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2025-05-31 20:56:40.859014 | orchestrator | Saturday 31 May 2025 20:55:10 +0000 (0:00:00.443) 0:02:49.144 ********** 2025-05-31 20:56:40.859032 | orchestrator | ok: [testbed-node-0] 2025-05-31 20:56:40.859051 | orchestrator | ok: [testbed-node-1] 2025-05-31 20:56:40.859070 | orchestrator | ok: [testbed-node-2] 2025-05-31 20:56:40.859089 | orchestrator | 2025-05-31 20:56:40.859108 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2025-05-31 20:56:40.859127 | orchestrator | Saturday 31 May 2025 20:55:11 +0000 (0:00:00.513) 0:02:49.658 ********** 2025-05-31 20:56:40.859146 | orchestrator | skipping: [testbed-node-0] 2025-05-31 20:56:40.859166 | orchestrator | skipping: [testbed-node-1] 2025-05-31 20:56:40.859177 | orchestrator | skipping: [testbed-node-2] 2025-05-31 20:56:40.859187 | orchestrator | 2025-05-31 20:56:40.859198 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2025-05-31 20:56:40.859209 | orchestrator | Saturday 31 May 2025 20:55:11 +0000 (0:00:00.355) 0:02:50.013 ********** 2025-05-31 20:56:40.859856 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 20:56:40.859880 | orchestrator | 2025-05-31 20:56:40.859891 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2025-05-31 20:56:40.859913 | orchestrator | Saturday 31 May 2025 20:55:12 +0000 (0:00:00.618) 0:02:50.632 ********** 2025-05-31 
20:56:40.859923 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-05-31 20:56:40.859933 | orchestrator | 2025-05-31 20:56:40.859943 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2025-05-31 20:56:40.859952 | orchestrator | Saturday 31 May 2025 20:55:13 +0000 (0:00:00.924) 0:02:51.557 ********** 2025-05-31 20:56:40.859962 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-31 20:56:40.859972 | orchestrator | 2025-05-31 20:56:40.859981 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2025-05-31 20:56:40.859991 | orchestrator | Saturday 31 May 2025 20:55:13 +0000 (0:00:00.752) 0:02:52.309 ********** 2025-05-31 20:56:40.860000 | orchestrator | skipping: [testbed-node-0] 2025-05-31 20:56:40.860010 | orchestrator | 2025-05-31 20:56:40.860019 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2025-05-31 20:56:40.860029 | orchestrator | Saturday 31 May 2025 20:55:14 +0000 (0:00:00.519) 0:02:52.829 ********** 2025-05-31 20:56:40.860039 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-31 20:56:40.860048 | orchestrator | 2025-05-31 20:56:40.860057 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2025-05-31 20:56:40.860077 | orchestrator | Saturday 31 May 2025 20:55:15 +0000 (0:00:00.985) 0:02:53.814 ********** 2025-05-31 20:56:40.860087 | orchestrator | skipping: [testbed-node-0] 2025-05-31 20:56:40.860096 | orchestrator | 2025-05-31 20:56:40.860106 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2025-05-31 20:56:40.860115 | orchestrator | Saturday 31 May 2025 20:55:15 +0000 (0:00:00.209) 0:02:54.024 ********** 2025-05-31 20:56:40.860125 | orchestrator | skipping: [testbed-node-0] 2025-05-31 20:56:40.860134 | orchestrator | 2025-05-31 20:56:40.860144 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2025-05-31 20:56:40.860153 | orchestrator | Saturday 31 May 2025 20:55:15 +0000 (0:00:00.241) 0:02:54.266 ********** 2025-05-31 20:56:40.860163 | orchestrator | skipping: [testbed-node-0] 2025-05-31 20:56:40.860172 | orchestrator | 2025-05-31 20:56:40.860182 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2025-05-31 20:56:40.860191 | orchestrator | Saturday 31 May 2025 20:55:16 +0000 (0:00:00.283) 0:02:54.549 ********** 2025-05-31 20:56:40.860201 | orchestrator | skipping: [testbed-node-0] 2025-05-31 20:56:40.860210 | orchestrator | 2025-05-31 20:56:40.860220 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2025-05-31 20:56:40.860229 | orchestrator | Saturday 31 May 2025 20:55:16 +0000 (0:00:00.205) 0:02:54.754 ********** 2025-05-31 20:56:40.860239 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-05-31 20:56:40.860248 | orchestrator | 2025-05-31 20:56:40.860258 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2025-05-31 20:56:40.860267 | orchestrator | Saturday 31 May 2025 20:55:20 +0000 (0:00:04.398) 0:02:59.153 ********** 2025-05-31 20:56:40.860277 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2025-05-31 20:56:40.860287 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left). 
2025-05-31 20:56:40.860302 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium)
2025-05-31 20:56:40.860312 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay)
2025-05-31 20:56:40.860321 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui)
2025-05-31 20:56:40.860331 | orchestrator |
2025-05-31 20:56:40.860340 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
2025-05-31 20:56:40.860350 | orchestrator | Saturday 31 May 2025 20:56:13 +0000 (0:00:52.822) 0:03:51.976 **********
2025-05-31 20:56:40.860359 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-05-31 20:56:40.860369 | orchestrator |
2025-05-31 20:56:40.860378 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ********************
2025-05-31 20:56:40.860388 | orchestrator | Saturday 31 May 2025 20:56:14 +0000 (0:00:01.260) 0:03:53.236 **********
2025-05-31 20:56:40.860397 | orchestrator | changed: [testbed-node-0 -> localhost]
2025-05-31 20:56:40.860407 | orchestrator |
2025-05-31 20:56:40.860416 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] ***********************************
2025-05-31 20:56:40.860426 | orchestrator | Saturday 31 May 2025 20:56:16 +0000 (0:00:01.332) 0:03:54.568 **********
2025-05-31 20:56:40.860435 | orchestrator | changed: [testbed-node-0 -> localhost]
2025-05-31 20:56:40.860445 | orchestrator |
2025-05-31 20:56:40.860455 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
2025-05-31 20:56:40.860464 | orchestrator | Saturday 31 May 2025 20:56:17 +0000 (0:00:01.064) 0:03:55.633 **********
2025-05-31 20:56:40.860474 | orchestrator | skipping: [testbed-node-0]
2025-05-31 20:56:40.860483 | orchestrator |
2025-05-31 20:56:40.860493 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] *************************
2025-05-31 20:56:40.860503 | orchestrator | Saturday 31 May 2025 20:56:17 +0000 (0:00:00.186) 0:03:55.819 **********
2025-05-31 20:56:40.860512 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
2025-05-31 20:56:40.860522 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)
2025-05-31 20:56:40.860538 | orchestrator |
2025-05-31 20:56:40.860547 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] ***********************************
2025-05-31 20:56:40.860557 | orchestrator | Saturday 31 May 2025 20:56:19 +0000 (0:00:02.406) 0:03:58.226 **********
2025-05-31 20:56:40.860567 | orchestrator | skipping: [testbed-node-0]
2025-05-31 20:56:40.860576 | orchestrator | skipping: [testbed-node-1]
2025-05-31 20:56:40.860586 | orchestrator | skipping: [testbed-node-2]
2025-05-31 20:56:40.860681 | orchestrator |
2025-05-31 20:56:40.860694 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
2025-05-31 20:56:40.860704 | orchestrator | Saturday 31 May 2025 20:56:20 +0000 (0:00:00.266) 0:03:58.492 **********
2025-05-31 20:56:40.860713 | orchestrator | ok: [testbed-node-0]
2025-05-31 20:56:40.860723 | orchestrator | ok: [testbed-node-1]
2025-05-31 20:56:40.860732 | orchestrator | ok: [testbed-node-2]
2025-05-31 20:56:40.860762 | orchestrator |
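The 52.8 s "Wait for Cilium resources" step above, including its single FAILED - RETRYING round, is a retry-until-rollout-complete loop over the Cilium workloads. A sketch of the general shape; the kubeconfig path, namespace, timeout, and delay are assumptions, only the resource list and the 30-retry budget are visible in the log:

- name: Wait for Cilium resources
  ansible.builtin.command:
    cmd: "kubectl -n kube-system rollout status --timeout=10s {{ item }}"
  environment:
    KUBECONFIG: /tmp/k3s-kubeconfig   # hypothetical path for the sketch
  loop:
    - deployment/cilium-operator
    - daemonset/cilium
    - deployment/hubble-relay
    - deployment/hubble-ui
  register: rollout
  retries: 30                         # matches the "(30 retries left)" counter in the log
  delay: 10
  until: rollout.rc == 0
  changed_when: false
  delegate_to: localhost
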
2025-05-31 20:56:40.860780 | orchestrator | PLAY [Apply role k9s] **********************************************************
2025-05-31 20:56:40.860790 | orchestrator |
2025-05-31 20:56:40.860800 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************
2025-05-31 20:56:40.860810 | orchestrator | Saturday 31 May 2025 20:56:20 +0000 (0:00:00.759) 0:03:59.252 **********
2025-05-31 20:56:40.860819 | orchestrator | ok: [testbed-manager]
2025-05-31 20:56:40.860829 | orchestrator |
2025-05-31 20:56:40.860838 | orchestrator | TASK [k9s : Include distribution specific install tasks] ***********************
2025-05-31 20:56:40.860848 | orchestrator | Saturday 31 May 2025 20:56:20 +0000 (0:00:00.128) 0:03:59.380 **********
2025-05-31 20:56:40.860858 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager
2025-05-31 20:56:40.860868 | orchestrator |
2025-05-31 20:56:40.860877 | orchestrator | TASK [k9s : Install k9s packages] **********************************************
2025-05-31 20:56:40.860887 | orchestrator | Saturday 31 May 2025 20:56:21 +0000 (0:00:00.301) 0:03:59.682 **********
2025-05-31 20:56:40.860896 | orchestrator | changed: [testbed-manager]
2025-05-31 20:56:40.860906 | orchestrator |
2025-05-31 20:56:40.860915 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************
2025-05-31 20:56:40.860925 | orchestrator |
2025-05-31 20:56:40.860935 | orchestrator | TASK [Merge labels, annotations, and taints] ***********************************
2025-05-31 20:56:40.860944 | orchestrator | Saturday 31 May 2025 20:56:26 +0000 (0:00:05.034) 0:04:04.716 **********
2025-05-31 20:56:40.860954 | orchestrator | ok: [testbed-node-3]
2025-05-31 20:56:40.860963 | orchestrator | ok: [testbed-node-4]
2025-05-31 20:56:40.860973 | orchestrator | ok: [testbed-node-5]
2025-05-31 20:56:40.860982 | orchestrator | ok: [testbed-node-0]
2025-05-31 20:56:40.860992 | orchestrator | ok: [testbed-node-1]
2025-05-31 20:56:40.861001 | orchestrator | ok: [testbed-node-2]
2025-05-31 20:56:40.861011 | orchestrator |
2025-05-31 20:56:40.861020 | orchestrator | TASK [Manage labels] ***********************************************************
2025-05-31 20:56:40.861030 | orchestrator | Saturday 31 May 2025 20:56:26 +0000 (0:00:00.538) 0:04:05.254 **********
2025-05-31 20:56:40.861039 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2025-05-31 20:56:40.861049 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2025-05-31 20:56:40.861058 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2025-05-31 20:56:40.861068 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
2025-05-31 20:56:40.861077 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2025-05-31 20:56:40.861086 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
2025-05-31 20:56:40.861096 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2025-05-31 20:56:40.861105 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled)
2025-05-31 20:56:40.861115 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2025-05-31 20:56:40.861138 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
2025-05-31 20:56:40.861148 | orchestrator | ok: [testbed-node-2 -> localhost] =>
(item=openstack-control-plane=enabled)
2025-05-31 20:56:40.861157 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2025-05-31 20:56:40.861167 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled)
2025-05-31 20:56:40.861177 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2025-05-31 20:56:40.861186 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2025-05-31 20:56:40.861195 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
2025-05-31 20:56:40.861205 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2025-05-31 20:56:40.861214 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true)
2025-05-31 20:56:40.861223 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2025-05-31 20:56:40.861233 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2025-05-31 20:56:40.861242 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
2025-05-31 20:56:40.861252 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2025-05-31 20:56:40.861261 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2025-05-31 20:56:40.861270 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2025-05-31 20:56:40.861280 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
2025-05-31 20:56:40.861289 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2025-05-31 20:56:40.861299 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2025-05-31 20:56:40.861308 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
2025-05-31 20:56:40.861317 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2025-05-31 20:56:40.861327 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
2025-05-31 20:56:40.861336 | orchestrator |
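The "Manage labels" task above fans out one kubectl call per label from the merged per-node map, delegated to localhost where the kubeconfig lives (hence the "-> localhost" in each result). A rough equivalent, with the node_labels variable name invented for the sketch:

- name: Manage labels
  ansible.builtin.command:
    cmd: "kubectl label node {{ inventory_hostname }} {{ item }} --overwrite"
  loop: "{{ node_labels }}"   # e.g. ["node-role.osism.tech/control-plane=true", "openstack-control-plane=enabled"]
  delegate_to: localhost
  register: label_result
  changed_when: false         # change reporting simplified for the sketch
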
2025-05-31 20:56:40.861351 | orchestrator | TASK [Manage annotations] ******************************************************
2025-05-31 20:56:40.861361 | orchestrator | Saturday 31 May 2025 20:56:37 +0000 (0:00:10.739) 0:04:15.994 **********
2025-05-31 20:56:40.861371 | orchestrator | skipping: [testbed-node-3]
2025-05-31 20:56:40.861380 | orchestrator | skipping: [testbed-node-4]
2025-05-31 20:56:40.861390 | orchestrator | skipping: [testbed-node-5]
2025-05-31 20:56:40.861400 | orchestrator | skipping: [testbed-node-0]
2025-05-31 20:56:40.861409 | orchestrator | skipping: [testbed-node-1]
2025-05-31 20:56:40.861419 | orchestrator | skipping: [testbed-node-2]
2025-05-31 20:56:40.861428 | orchestrator |
2025-05-31 20:56:40.861437 | orchestrator | TASK [Manage taints] ***********************************************************
2025-05-31 20:56:40.861447 | orchestrator | Saturday 31 May 2025 20:56:37 +0000 (0:00:00.377) 0:04:16.371 **********
2025-05-31 20:56:40.861457 | orchestrator | skipping: [testbed-node-3]
2025-05-31 20:56:40.861466 | orchestrator | skipping: [testbed-node-4]
2025-05-31 20:56:40.861476 | orchestrator | skipping: [testbed-node-5]
2025-05-31 20:56:40.861485 | orchestrator | skipping: [testbed-node-0]
2025-05-31 20:56:40.861495 | orchestrator | skipping: [testbed-node-1]
2025-05-31 20:56:40.861504 | orchestrator | skipping: [testbed-node-2]
2025-05-31 20:56:40.861514 | orchestrator |
2025-05-31 20:56:40.861523 | orchestrator | PLAY RECAP *********************************************************************
2025-05-31 20:56:40.861533 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-31 20:56:40.861551 | orchestrator | testbed-node-0 : ok=46  changed=21  unreachable=0 failed=0 skipped=27  rescued=0 ignored=0
2025-05-31 20:56:40.861561 | orchestrator | testbed-node-1 : ok=34  changed=14  unreachable=0 failed=0 skipped=24  rescued=0 ignored=0
2025-05-31 20:56:40.861571 | orchestrator | testbed-node-2 : ok=34  changed=14  unreachable=0 failed=0 skipped=24  rescued=0 ignored=0
2025-05-31 20:56:40.861581 | orchestrator | testbed-node-3 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0
2025-05-31 20:56:40.861591 | orchestrator | testbed-node-4 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0
2025-05-31 20:56:40.861600 | orchestrator | testbed-node-5 : ok=14  changed=6  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0
2025-05-31 20:56:40.861610 | orchestrator |
2025-05-31 20:56:40.861620 | orchestrator |
2025-05-31 20:56:40.861629 | orchestrator | TASKS RECAP ********************************************************************
2025-05-31 20:56:40.861639 | orchestrator | Saturday 31 May 2025 20:56:38 +0000 (0:00:00.439) 0:04:16.810 **********
2025-05-31 20:56:40.861653 | orchestrator | ===============================================================================
2025-05-31 20:56:40.861662 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 56.22s
2025-05-31 20:56:40.861672 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 52.82s
2025-05-31 20:56:40.861681 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 15.87s
2025-05-31 20:56:40.861691 | orchestrator | Manage labels ---------------------------------------------------------- 10.74s
2025-05-31 20:56:40.861700 | orchestrator | kubectl : Install required packages ------------------------------------- 9.79s
2025-05-31 20:56:40.861710 | orchestrator | k3s_agent : Manage k3s service ------------------------------------------ 9.46s
2025-05-31 20:56:40.861719 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 6.55s
2025-05-31 20:56:40.861728 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 5.12s
2025-05-31 20:56:40.861738 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.03s
2025-05-31 20:56:40.861801 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 4.40s
2025-05-31 20:56:40.861811 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.12s
2025-05-31 20:56:40.861821 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.68s
2025-05-31 20:56:40.861831 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.67s
2025-05-31 20:56:40.861840 | orchestrator |
k3s_server : Copy vip manifest to first master -------------------------- 2.55s
2025-05-31 20:56:40.861849 | orchestrator | k3s_server_post : Test for BGP config resources ------------------------- 2.41s
2025-05-31 20:56:40.861859 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 2.38s
2025-05-31 20:56:40.861868 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 1.98s
2025-05-31 20:56:40.861877 | orchestrator | k3s_server : Copy K3s service file -------------------------------------- 1.67s
2025-05-31 20:56:40.861887 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 1.66s
2025-05-31 20:56:40.861901 | orchestrator | k3s_agent : Configure the k3s service ----------------------------------- 1.62s
2025-05-31 20:56:40.861917 | orchestrator | 2025-05-31 20:56:40 | INFO  | Task 0b023d43-8db6-45da-a3ef-ce0bc9e0da26 is in state STARTED
2025-05-31 20:56:40.861929 | orchestrator | 2025-05-31 20:56:40 | INFO  | Wait 1 second(s) until the next check
2025-05-31 20:56:43.886670 | orchestrator | 2025-05-31 20:56:43 | INFO  | Task fd314325-f586-41be-8318-ba939bda8e1c is in state STARTED
2025-05-31 20:56:43.887226 | orchestrator | 2025-05-31 20:56:43 | INFO  | Task fae633a2-e69d-4ae6-90b8-348d3b4747ca is in state STARTED
2025-05-31 20:56:43.888130 | orchestrator | 2025-05-31 20:56:43 | INFO  | Task b964d5a7-5bef-4a3e-97e3-ff548ef05d1f is in state STARTED
2025-05-31 20:56:43.888994 | orchestrator | 2025-05-31 20:56:43 | INFO  | Task a0bdcbf6-9613-4f1a-8dfe-6210c4f9a59a is in state STARTED
2025-05-31 20:56:43.889798 | orchestrator | 2025-05-31 20:56:43 | INFO  | Task 8f5deb58-d4bf-4622-9d9c-940b52794a00 is in state STARTED
2025-05-31 20:56:43.890541 | orchestrator | 2025-05-31 20:56:43 | INFO  | Task 0b023d43-8db6-45da-a3ef-ce0bc9e0da26 is in state STARTED
2025-05-31 20:56:43.890577 | orchestrator | 2025-05-31 20:56:43 | INFO  | Wait 1 second(s) until the next check
2025-05-31 20:56:46.941942 | orchestrator | 2025-05-31 20:56:46 | INFO  | Task fd314325-f586-41be-8318-ba939bda8e1c is in state STARTED
2025-05-31 20:56:46.942146 | orchestrator | 2025-05-31 20:56:46 | INFO  | Task fae633a2-e69d-4ae6-90b8-348d3b4747ca is in state STARTED
2025-05-31 20:56:46.948532 | orchestrator | 2025-05-31 20:56:46 | INFO  | Task b964d5a7-5bef-4a3e-97e3-ff548ef05d1f is in state STARTED
2025-05-31 20:56:46.953855 | orchestrator | 2025-05-31 20:56:46 | INFO  | Task a0bdcbf6-9613-4f1a-8dfe-6210c4f9a59a is in state STARTED
2025-05-31 20:56:46.957300 | orchestrator | 2025-05-31 20:56:46 | INFO  | Task 8f5deb58-d4bf-4622-9d9c-940b52794a00 is in state STARTED
2025-05-31 20:56:46.959995 | orchestrator | 2025-05-31 20:56:46 | INFO  | Task 0b023d43-8db6-45da-a3ef-ce0bc9e0da26 is in state SUCCESS
2025-05-31 20:56:46.960044 | orchestrator | 2025-05-31 20:56:46 | INFO  | Wait 1 second(s) until the next check
2025-05-31 20:56:50.051455 | orchestrator | 2025-05-31 20:56:50 | INFO  | Task fd314325-f586-41be-8318-ba939bda8e1c is in state STARTED
2025-05-31 20:56:50.053549 | orchestrator | 2025-05-31 20:56:50 | INFO  | Task fae633a2-e69d-4ae6-90b8-348d3b4747ca is in state STARTED
2025-05-31 20:56:50.056907 | orchestrator | 2025-05-31 20:56:50 | INFO  | Task
8f5deb58-d4bf-4622-9d9c-940b52794a00 is in state STARTED
2025-05-31 20:56:50.062970 | orchestrator | 2025-05-31 20:56:50 | INFO  | Wait 1 second(s) until the next check
2025-05-31 20:56:53.126568 | orchestrator | 2025-05-31 20:56:53 | INFO  | Task fd314325-f586-41be-8318-ba939bda8e1c is in state STARTED
2025-05-31 20:56:53.129381 | orchestrator | 2025-05-31 20:56:53 | INFO  | Task fae633a2-e69d-4ae6-90b8-348d3b4747ca is in state STARTED
2025-05-31 20:56:53.129449 | orchestrator | 2025-05-31 20:56:53 | INFO  | Task b964d5a7-5bef-4a3e-97e3-ff548ef05d1f is in state STARTED
2025-05-31 20:56:53.132379 | orchestrator | 2025-05-31 20:56:53 | INFO  | Task a0bdcbf6-9613-4f1a-8dfe-6210c4f9a59a is in state STARTED
2025-05-31 20:56:53.133485 | orchestrator | 2025-05-31 20:56:53 | INFO  | Task 8f5deb58-d4bf-4622-9d9c-940b52794a00 is in state SUCCESS
2025-05-31 20:56:53.134194 | orchestrator | 2025-05-31 20:56:53 | INFO  | Wait 1 second(s) until the next check
2025-05-31 20:57:32.792494 | orchestrator | 2025-05-31 20:57:32 | INFO  | Task fd314325-f586-41be-8318-ba939bda8e1c is in state STARTED
2025-05-31 20:57:32.792608 | orchestrator | 2025-05-31 20:57:32 | INFO  | Task fae633a2-e69d-4ae6-90b8-348d3b4747ca is in state STARTED
2025-05-31 20:57:32.793153 | orchestrator | 2025-05-31 20:57:32 | INFO  | Task b964d5a7-5bef-4a3e-97e3-ff548ef05d1f is in state STARTED
2025-05-31 20:57:32.793691 | orchestrator | 2025-05-31 20:57:32 | INFO  | Task a0bdcbf6-9613-4f1a-8dfe-6210c4f9a59a is in state STARTED
2025-05-31 20:57:32.793750 | orchestrator | 2025-05-31 20:57:32 | INFO  | Wait 1 second(s) until the next check
2025-05-31 20:57:35.823385 | orchestrator | 2025-05-31 20:57:35 | INFO  | Task fd314325-f586-41be-8318-ba939bda8e1c is in state SUCCESS
2025-05-31 20:57:35.824297 | orchestrator |
2025-05-31 20:57:35.824339 | orchestrator |
2025-05-31 20:57:35.824353 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] *************************
2025-05-31 20:57:35.824368 | orchestrator |
2025-05-31 20:57:35.824381 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2025-05-31 20:57:35.824393 | orchestrator | Saturday 31 May 2025 20:56:42 +0000 (0:00:00.195) 0:00:00.195 **********
2025-05-31 20:57:35.824405 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2025-05-31 20:57:35.824416 | orchestrator |
2025-05-31 20:57:35.824427 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2025-05-31 20:57:35.824437 | orchestrator | Saturday 31 May 2025 20:56:43 +0000 (0:00:00.829) 0:00:01.025 **********
2025-05-31 20:57:35.824448 | orchestrator | changed: [testbed-manager]
2025-05-31 20:57:35.824460 | orchestrator |
2025-05-31
20:57:35.824471 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2025-05-31 20:57:35.824482 | orchestrator | Saturday 31 May 2025 20:56:44 +0000 (0:00:01.039) 0:00:02.064 ********** 2025-05-31 20:57:35.824493 | orchestrator | changed: [testbed-manager] 2025-05-31 20:57:35.824504 | orchestrator | 2025-05-31 20:57:35.824514 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-31 20:57:35.824525 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-31 20:57:35.824538 | orchestrator | 2025-05-31 20:57:35.824549 | orchestrator | 2025-05-31 20:57:35.824560 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-31 20:57:35.824571 | orchestrator | Saturday 31 May 2025 20:56:44 +0000 (0:00:00.420) 0:00:02.484 ********** 2025-05-31 20:57:35.824582 | orchestrator | =============================================================================== 2025-05-31 20:57:35.824592 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.04s 2025-05-31 20:57:35.824603 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.83s 2025-05-31 20:57:35.824613 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.42s 2025-05-31 20:57:35.824624 | orchestrator | 2025-05-31 20:57:35.824635 | orchestrator | 2025-05-31 20:57:35.824646 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-05-31 20:57:35.824656 | orchestrator | 2025-05-31 20:57:35.824667 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-05-31 20:57:35.824678 | orchestrator | Saturday 31 May 2025 20:56:42 +0000 (0:00:00.149) 0:00:00.149 ********** 2025-05-31 20:57:35.824688 | orchestrator | ok: [testbed-manager] 2025-05-31 20:57:35.824700 | orchestrator | 2025-05-31 20:57:35.824711 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-05-31 20:57:35.824722 | orchestrator | Saturday 31 May 2025 20:56:43 +0000 (0:00:00.650) 0:00:00.799 ********** 2025-05-31 20:57:35.824732 | orchestrator | ok: [testbed-manager] 2025-05-31 20:57:35.824743 | orchestrator | 2025-05-31 20:57:35.824754 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-05-31 20:57:35.824787 | orchestrator | Saturday 31 May 2025 20:56:43 +0000 (0:00:00.570) 0:00:01.370 ********** 2025-05-31 20:57:35.824798 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-05-31 20:57:35.824808 | orchestrator | 2025-05-31 20:57:35.824855 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-05-31 20:57:35.824868 | orchestrator | Saturday 31 May 2025 20:56:44 +0000 (0:00:00.697) 0:00:02.067 ********** 2025-05-31 20:57:35.824879 | orchestrator | changed: [testbed-manager] 2025-05-31 20:57:35.824890 | orchestrator | 2025-05-31 20:57:35.824945 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-05-31 20:57:35.824958 | orchestrator | Saturday 31 May 2025 20:56:45 +0000 (0:00:01.248) 0:00:03.316 ********** 2025-05-31 20:57:35.824974 | orchestrator | changed: [testbed-manager] 2025-05-31 20:57:35.824992 | orchestrator | 2025-05-31 20:57:35.825009 | orchestrator | TASK [Make kubeconfig 
available for use inside the manager service] ************
2025-05-31 20:57:35.825029 | orchestrator | Saturday 31 May 2025 20:56:46 +0000 (0:00:00.871) 0:00:04.187 **********
2025-05-31 20:57:35.825044 | orchestrator | changed: [testbed-manager -> localhost]
2025-05-31 20:57:35.825055 | orchestrator |
2025-05-31 20:57:35.825065 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2025-05-31 20:57:35.825076 | orchestrator | Saturday 31 May 2025 20:56:48 +0000 (0:00:01.785) 0:00:05.973 **********
2025-05-31 20:57:35.825087 | orchestrator | changed: [testbed-manager -> localhost]
2025-05-31 20:57:35.825097 | orchestrator |
2025-05-31 20:57:35.825108 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2025-05-31 20:57:35.825119 | orchestrator | Saturday 31 May 2025 20:56:49 +0000 (0:00:00.864) 0:00:06.837 **********
2025-05-31 20:57:35.825129 | orchestrator | ok: [testbed-manager]
2025-05-31 20:57:35.825140 | orchestrator |
2025-05-31 20:57:35.825151 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2025-05-31 20:57:35.825162 | orchestrator | Saturday 31 May 2025 20:56:49 +0000 (0:00:00.417) 0:00:07.254 **********
2025-05-31 20:57:35.825172 | orchestrator | ok: [testbed-manager]
2025-05-31 20:57:35.825183 | orchestrator |
2025-05-31 20:57:35.825193 | orchestrator | PLAY RECAP *********************************************************************
2025-05-31 20:57:35.825204 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-31 20:57:35.825215 | orchestrator |
2025-05-31 20:57:35.825226 | orchestrator |
2025-05-31 20:57:35.825236 | orchestrator | TASKS RECAP ********************************************************************
2025-05-31 20:57:35.825247 | orchestrator | Saturday 31 May 2025 20:56:49 +0000 (0:00:00.317) 0:00:07.572 **********
2025-05-31 20:57:35.825273 | orchestrator | ===============================================================================
2025-05-31 20:57:35.825284 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.79s
2025-05-31 20:57:35.825294 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.25s
2025-05-31 20:57:35.825305 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.87s
2025-05-31 20:57:35.825330 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.86s
2025-05-31 20:57:35.825342 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.70s
2025-05-31 20:57:35.825352 | orchestrator | Get home directory of operator user ------------------------------------- 0.65s
2025-05-31 20:57:35.825363 | orchestrator | Create .kube directory -------------------------------------------------- 0.57s
2025-05-31 20:57:35.825373 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.42s
2025-05-31 20:57:35.825384 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.32s
2025-05-31 20:57:35.825395 | orchestrator |
2025-05-31 20:57:35.825405 | orchestrator |
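Both kubeconfig plays end by rewriting the server address: the kubeconfig fetched from testbed-node-0 points at the endpoint k3s wrote locally, while the manager should talk to the kube VIP (https://192.168.16.8:6443, as configured earlier in this log). A sketch of such a rewrite; the file path variable and the original address are assumptions, not taken from this job:

- name: Change server address in the kubeconfig
  ansible.builtin.replace:
    path: "{{ operator_home }}/.kube/config"   # from the 'Get home directory of operator user' task; variable name invented
    regexp: 'https://127\.0\.0\.1:6443'        # k3s default local endpoint, an assumption
    replace: "https://192.168.16.8:6443"       # kube VIP seen earlier in this log
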
2025-05-31 20:57:35.825416 | orchestrator | PLAY [Set kolla_action_rabbitmq] ***********************************************
2025-05-31 20:57:35.825426 | orchestrator |
2025-05-31 20:57:35.825448 | orchestrator | TASK [Inform the user about the following task] ********************************
2025-05-31 20:57:35.825459 | orchestrator | Saturday 31 May 2025 20:55:16 +0000 (0:00:00.300) 0:00:00.300 **********
2025-05-31 20:57:35.825470 | orchestrator | ok: [localhost] => {
2025-05-31 20:57:35.825481 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine."
2025-05-31 20:57:35.825493 | orchestrator | }
2025-05-31 20:57:35.825504 | orchestrator |
2025-05-31 20:57:35.825515 | orchestrator | TASK [Check RabbitMQ service] **************************************************
2025-05-31 20:57:35.825526 | orchestrator | Saturday 31 May 2025 20:55:16 +0000 (0:00:00.065) 0:00:00.365 **********
2025-05-31 20:57:35.825537 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"}
2025-05-31 20:57:35.825550 | orchestrator | ...ignoring
2025-05-31 20:57:35.825561 | orchestrator |
2025-05-31 20:57:35.825571 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ******
2025-05-31 20:57:35.825582 | orchestrator | Saturday 31 May 2025 20:55:20 +0000 (0:00:03.691) 0:00:04.057 **********
2025-05-31 20:57:35.825593 | orchestrator | skipping: [localhost]
2025-05-31 20:57:35.825603 | orchestrator |
2025-05-31 20:57:35.825614 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] *****************************
2025-05-31 20:57:35.825625 | orchestrator | Saturday 31 May 2025 20:55:20 +0000 (0:00:00.062) 0:00:04.119 **********
2025-05-31 20:57:35.825635 | orchestrator | ok: [localhost]
2025-05-31 20:57:35.825646 | orchestrator |
2025-05-31 20:57:35.825657 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-31 20:57:35.825667 | orchestrator |
2025-05-31 20:57:35.825678 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-31 20:57:35.825689 | orchestrator | Saturday 31 May 2025 20:55:20 +0000 (0:00:00.270) 0:00:04.390 **********
2025-05-31 20:57:35.825700 | orchestrator | ok: [testbed-node-0]
2025-05-31 20:57:35.825710 | orchestrator | ok: [testbed-node-1]
2025-05-31 20:57:35.825721 | orchestrator | ok: [testbed-node-2]
2025-05-31 20:57:35.825731 | orchestrator |
2025-05-31 20:57:35.825742 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-31 20:57:35.825753 | orchestrator | Saturday 31 May 2025 20:55:21 +0000 (0:00:00.833) 0:00:05.223 **********
2025-05-31 20:57:35.825763 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True)
2025-05-31 20:57:35.825775 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True)
2025-05-31 20:57:35.825785 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True)
2025-05-31 20:57:35.825796 | orchestrator |
2025-05-31 20:57:35.825807 | orchestrator | PLAY [Apply role rabbitmq] *****************************************************
2025-05-31 20:57:35.825884 | orchestrator |
2025-05-31 20:57:35.825896 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-05-31 20:57:35.825906 | orchestrator | Saturday 31 May 2025 20:55:23 +0000 (0:00:01.926) 0:00:07.150 **********
2025-05-31 20:57:35.825917 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-31 20:57:35.825928 | orchestrator |
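The "Check RabbitMQ service" probe in the play above is expected to fail on a fresh deployment, which is why the play announces it beforehand and ignores the error; its outcome is what gates the choice between a fresh deploy and an upgrade. A sketch of that gate, reconstructed from the log output (the timeout value is assumed from the reported elapsed time):

- name: Check RabbitMQ service
  ansible.builtin.wait_for:
    host: 192.168.16.9                # internal VIP from the log message
    port: 15672                       # RabbitMQ management port
    search_regex: RabbitMQ Management
    timeout: 2
  register: rabbitmq_check
  ignore_errors: true                 # "...ignoring" in the log

- name: Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running
  ansible.builtin.set_fact:
    kolla_action_rabbitmq: upgrade
  when: rabbitmq_check is succeeded   # skipped in this run because the probe timed out
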
2025-05-31 20:57:35.825657 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-31 20:57:35.825667 | orchestrator |
2025-05-31 20:57:35.825678 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-31 20:57:35.825689 | orchestrator | Saturday 31 May 2025 20:55:20 +0000 (0:00:00.270) 0:00:04.390 **********
2025-05-31 20:57:35.825700 | orchestrator | ok: [testbed-node-0]
2025-05-31 20:57:35.825710 | orchestrator | ok: [testbed-node-1]
2025-05-31 20:57:35.825721 | orchestrator | ok: [testbed-node-2]
2025-05-31 20:57:35.825731 | orchestrator |
2025-05-31 20:57:35.825742 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-31 20:57:35.825753 | orchestrator | Saturday 31 May 2025 20:55:21 +0000 (0:00:00.833) 0:00:05.223 **********
2025-05-31 20:57:35.825763 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True)
2025-05-31 20:57:35.825775 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True)
2025-05-31 20:57:35.825785 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True)
2025-05-31 20:57:35.825796 | orchestrator |
2025-05-31 20:57:35.825807 | orchestrator | PLAY [Apply role rabbitmq] *****************************************************
2025-05-31 20:57:35.825884 | orchestrator |
2025-05-31 20:57:35.825896 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-05-31 20:57:35.825906 | orchestrator | Saturday 31 May 2025 20:55:23 +0000 (0:00:01.926) 0:00:07.150 **********
2025-05-31 20:57:35.825917 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-31 20:57:35.825928 | orchestrator |
2025-05-31 20:57:35.825939 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2025-05-31 20:57:35.825949 | orchestrator | Saturday 31 May 2025 20:55:24 +0000 (0:00:01.141) 0:00:08.291 **********
2025-05-31 20:57:35.825960 | orchestrator | ok: [testbed-node-0]
2025-05-31 20:57:35.825971 | orchestrator |
2025-05-31 20:57:35.825982 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] *********************************
2025-05-31 20:57:35.825992 | orchestrator | Saturday 31 May 2025 20:55:25 +0000 (0:00:01.002) 0:00:09.293 **********
2025-05-31 20:57:35.826003 | orchestrator | skipping: [testbed-node-0]
2025-05-31 20:57:35.826126 | orchestrator |
2025-05-31 20:57:35.826154 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] *************************************
2025-05-31 20:57:35.826203 | orchestrator | Saturday 31 May 2025 20:55:25 +0000 (0:00:00.375) 0:00:09.669 **********
2025-05-31 20:57:35.826234 | orchestrator | skipping: [testbed-node-0]
2025-05-31 20:57:35.826252 | orchestrator |
2025-05-31 20:57:35.826273 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ******
2025-05-31 20:57:35.826290 | orchestrator | Saturday 31 May 2025 20:55:26 +0000 (0:00:00.632) 0:00:10.302 **********
2025-05-31 20:57:35.826310 | orchestrator | skipping: [testbed-node-0]
2025-05-31 20:57:35.826329 | orchestrator |
2025-05-31 20:57:35.826347 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] **********************
2025-05-31 20:57:35.826364 | orchestrator | Saturday 31 May 2025 20:55:26 +0000 (0:00:00.347) 0:00:10.649 **********
2025-05-31 20:57:35.826383 | orchestrator | skipping: [testbed-node-0]
2025-05-31 20:57:35.826401 | orchestrator |
2025-05-31 20:57:35.826419 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-05-31 20:57:35.826434 | orchestrator | Saturday 31 May 2025 20:55:27 +0000 (0:00:00.469) 0:00:11.119 **********
2025-05-31 20:57:35.826452 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-31 20:57:35.826462 | orchestrator |
2025-05-31 20:57:35.826471 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2025-05-31 20:57:35.826493 | orchestrator | Saturday 31 May 2025 20:55:27 +0000 (0:00:00.766) 0:00:11.886 **********
2025-05-31 20:57:35.826503 | orchestrator | ok: [testbed-node-0]
2025-05-31 20:57:35.826512 | orchestrator |
2025-05-31 20:57:35.826522 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] ***************************************
2025-05-31 20:57:35.826531 | orchestrator | Saturday 31 May 2025 20:55:28 +0000 (0:00:00.809) 0:00:12.695 **********
2025-05-31 20:57:35.826541 | orchestrator | skipping: [testbed-node-0]
2025-05-31 20:57:35.826550 | orchestrator |
2025-05-31 20:57:35.826560 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] ***************************
2025-05-31 20:57:35.826569 | orchestrator | Saturday 31 May 2025 20:55:29 +0000 (0:00:00.326) 0:00:13.022 **********
2025-05-31 20:57:35.826579 | orchestrator | skipping: [testbed-node-0]
2025-05-31 20:57:35.826588 | orchestrator |
2025-05-31 20:57:35.826598 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] ****************************
2025-05-31 20:57:35.826607 | orchestrator | Saturday 31 May 2025 20:55:29 +0000 (0:00:00.321) 0:00:13.343 **********
2025-05-31 20:57:35.826622 | orchestrator |
changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-31 20:57:35.826637 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-31 20:57:35.826658 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-31 20:57:35.826669 | orchestrator | 2025-05-31 20:57:35.826683 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2025-05-31 20:57:35.826693 | orchestrator | Saturday 31 May 2025 20:55:30 +0000 (0:00:01.449) 0:00:14.793 ********** 2025-05-31 20:57:35.826711 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-31 20:57:35.826722 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-31 20:57:35.826733 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-31 20:57:35.826749 | orchestrator | 2025-05-31 20:57:35.826760 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2025-05-31 20:57:35.826769 | orchestrator | Saturday 31 May 2025 20:55:33 +0000 (0:00:03.034) 0:00:17.827 ********** 2025-05-31 20:57:35.826779 | orchestrator | 
changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2025-05-31 20:57:35.826789 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2025-05-31 20:57:35.826799 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2025-05-31 20:57:35.826808 | orchestrator |
2025-05-31 20:57:35.826840 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] ***********************************
2025-05-31 20:57:35.826850 | orchestrator | Saturday 31 May 2025 20:55:35 +0000 (0:00:01.915) 0:00:19.743 **********
2025-05-31 20:57:35.826859 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2025-05-31 20:57:35.826873 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2025-05-31 20:57:35.826883 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2025-05-31 20:57:35.826892 | orchestrator |
2025-05-31 20:57:35.826902 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] **************************************
2025-05-31 20:57:35.826917 | orchestrator | Saturday 31 May 2025 20:55:37 +0000 (0:00:02.042) 0:00:21.785 **********
2025-05-31 20:57:35.826927 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2025-05-31 20:57:35.826937 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2025-05-31 20:57:35.826946 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2025-05-31 20:57:35.826956 | orchestrator |
2025-05-31 20:57:35.826967 | orchestrator | TASK [rabbitmq : Copying over advanced.config] *********************************
2025-05-31 20:57:35.826978 | orchestrator | Saturday 31 May 2025 20:55:39 +0000 (0:00:01.585) 0:00:23.370 **********
2025-05-31 20:57:35.826990 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2025-05-31 20:57:35.827000 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2025-05-31 20:57:35.827011 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2025-05-31 20:57:35.827022 | orchestrator |
2025-05-31 20:57:35.827033 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ********************************
2025-05-31 20:57:35.827044 | orchestrator | Saturday 31 May 2025 20:55:42 +0000 (0:00:02.966) 0:00:26.337 **********
2025-05-31 20:57:35.827055 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2025-05-31 20:57:35.827065 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2025-05-31 20:57:35.827076 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2025-05-31 20:57:35.827087 | orchestrator |
2025-05-31 20:57:35.827104 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] *********************************
2025-05-31 20:57:35.827116 | orchestrator | Saturday 31 May 2025 20:55:44 +0000 (0:00:01.678) 0:00:28.015 **********
2025-05-31 20:57:35.827126 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2025-05-31 20:57:35.827137 | orchestrator | changed:
[testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-05-31 20:57:35.827148 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-05-31 20:57:35.827159 | orchestrator | 2025-05-31 20:57:35.827169 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-05-31 20:57:35.827180 | orchestrator | Saturday 31 May 2025 20:55:45 +0000 (0:00:01.424) 0:00:29.439 ********** 2025-05-31 20:57:35.827192 | orchestrator | skipping: [testbed-node-0] 2025-05-31 20:57:35.827203 | orchestrator | skipping: [testbed-node-1] 2025-05-31 20:57:35.827213 | orchestrator | skipping: [testbed-node-2] 2025-05-31 20:57:35.827224 | orchestrator | 2025-05-31 20:57:35.827235 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2025-05-31 20:57:35.827246 | orchestrator | Saturday 31 May 2025 20:55:45 +0000 (0:00:00.390) 0:00:29.829 ********** 2025-05-31 20:57:35.827258 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-31 20:57:35.827281 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-31 20:57:35.827295 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 
'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-05-31 20:57:35.827314 | orchestrator |
2025-05-31 20:57:35.827326 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] *************************************
2025-05-31 20:57:35.827336 | orchestrator | Saturday 31 May 2025 20:55:47 +0000 (0:00:01.503) 0:00:31.333 **********
2025-05-31 20:57:35.827346 | orchestrator | changed: [testbed-node-0]
2025-05-31 20:57:35.827355 | orchestrator | changed: [testbed-node-1]
2025-05-31 20:57:35.827365 | orchestrator | changed: [testbed-node-2]
2025-05-31 20:57:35.827374 | orchestrator |
2025-05-31 20:57:35.827383 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] *************************
2025-05-31 20:57:35.827393 | orchestrator | Saturday 31 May 2025 20:55:48 +0000 (0:00:00.971) 0:00:32.304 **********
2025-05-31 20:57:35.827403 | orchestrator | changed: [testbed-node-0]
2025-05-31 20:57:35.827412 | orchestrator | changed: [testbed-node-1]
2025-05-31 20:57:35.827422 | orchestrator | changed: [testbed-node-2]
2025-05-31 20:57:35.827431 | orchestrator |
2025-05-31 20:57:35.827440 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************
2025-05-31 20:57:35.827450 | orchestrator | Saturday 31 May 2025 20:55:56 +0000 (0:00:08.222) 0:00:40.527 **********
2025-05-31 20:57:35.827459 | orchestrator | changed: [testbed-node-0]
2025-05-31 20:57:35.827469 | orchestrator | changed: [testbed-node-1]
2025-05-31 20:57:35.827478 | orchestrator | changed: [testbed-node-2]
2025-05-31 20:57:35.827488 | orchestrator |
2025-05-31 20:57:35.827497 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2025-05-31 20:57:35.827506 | orchestrator |
2025-05-31 20:57:35.827516 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2025-05-31 20:57:35.827525 | orchestrator | Saturday 31 May 2025 20:55:56 +0000 (0:00:00.275) 0:00:40.802 **********
2025-05-31 20:57:35.827535 | orchestrator | ok: [testbed-node-0]
2025-05-31 20:57:35.827588 | orchestrator |
2025-05-31 20:57:35.827599 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2025-05-31 20:57:35.827609 | orchestrator | Saturday 31 May 2025 20:55:57 +0000 (0:00:00.589) 0:00:41.392 **********
2025-05-31 20:57:35.827624 | orchestrator | skipping: [testbed-node-0]
2025-05-31 20:57:35.827639 | orchestrator |
2025-05-31 20:57:35.827654 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2025-05-31 20:57:35.827671 | orchestrator | Saturday 31 May 2025 20:55:57 +0000 (0:00:00.273) 0:00:41.666 **********
2025-05-31 20:57:35.827687 | orchestrator | changed: [testbed-node-0]
2025-05-31 20:57:35.827705 | orchestrator |
2025-05-31 20:57:35.827724 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2025-05-31 20:57:35.827736 | orchestrator | Saturday 31 May 2025 20:55:59 +0000 (0:00:01.538) 0:00:43.204 **********
2025-05-31 20:57:35.827746 | orchestrator | changed: [testbed-node-0]
2025-05-31 20:57:35.827756 | orchestrator |
2025-05-31 20:57:35.827765 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2025-05-31 20:57:35.827774 | orchestrator |
2025-05-31 20:57:35.827784 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2025-05-31 20:57:35.827793 | orchestrator | Saturday 31 May 2025 20:56:55 +0000 (0:00:55.807) 0:01:39.012 **********
2025-05-31 20:57:35.827803 | orchestrator | ok: [testbed-node-1]
2025-05-31 20:57:35.827834 | orchestrator |
2025-05-31 20:57:35.827845 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2025-05-31 20:57:35.827855 | orchestrator | Saturday 31 May 2025 20:56:55 +0000 (0:00:00.553) 0:01:39.566 **********
2025-05-31 20:57:35.827864 | orchestrator | skipping: [testbed-node-1]
2025-05-31 20:57:35.827874 | orchestrator |
2025-05-31 20:57:35.827884 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2025-05-31 20:57:35.827908 | orchestrator | Saturday 31 May 2025 20:56:55 +0000 (0:00:00.357) 0:01:39.923 **********
2025-05-31 20:57:35.827947 | orchestrator | changed: [testbed-node-1]
2025-05-31 20:57:35.827958 | orchestrator |
2025-05-31 20:57:35.827971 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2025-05-31 20:57:35.827987 | orchestrator | Saturday 31 May 2025 20:56:57 +0000 (0:00:01.867) 0:01:41.790 **********
2025-05-31 20:57:35.828004 | orchestrator | changed: [testbed-node-1]
2025-05-31 20:57:35.828023 | orchestrator |
2025-05-31 20:57:35.828048 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2025-05-31 20:57:35.828059 | orchestrator |
2025-05-31 20:57:35.828068 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2025-05-31 20:57:35.828078 | orchestrator | Saturday 31 May 2025 20:57:12 +0000 (0:00:14.584) 0:01:56.375 **********
2025-05-31 20:57:35.828087 | orchestrator | ok: [testbed-node-2]
2025-05-31 20:57:35.828097 | orchestrator |
2025-05-31 20:57:35.828114 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2025-05-31 20:57:35.828124 | orchestrator | Saturday 31 May 2025 20:57:12 +0000 (0:00:00.568) 0:01:56.944 **********
2025-05-31 20:57:35.828133 | orchestrator | skipping: [testbed-node-2]
2025-05-31 20:57:35.828143 | orchestrator |
2025-05-31 20:57:35.828152 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2025-05-31 20:57:35.828162 | orchestrator | Saturday 31 May 2025 20:57:13 +0000 (0:00:00.260) 0:01:57.204 **********
2025-05-31 20:57:35.828171 | orchestrator | changed: [testbed-node-2]
2025-05-31 20:57:35.828181 | orchestrator |
2025-05-31 20:57:35.828190 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2025-05-31 20:57:35.828200 | orchestrator | Saturday 31 May 2025 20:57:20 +0000 (0:00:07.067) 0:02:04.272 **********
2025-05-31 20:57:35.828210 | orchestrator | changed: [testbed-node-2]
2025-05-31 20:57:35.828219 | orchestrator |
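The three "Restart rabbitmq services" plays above deliberately run one node at a time (testbed-node-0 took roughly 56 s to come back, node-1 about 15 s, node-2 about 9 s), so the cluster never loses more than one member at once. Expressed as plain Ansible that shape is a serial: 1 play; in the sketch below the docker CLI stands in for the kolla container module and rabbitmqctl await_startup for the real readiness check (both stand-ins are assumptions):

- name: Restart rabbitmq services
  hosts: rabbitmq
  serial: 1                                   # restart one cluster member at a time
  tasks:
    - name: Restart rabbitmq container
      ansible.builtin.command: docker restart rabbitmq

    - name: Waiting for rabbitmq to start
      ansible.builtin.command: docker exec rabbitmq rabbitmqctl await_startup
      register: result
      retries: 30
      delay: 5
      until: result is success                # keep polling until the node has rejoined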
2025-05-31 20:57:35.828229 | orchestrator | PLAY [Apply rabbitmq post-configuration] ***************************************
2025-05-31 20:57:35.828238 | orchestrator |
2025-05-31 20:57:35.828248 | orchestrator | TASK [Include rabbitmq post-deploy.yml] ****************************************
2025-05-31 20:57:35.828257 | orchestrator | Saturday 31 May 2025 20:57:29 +0000 (0:00:09.313) 0:02:13.585 **********
2025-05-31 20:57:35.828267 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-31 20:57:35.828277 | orchestrator |
2025-05-31 20:57:35.828286 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ******************************
2025-05-31 20:57:35.828296 | orchestrator | Saturday 31 May 2025 20:57:30 +0000 (0:00:01.043) 0:02:14.629 **********
2025-05-31 20:57:35.828305 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2025-05-31 20:57:35.828315 | orchestrator | enable_outward_rabbitmq_True
2025-05-31 20:57:35.828324 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2025-05-31 20:57:35.828334 | orchestrator | outward_rabbitmq_restart
2025-05-31 20:57:35.828343 | orchestrator | ok: [testbed-node-1]
2025-05-31 20:57:35.828353 | orchestrator | ok: [testbed-node-2]
2025-05-31 20:57:35.828363 | orchestrator | ok: [testbed-node-0]
2025-05-31 20:57:35.828372 | orchestrator |
2025-05-31 20:57:35.828382 | orchestrator | PLAY [Apply role rabbitmq (outward)] *******************************************
2025-05-31 20:57:35.828391 | orchestrator | skipping: no hosts matched
2025-05-31 20:57:35.828401 | orchestrator |
2025-05-31 20:57:35.828410 | orchestrator | PLAY [Restart rabbitmq (outward) services] *************************************
2025-05-31 20:57:35.828420 | orchestrator | skipping: no hosts matched
2025-05-31 20:57:35.828429 | orchestrator |
2025-05-31 20:57:35.828439 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] *****************************
2025-05-31 20:57:35.828448 | orchestrator | skipping: no hosts matched
2025-05-31 20:57:35.828458 | orchestrator |
2025-05-31 20:57:35.828467 | orchestrator | PLAY RECAP *********************************************************************
2025-05-31 20:57:35.828478 | orchestrator | localhost      : ok=3  changed=0  unreachable=0 failed=0 skipped=1 rescued=0 ignored=1
2025-05-31 20:57:35.828496 | orchestrator | testbed-node-0 : ok=23 changed=14 unreachable=0 failed=0 skipped=8 rescued=0 ignored=0
2025-05-31 20:57:35.828506 | orchestrator | testbed-node-1 : ok=21 changed=14 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
2025-05-31 20:57:35.828516 | orchestrator | testbed-node-2 : ok=21 changed=14 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
2025-05-31 20:57:35.828526 | orchestrator |
2025-05-31 20:57:35.828535 | orchestrator |
2025-05-31 20:57:35.828545 | orchestrator | TASKS RECAP ********************************************************************
2025-05-31 20:57:35.828554 | orchestrator | Saturday 31 May 2025 20:57:33 +0000 (0:00:02.364) 0:02:16.994 **********
2025-05-31 20:57:35.828564 | orchestrator | ===============================================================================
2025-05-31 20:57:35.828573 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 79.71s
2025-05-31 20:57:35.828583 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 10.47s
2025-05-31 20:57:35.828592 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 8.22s
2025-05-31 20:57:35.828602 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.69s
2025-05-31 20:57:35.828611 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 3.03s
2025-05-31 20:57:35.828621 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.97s
2025-05-31 20:57:35.828630 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.36s
2025-05-31 20:57:35.828640 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.04s
2025-05-31 20:57:35.828649 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.93s
2025-05-31 20:57:35.828659 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.92s
2025-05-31 20:57:35.828668 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.71s
2025-05-31 20:57:35.828677 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.68s
2025-05-31 20:57:35.828687 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.59s
2025-05-31 20:57:35.828696 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.50s
2025-05-31 20:57:35.828706 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.45s
2025-05-31 20:57:35.828715 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.42s
2025-05-31 20:57:35.828725 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.14s
2025-05-31 20:57:35.828740 | orchestrator | Include rabbitmq post-deploy.yml ---------------------------------------- 1.04s
2025-05-31 20:57:35.828749 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.00s
2025-05-31 20:57:35.828759 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 0.97s
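The "Enable all stable feature flags" task earlier in this play corresponds to a single rabbitmqctl call per node; a sketch (running it through docker exec is an assumption):

- name: Enable all stable feature flags
  ansible.builtin.command: docker exec rabbitmq rabbitmqctl enable_feature_flag all
  changed_when: false    # enabling an already-enabled flag is a no-op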
2025-05-31 20:57:35.828769 | orchestrator | 2025-05-31 20:57:35 | INFO  | Task fae633a2-e69d-4ae6-90b8-348d3b4747ca is in state STARTED
2025-05-31 20:57:35.828779 | orchestrator | 2025-05-31 20:57:35 | INFO  | Task b964d5a7-5bef-4a3e-97e3-ff548ef05d1f is in state STARTED
2025-05-31 20:57:35.828932 | orchestrator | 2025-05-31 20:57:35 | INFO  | Task a0bdcbf6-9613-4f1a-8dfe-6210c4f9a59a is in state STARTED
2025-05-31 20:57:35.828949 | orchestrator | 2025-05-31 20:57:35 | INFO  | Wait 1 second(s) until the next check
2025-05-31 20:57:38.861068 | orchestrator | 2025-05-31 20:57:38 | INFO  | Task fae633a2-e69d-4ae6-90b8-348d3b4747ca is in state STARTED
2025-05-31 20:57:38.861175 | orchestrator | 2025-05-31 20:57:38 | INFO  | Task b964d5a7-5bef-4a3e-97e3-ff548ef05d1f is in state STARTED
2025-05-31 20:57:38.861852 | orchestrator | 2025-05-31 20:57:38 | INFO  | Task a0bdcbf6-9613-4f1a-8dfe-6210c4f9a59a is in state STARTED
2025-05-31 20:57:38.861920 | orchestrator | 2025-05-31 20:57:38 | INFO  | Wait 1 second(s) until the next check
2025-05-31 20:57:41.918750 | orchestrator | 2025-05-31 20:57:41 | INFO  | Task fae633a2-e69d-4ae6-90b8-348d3b4747ca is in state STARTED
2025-05-31 20:57:41.918907 | orchestrator | 2025-05-31 20:57:41 | INFO  | Task b964d5a7-5bef-4a3e-97e3-ff548ef05d1f is in state STARTED
2025-05-31 20:57:41.918924 | orchestrator | 2025-05-31 20:57:41 | INFO  | Task a0bdcbf6-9613-4f1a-8dfe-6210c4f9a59a is in state STARTED
2025-05-31 20:57:41.918935 | orchestrator | 2025-05-31 20:57:41 | INFO  | Wait 1 second(s) until the next check
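These INFO lines come from the client that queued the three jobs: it re-reads each task's state once per second until the task leaves STARTED. The same wait, written as an Ansible until loop (task_state is a purely hypothetical status helper, used only to illustrate the shape of the loop):

- name: Wait for the task to finish
  ansible.builtin.command: task_state fae633a2-e69d-4ae6-90b8-348d3b4747ca   # hypothetical helper that prints the state
  register: task
  until: task.stdout != "STARTED"
  retries: 600
  delay: 1          # "Wait 1 second(s) until the next check"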
2025-05-31 20:58:30.691791 | orchestrator | 2025-05-31 20:58:30 | INFO  | Task fae633a2-e69d-4ae6-90b8-348d3b4747ca is in state STARTED
2025-05-31 20:58:30.695482 | orchestrator | 2025-05-31 20:58:30 | INFO  | Task b964d5a7-5bef-4a3e-97e3-ff548ef05d1f is in state STARTED
2025-05-31 20:58:30.697647 | orchestrator | 2025-05-31 20:58:30 | INFO  | Task a0bdcbf6-9613-4f1a-8dfe-6210c4f9a59a is in state STARTED
2025-05-31 20:58:30.697800 | orchestrator | 2025-05-31 20:58:30 | INFO  | Wait 1 second(s) until the next check
2025-05-31 20:58:33.739814 | orchestrator | 2025-05-31 20:58:33 | INFO  | Task fae633a2-e69d-4ae6-90b8-348d3b4747ca is in state STARTED
2025-05-31 20:58:33.742176 | orchestrator | 2025-05-31 20:58:33 | INFO  | Task b964d5a7-5bef-4a3e-97e3-ff548ef05d1f is in state SUCCESS
2025-05-31 20:58:33.743703 | orchestrator |
2025-05-31 20:58:33.743746 | orchestrator |
2025-05-31 20:58:33.743758 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-31 20:58:33.743771 | orchestrator |
2025-05-31 20:58:33.743782 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-31 20:58:33.743795 | orchestrator | Saturday 31 May 2025 20:56:06 +0000 (0:00:00.185) 0:00:00.185 **********
2025-05-31 20:58:33.743871 | orchestrator | ok: [testbed-node-0]
2025-05-31 20:58:33.743886 | orchestrator | ok: [testbed-node-1]
2025-05-31 20:58:33.743898 | orchestrator | ok: [testbed-node-2]
2025-05-31 20:58:33.743908 | orchestrator | ok: [testbed-node-3]
2025-05-31 20:58:33.743919 | orchestrator | ok: [testbed-node-4]
2025-05-31 20:58:33.743930 | orchestrator | ok: [testbed-node-5]
2025-05-31 20:58:33.743940 | orchestrator |
2025-05-31 20:58:33.743951 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-31 20:58:33.743963 | orchestrator | Saturday 31 May 2025 20:56:07 +0000 (0:00:00.782) 0:00:00.968 **********
2025-05-31 20:58:33.743974 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True)
2025-05-31 20:58:33.743985 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True)
2025-05-31 20:58:33.743996 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True)
2025-05-31 20:58:33.744007 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True)
2025-05-31 20:58:33.744018 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True)
2025-05-31 20:58:33.744028 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True)
2025-05-31 20:58:33.744039 | orchestrator |
2025-05-31 20:58:33.744050 | orchestrator | PLAY [Apply role ovn-controller] ***********************************************
2025-05-31 20:58:33.744061 | orchestrator |
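The play starting here prepares ovn-controller on all six nodes. Its core, the "Configure OVN in OVSDB" task further below, writes external-ids keys into the local Open vSwitch database: the node's geneve tunnel endpoint, the southbound ovn-remote pointing at the three control nodes, and gateway options on the controllers. Done by hand, the equivalent for testbed-node-0 would look roughly like this (values taken from the log; the use of ovs-vsctl via the command module is an assumption, the real role uses its own module):

- name: Configure OVN in OVSDB
  ansible.builtin.command: ovs-vsctl set Open_vSwitch . external-ids:{{ item.name }}={{ item.value }}
  loop:
    - { name: ovn-encap-ip, value: "192.168.16.10" }      # this node's tunnel endpoint
    - { name: ovn-encap-type, value: geneve }
    - { name: ovn-remote, value: "tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642" }
    - { name: ovn-cms-options, value: "enable-chassis-as-gw,availability-zones=nova" }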
2025-05-31 20:58:33.744163 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2025-05-31 20:58:33.744175 | orchestrator | Saturday 31 May 2025 20:56:09 +0000 (0:00:01.534) 0:00:02.503 ********** 2025-05-31 20:58:33.744187 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-31 20:58:33.744224 | orchestrator | 2025-05-31 20:58:33.744236 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2025-05-31 20:58:33.744247 | orchestrator | Saturday 31 May 2025 20:56:10 +0000 (0:00:01.207) 0:00:03.710 ********** 2025-05-31 20:58:33.744260 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 20:58:33.744275 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 20:58:33.744286 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 20:58:33.744313 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 20:58:33.744327 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 20:58:33.744340 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 20:58:33.744352 | orchestrator | 2025-05-31 
20:58:33.744379 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2025-05-31 20:58:33.744392 | orchestrator | Saturday 31 May 2025 20:56:12 +0000 (0:00:01.913) 0:00:05.623 ********** 2025-05-31 20:58:33.744405 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 20:58:33.744418 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 20:58:33.744438 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 20:58:33.744451 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 20:58:33.744464 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 20:58:33.744476 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 20:58:33.744489 | orchestrator | 2025-05-31 20:58:33.744501 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2025-05-31 20:58:33.744514 | orchestrator | Saturday 31 May 2025 20:56:14 +0000 (0:00:02.268) 0:00:07.891 ********** 2025-05-31 20:58:33.744532 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 20:58:33.744545 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 20:58:33.744604 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 20:58:33.744619 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 20:58:33.744632 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 20:58:33.744651 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 20:58:33.744662 | orchestrator | 2025-05-31 20:58:33.744673 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2025-05-31 20:58:33.744684 | orchestrator | Saturday 31 May 2025 20:56:15 +0000 (0:00:01.303) 0:00:09.195 ********** 2025-05-31 20:58:33.744695 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 20:58:33.744706 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': 
['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-31 20:58:33.744717 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-31 20:58:33.744733 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-31 20:58:33.744745 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-31 20:58:33.744756 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-31 20:58:33.744767 | orchestrator |
2025-05-31 20:58:33.744784 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************
2025-05-31 20:58:33.744795 | orchestrator | Saturday 31 May 2025 20:56:17 +0000 (0:00:01.474) 0:00:10.670 **********
2025-05-31 20:58:33.744813 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-31 20:58:33.744824 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-31 20:58:33.744835 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-31 20:58:33.744868 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-31 20:58:33.744879 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-31 20:58:33.744890 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-31 20:58:33.744901 | orchestrator |
2025-05-31 20:58:33.744911 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ********************
2025-05-31 20:58:33.744922 | orchestrator | Saturday 31 May 2025 20:56:19 +0000 (0:00:02.040) 0:00:12.711 **********
2025-05-31 20:58:33.744934 | orchestrator | changed: [testbed-node-1]
2025-05-31 20:58:33.744950 | orchestrator | changed: [testbed-node-2]
2025-05-31 20:58:33.744961 | orchestrator | changed: [testbed-node-3]
2025-05-31 20:58:33.744972 | orchestrator | changed: [testbed-node-0]
2025-05-31 20:58:33.744983 | orchestrator | changed: [testbed-node-4]
2025-05-31 20:58:33.744993 | orchestrator | changed: [testbed-node-5]
2025-05-31 20:58:33.745004 | orchestrator |
2025-05-31 20:58:33.745015 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] *********************************
2025-05-31 20:58:33.745026 | orchestrator | Saturday 31 May 2025 20:56:21 +0000 (0:00:02.548) 0:00:15.259 **********
2025-05-31 20:58:33.745036 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'})
2025-05-31 20:58:33.745047 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'})
2025-05-31 20:58:33.745058 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'})
2025-05-31 20:58:33.745069 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'})
2025-05-31 20:58:33.745086 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'})
2025-05-31 20:58:33.745097 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'})
2025-05-31 20:58:33.745108 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-05-31 20:58:33.745118 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-05-31 20:58:33.745135 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-05-31 20:58:33.745146 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-05-31 20:58:33.745157 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-05-31 20:58:33.745167 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-05-31 20:58:33.745178 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-05-31 20:58:33.745191 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-05-31 20:58:33.745202 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-05-31 20:58:33.745213 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-05-31 20:58:33.745224 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-05-31 20:58:33.745234 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-05-31 20:58:33.745246 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-05-31 20:58:33.745257 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-05-31 20:58:33.745267 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-05-31 20:58:33.745278 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-05-31 20:58:33.745289 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-05-31 20:58:33.745299 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-05-31 20:58:33.745310 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-05-31 20:58:33.745321 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-05-31 20:58:33.745331 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-05-31 20:58:33.745342 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-05-31 20:58:33.745352 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-05-31 20:58:33.745363 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-05-31 20:58:33.745374 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-05-31 20:58:33.745384 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-05-31 20:58:33.745395 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-05-31 20:58:33.745406 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-05-31 20:58:33.745423 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-05-31 20:58:33.745438 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-05-31 20:58:33.745449 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-05-31 20:58:33.745460 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-05-31 20:58:33.745471 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-05-31 20:58:33.745482 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-05-31 20:58:33.745492 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-05-31 20:58:33.745503 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-05-31 20:58:33.745514 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'})
2025-05-31 20:58:33.745524 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'})
2025-05-31 20:58:33.745541 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'})
2025-05-31 20:58:33.745552 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'})
2025-05-31 20:58:33.745563 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'})
2025-05-31 20:58:33.745573 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'})
2025-05-31 20:58:33.745584 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-05-31 20:58:33.745595 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-05-31 20:58:33.745606 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-05-31 20:58:33.745616 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-05-31 20:58:33.745627 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-05-31 20:58:33.745638 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-05-31 20:58:33.745649 | orchestrator |
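The two tasks above do the actual chassis wiring: every node gets a br-int integration bridge, and the local Open_vSwitch record is stamped with the external_ids keys that ovn-controller reads (encapsulation IP and type, the southbound remotes, probe intervals, and, on the gateway-capable nodes, the physnet bridge mapping and CMS options). The role applies these through Ansible modules; a rough hand-run equivalent for testbed-node-0, with values taken from the log and the container name being an assumption about the usual kolla layout, would be:

    # Sketch only: replay testbed-node-0's settings by hand.
    # "openvswitch_vswitchd" is the customary kolla container for ovs-vsctl (assumption).
    docker exec openvswitch_vswitchd ovs-vsctl --may-exist add-br br-int
    docker exec openvswitch_vswitchd ovs-vsctl set Open_vSwitch . \
      external_ids:ovn-encap-ip=192.168.16.10 \
      external_ids:ovn-encap-type=geneve \
      external_ids:ovn-remote="tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642" \
      external_ids:ovn-remote-probe-interval=60000 \
      external_ids:ovn-openflow-probe-interval=60 \
      external_ids:ovn-monitor-all=false \
      external_ids:ovn-bridge-mappings=physnet1:br-ex \
      external_ids:ovn-cms-options="enable-chassis-as-gw,availability-zones=nova"
    # Inspect what ovn-controller will pick up:
    docker exec openvswitch_vswitchd ovs-vsctl get Open_vSwitch . external_ids

Note how the play splits the roles: testbed-node-0..2 get ovn-bridge-mappings and enable-chassis-as-gw (and drop any chassis MAC mapping), while the compute-only nodes 3..5 get ovn-chassis-mac-mappings instead.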
2025-05-31 20:58:33.745660 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-05-31 20:58:33.745670 | orchestrator | Saturday 31 May 2025 20:56:40 +0000 (0:00:18.301) 0:00:33.561 **********
2025-05-31 20:58:33.745681 | orchestrator |
2025-05-31 20:58:33.745692 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-05-31 20:58:33.745703 | orchestrator | Saturday 31 May 2025 20:56:40 +0000 (0:00:00.136) 0:00:33.697 **********
2025-05-31 20:58:33.745713 | orchestrator |
2025-05-31 20:58:33.745724 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-05-31 20:58:33.745734 | orchestrator | Saturday 31 May 2025 20:56:40 +0000 (0:00:00.154) 0:00:33.852 **********
2025-05-31 20:58:33.745745 | orchestrator |
2025-05-31 20:58:33.745756 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-05-31 20:58:33.745766 | orchestrator | Saturday 31 May 2025 20:56:40 +0000 (0:00:00.136) 0:00:33.989 **********
2025-05-31 20:58:33.745784 | orchestrator |
2025-05-31 20:58:33.745795 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-05-31 20:58:33.745805 | orchestrator | Saturday 31 May 2025 20:56:40 +0000 (0:00:00.133) 0:00:34.122 **********
2025-05-31 20:58:33.745816 | orchestrator |
2025-05-31 20:58:33.745827 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-05-31 20:58:33.745837 | orchestrator | Saturday 31 May 2025 20:56:40 +0000 (0:00:00.144) 0:00:34.267 **********
2025-05-31 20:58:33.745863 | orchestrator |
2025-05-31 20:58:33.745874 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] ***********************
2025-05-31 20:58:33.745885 | orchestrator | Saturday 31 May 2025 20:56:41 +0000 (0:00:00.148) 0:00:34.415 **********
2025-05-31 20:58:33.745896 | orchestrator | ok: [testbed-node-3]
2025-05-31 20:58:33.745906 | orchestrator | ok: [testbed-node-2]
2025-05-31 20:58:33.745917 | orchestrator | ok: [testbed-node-1]
2025-05-31 20:58:33.745928 | orchestrator | ok: [testbed-node-4]
2025-05-31 20:58:33.745939 | orchestrator | ok: [testbed-node-0]
2025-05-31 20:58:33.745949 | orchestrator | ok: [testbed-node-5]
2025-05-31 20:58:33.745960 | orchestrator |
2025-05-31 20:58:33.745971 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************
2025-05-31 20:58:33.745981 | orchestrator | Saturday 31 May 2025 20:56:43 +0000 (0:00:02.111) 0:00:36.527 **********
2025-05-31 20:58:33.745992 | orchestrator | changed: [testbed-node-0]
2025-05-31 20:58:33.746003 | orchestrator | changed: [testbed-node-1]
2025-05-31 20:58:33.746056 | orchestrator | changed: [testbed-node-4]
2025-05-31 20:58:33.746071 | orchestrator | changed: [testbed-node-3]
2025-05-31 20:58:33.746082 | orchestrator | changed: [testbed-node-2]
2025-05-31 20:58:33.746098 | orchestrator | changed: [testbed-node-5]
2025-05-31 20:58:33.746110 | orchestrator |
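The handler pair above is kolla-ansible's standard restart path: systemd configuration is reloaded first, then the ovn_controller container is bounced on all six nodes (at 38.62s the single most expensive step in the recap below). A quick manual spot-check after such a restart might look like this; the log path is an assumption about where these images place their kolla logs:

    # Confirm the chassis container came back on a node:
    docker ps --filter name=ovn_controller --format '{{.Names}}\t{{.Status}}\t{{.Image}}'
    # Tail its service log via the kolla_logs volume mount seen above
    # (exact subdirectory is an assumption):
    tail -n 50 /var/log/kolla/openvswitch/ovn-controller.log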
2025-05-31 20:58:33.746120 | orchestrator | PLAY [Apply role ovn-db] *******************************************************
2025-05-31 20:58:33.746131 | orchestrator |
2025-05-31 20:58:33.746142 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-05-31 20:58:33.746153 | orchestrator | Saturday 31 May 2025 20:57:21 +0000 (0:00:38.615) 0:01:15.142 **********
2025-05-31 20:58:33.746163 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-31 20:58:33.746175 | orchestrator |
2025-05-31 20:58:33.746185 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-05-31 20:58:33.746196 | orchestrator | Saturday 31 May 2025 20:57:22 +0000 (0:00:00.533) 0:01:15.675 **********
2025-05-31 20:58:33.746207 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-31 20:58:33.746218 | orchestrator |
2025-05-31 20:58:33.746229 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] *************
2025-05-31 20:58:33.746240 | orchestrator | Saturday 31 May 2025 20:57:23 +0000 (0:00:00.738) 0:01:16.414 **********
2025-05-31 20:58:33.746250 | orchestrator | ok: [testbed-node-2]
2025-05-31 20:58:33.746261 | orchestrator | ok: [testbed-node-1]
2025-05-31 20:58:33.746272 | orchestrator | ok: [testbed-node-0]
2025-05-31 20:58:33.746282 | orchestrator |
2025-05-31 20:58:33.746293 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] ***************
2025-05-31 20:58:33.746304 | orchestrator | Saturday 31 May 2025 20:57:23 +0000 (0:00:00.792) 0:01:17.206 **********
2025-05-31 20:58:33.746314 | orchestrator | ok: [testbed-node-0]
2025-05-31 20:58:33.746325 | orchestrator | ok: [testbed-node-1]
2025-05-31 20:58:33.746335 | orchestrator | ok: [testbed-node-2]
2025-05-31 20:58:33.746353 | orchestrator |
2025-05-31 20:58:33.746364 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] ***************
2025-05-31 20:58:33.746375 | orchestrator | Saturday 31 May 2025 20:57:24 +0000 (0:00:00.361) 0:01:17.568 **********
2025-05-31 20:58:33.746385 | orchestrator | ok: [testbed-node-0]
2025-05-31 20:58:33.746396 | orchestrator | ok: [testbed-node-1]
2025-05-31 20:58:33.746407 | orchestrator | ok: [testbed-node-2]
2025-05-31 20:58:33.746417 | orchestrator |
2025-05-31 20:58:33.746435 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] *******
2025-05-31 20:58:33.746446 | orchestrator | Saturday 31 May 2025 20:57:24 +0000 (0:00:00.401) 0:01:17.969 **********
2025-05-31 20:58:33.746457 | orchestrator | ok: [testbed-node-0]
2025-05-31 20:58:33.746468 | orchestrator | ok: [testbed-node-1]
2025-05-31 20:58:33.746479 | orchestrator | ok: [testbed-node-2]
2025-05-31 20:58:33.746489 | orchestrator |
2025-05-31 20:58:33.746500 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] *******
2025-05-31 20:58:33.746511 | orchestrator | Saturday 31 May 2025 20:57:25 +0000 (0:00:00.523) 0:01:18.493 **********
2025-05-31 20:58:33.746521 | orchestrator | ok: [testbed-node-0]
2025-05-31 20:58:33.746532 | orchestrator | ok: [testbed-node-1]
2025-05-31 20:58:33.746543 | orchestrator | ok: [testbed-node-2]
2025-05-31 20:58:33.746553 | orchestrator |
2025-05-31 20:58:33.746564 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************
2025-05-31 20:58:33.746575 | orchestrator | Saturday 31 May 2025 20:57:25 +0000 (0:00:00.409) 0:01:18.902 **********
2025-05-31 20:58:33.746585 | orchestrator | skipping: [testbed-node-0]
2025-05-31 20:58:33.746596 | orchestrator | skipping: [testbed-node-1]
2025-05-31 20:58:33.746607 | orchestrator | skipping: [testbed-node-2]
2025-05-31 20:58:33.746617 | orchestrator |
2025-05-31 20:58:33.746628 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] *****************************
2025-05-31 20:58:33.746638 | orchestrator | Saturday 31 May 2025 20:57:25 +0000 (0:00:00.280) 0:01:19.182 **********
2025-05-31 20:58:33.746649 | orchestrator | skipping: [testbed-node-0]
2025-05-31 20:58:33.746660 | orchestrator | skipping: [testbed-node-1]
2025-05-31 20:58:33.746670 | orchestrator | skipping: [testbed-node-2]
2025-05-31 20:58:33.746681 | orchestrator |
2025-05-31 20:58:33.746692 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] *************
2025-05-31 20:58:33.746702 | orchestrator | Saturday 31 May 2025 20:57:26 +0000 (0:00:00.327) 0:01:19.510 **********
2025-05-31 20:58:33.746713 | orchestrator | skipping: [testbed-node-0]
2025-05-31 20:58:33.746723 | orchestrator | skipping: [testbed-node-1]
2025-05-31 20:58:33.746734 | orchestrator | skipping: [testbed-node-2]
2025-05-31 20:58:33.746745 | orchestrator |
2025-05-31 20:58:33.746755 | orchestrator | TASK [ovn-db : Get OVN NB database information] ********************************
2025-05-31 20:58:33.746766 | orchestrator | Saturday 31 May 2025 20:57:26 +0000 (0:00:00.478) 0:01:19.989 **********
2025-05-31 20:58:33.746776 | orchestrator | skipping: [testbed-node-0]
2025-05-31 20:58:33.746787 | orchestrator | skipping: [testbed-node-1]
2025-05-31 20:58:33.746797 | orchestrator | skipping: [testbed-node-2]
2025-05-31 20:58:33.746808 | orchestrator |
2025-05-31 20:58:33.746819 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] **************
2025-05-31 20:58:33.746829 | orchestrator | Saturday 31 May 2025 20:57:26 +0000 (0:00:00.307) 0:01:20.296 **********
2025-05-31 20:58:33.746897 | orchestrator | skipping: [testbed-node-0]
2025-05-31 20:58:33.746910 | orchestrator | skipping: [testbed-node-1]
2025-05-31 20:58:33.746920 | orchestrator | skipping: [testbed-node-2]
2025-05-31 20:58:33.746931 | orchestrator |
2025-05-31 20:58:33.746942 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] *****************
2025-05-31 20:58:33.746976 | orchestrator | Saturday 31 May 2025 20:57:27 +0000 (0:00:00.369) 0:01:20.666 **********
2025-05-31 20:58:33.746987 | orchestrator | skipping: [testbed-node-0]
2025-05-31 20:58:33.746998 | orchestrator | skipping: [testbed-node-1]
2025-05-31 20:58:33.747009 | orchestrator | skipping: [testbed-node-2]
2025-05-31 20:58:33.747019 | orchestrator |
2025-05-31 20:58:33.747030 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************
2025-05-31 20:58:33.747040 | orchestrator | Saturday 31 May 2025 20:57:27 +0000 (0:00:00.584) 0:01:21.250 **********
2025-05-31 20:58:33.747051 | orchestrator | skipping: [testbed-node-0]
2025-05-31 20:58:33.747062 | orchestrator | skipping: [testbed-node-1]
2025-05-31 20:58:33.747072 | orchestrator | skipping: [testbed-node-2]
2025-05-31 20:58:33.747083 | orchestrator |
2025-05-31 20:58:33.747093 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] *****************************
2025-05-31 20:58:33.747121 | orchestrator | Saturday 31 May 2025 20:57:28 +0000 (0:00:00.937) 0:01:22.188 **********
2025-05-31 20:58:33.747132 | orchestrator | skipping: [testbed-node-0]
2025-05-31 20:58:33.747147 | orchestrator | skipping: [testbed-node-1]
2025-05-31 20:58:33.747157 | orchestrator | skipping: [testbed-node-2]
2025-05-31 20:58:33.747167 | orchestrator |
2025-05-31 20:58:33.747176 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] *************
2025-05-31 20:58:33.747186 | orchestrator | Saturday 31 May 2025 20:57:29 +0000 (0:00:00.546) 0:01:22.735 **********
2025-05-31 20:58:33.747195 | orchestrator | skipping: [testbed-node-0]
2025-05-31 20:58:33.747205 | orchestrator | skipping: [testbed-node-1]
2025-05-31 20:58:33.747214 | orchestrator | skipping: [testbed-node-2]
2025-05-31 20:58:33.747223 | orchestrator |
2025-05-31 20:58:33.747233 | orchestrator | TASK [ovn-db : Get OVN SB database information] ********************************
2025-05-31 20:58:33.747242 | orchestrator | Saturday 31 May 2025 20:57:29 +0000 (0:00:00.421) 0:01:23.156 **********
2025-05-31 20:58:33.747252 | orchestrator | skipping: [testbed-node-0]
2025-05-31 20:58:33.747261 | orchestrator | skipping: [testbed-node-1]
2025-05-31 20:58:33.747270 | orchestrator | skipping: [testbed-node-2]
2025-05-31 20:58:33.747280 | orchestrator |
2025-05-31 20:58:33.747289 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] **************
2025-05-31 20:58:33.747298 | orchestrator | Saturday 31 May 2025 20:57:30 +0000 (0:00:00.567) 0:01:23.724 **********
2025-05-31 20:58:33.747308 | orchestrator | skipping: [testbed-node-0]
2025-05-31 20:58:33.747317 | orchestrator | skipping: [testbed-node-1]
2025-05-31 20:58:33.747326 | orchestrator | skipping: [testbed-node-2]
2025-05-31 20:58:33.747336 | orchestrator |
2025-05-31 20:58:33.747345 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] *****************
2025-05-31 20:58:33.747355 | orchestrator | Saturday 31 May 2025 20:57:31 +0000 (0:00:00.728) 0:01:24.452 **********
2025-05-31 20:58:33.747364 | orchestrator | skipping: [testbed-node-0]
2025-05-31 20:58:33.747374 | orchestrator | skipping: [testbed-node-1]
2025-05-31 20:58:33.747389 | orchestrator | skipping: [testbed-node-2]
2025-05-31 20:58:33.747398 | orchestrator |
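Everything in lookup_cluster.yml above is a routing decision: the role first looks for existing ovn_nb_db/ovn_sb_db container volumes, and only if they exist does it go on to probe port liveness and Raft leader/follower roles. Here all three hosts report no volumes, so each subsequent check short-circuits to skipped and the play drops into bootstrap-initial.yml for a fresh cluster. The first check can be reproduced by hand roughly like this (volume names come from the container definitions later in the log):

    # An empty result on all of testbed-node-0..2 is what selects the
    # fresh-bootstrap path instead of the join-existing-cluster path.
    docker volume ls --filter name=ovn_nb_db --format '{{.Name}}'
    docker volume ls --filter name=ovn_sb_db --format '{{.Name}}'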
2025-05-31 20:58:33.747408 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-05-31 20:58:33.747417 | orchestrator | Saturday 31 May 2025 20:57:31 +0000 (0:00:00.250) 0:01:24.702 **********
2025-05-31 20:58:33.747427 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-31 20:58:33.747437 | orchestrator |
2025-05-31 20:58:33.747446 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] *******************
2025-05-31 20:58:33.747456 | orchestrator | Saturday 31 May 2025 20:57:31 +0000 (0:00:00.485) 0:01:25.187 **********
2025-05-31 20:58:33.747465 | orchestrator | ok: [testbed-node-0]
2025-05-31 20:58:33.747475 | orchestrator | ok: [testbed-node-1]
2025-05-31 20:58:33.747484 | orchestrator | ok: [testbed-node-2]
2025-05-31 20:58:33.747493 | orchestrator |
2025-05-31 20:58:33.747503 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] *******************
2025-05-31 20:58:33.747512 | orchestrator | Saturday 31 May 2025 20:57:32 +0000 (0:00:00.727) 0:01:25.915 **********
2025-05-31 20:58:33.747522 | orchestrator | ok: [testbed-node-0]
2025-05-31 20:58:33.747531 | orchestrator | ok: [testbed-node-1]
2025-05-31 20:58:33.747541 | orchestrator | ok: [testbed-node-2]
2025-05-31 20:58:33.747550 | orchestrator |
2025-05-31 20:58:33.747559 | orchestrator | TASK [ovn-db : Check NB cluster status] ****************************************
2025-05-31 20:58:33.747569 | orchestrator | Saturday 31 May 2025 20:57:33 +0000 (0:00:00.528) 0:01:26.444 **********
2025-05-31 20:58:33.747578 | orchestrator | skipping: [testbed-node-0]
2025-05-31 20:58:33.747588 | orchestrator | skipping: [testbed-node-1]
2025-05-31 20:58:33.747597 | orchestrator | skipping: [testbed-node-2]
2025-05-31 20:58:33.747607 | orchestrator |
2025-05-31 20:58:33.747616 | orchestrator | TASK [ovn-db : Check SB cluster status] ****************************************
2025-05-31 20:58:33.747626 | orchestrator | Saturday 31 May 2025 20:57:33 +0000 (0:00:00.290) 0:01:26.735 **********
2025-05-31 20:58:33.747641 | orchestrator | skipping: [testbed-node-0]
2025-05-31 20:58:33.747651 | orchestrator | skipping: [testbed-node-1]
2025-05-31 20:58:33.747660 | orchestrator | skipping: [testbed-node-2]
2025-05-31 20:58:33.747669 | orchestrator |
2025-05-31 20:58:33.747678 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] ***
2025-05-31 20:58:33.747688 | orchestrator | Saturday 31 May 2025 20:57:33 +0000 (0:00:00.245) 0:01:26.980 **********
2025-05-31 20:58:33.747697 | orchestrator | skipping: [testbed-node-0]
2025-05-31 20:58:33.747707 | orchestrator | skipping: [testbed-node-1]
2025-05-31 20:58:33.747716 | orchestrator | skipping: [testbed-node-2]
2025-05-31 20:58:33.747725 | orchestrator |
2025-05-31 20:58:33.747735 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] ***
2025-05-31 20:58:33.747744 | orchestrator | Saturday 31 May 2025 20:57:34 +0000 (0:00:00.392) 0:01:27.372 **********
2025-05-31 20:58:33.747753 | orchestrator | skipping: [testbed-node-1]
2025-05-31 20:58:33.747763 | orchestrator | skipping: [testbed-node-0]
2025-05-31 20:58:33.747772 | orchestrator | skipping: [testbed-node-2]
2025-05-31 20:58:33.747781 | orchestrator |
2025-05-31 20:58:33.747791 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ********************
2025-05-31 20:58:33.747801 | orchestrator | Saturday 31 May 2025 20:57:34 +0000 (0:00:00.250) 0:01:27.623 **********
2025-05-31 20:58:33.747810 | orchestrator | skipping: [testbed-node-0]
2025-05-31 20:58:33.747819 | orchestrator | skipping: [testbed-node-1]
2025-05-31 20:58:33.747829 | orchestrator | skipping: [testbed-node-2]
2025-05-31 20:58:33.747838 | orchestrator |
2025-05-31 20:58:33.747863 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ********************
2025-05-31 20:58:33.747873 | orchestrator | Saturday 31 May 2025 20:57:34 +0000 (0:00:00.293) 0:01:27.916 **********
2025-05-31 20:58:33.747882 | orchestrator | skipping: [testbed-node-0]
2025-05-31 20:58:33.747892 | orchestrator | skipping: [testbed-node-1]
2025-05-31 20:58:33.747901 | orchestrator | skipping: [testbed-node-2]
2025-05-31 20:58:33.747910 | orchestrator |
2025-05-31 20:58:33.747920 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2025-05-31 20:58:33.747929 | orchestrator | Saturday 31 May 2025 20:57:34 +0000 (0:00:00.292) 0:01:28.209 **********
2025-05-31 20:58:33.747944 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-31 20:58:33.747956 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-31 20:58:33.747966 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-31 20:58:33.747982 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-31 20:58:33.747994 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-31 20:58:33.748010 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-31 20:58:33.748020 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-31 20:58:33.748030 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-31 20:58:33.748040 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-31 20:58:33.748050 | orchestrator |
2025-05-31 20:58:33.748059 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2025-05-31 20:58:33.748069 | orchestrator | Saturday 31 May 2025 20:57:36 +0000 (0:00:01.452) 0:01:29.661 **********
2025-05-31 20:58:33.748079 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-31 20:58:33.748093 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-31 20:58:33.748103 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-31 20:58:33.748113 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-31 20:58:33.748128 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-31 20:58:33.748144 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-31 20:58:33.748153 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-31 20:58:33.748163 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-31 20:58:33.748173 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-31 20:58:33.748183 | orchestrator |
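The Ensuring config directories / Copying over config.json pairing above is kolla's generic service contract rather than anything OVN-specific: each service's /etc/kolla/<service>/ directory on the host is bind-mounted read-only to /var/lib/kolla/config_files/ inside the container (visible in the volume lists), and the container entrypoint copies the files named in config.json into place before starting the daemon. One way to inspect the rendered contract, assuming nothing beyond the mounts shown above:

    # The host-side directory the role just populated:
    ls /etc/kolla/ovn-northd/
    # The same contract as the container entrypoint sees it:
    docker exec ovn_northd cat /var/lib/kolla/config_files/config.json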
2025-05-31 20:58:33.748192 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2025-05-31 20:58:33.748202 | orchestrator | Saturday 31 May 2025 20:57:40 +0000 (0:00:03.925) 0:01:33.587 **********
2025-05-31 20:58:33.748212 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-31 20:58:33.748221 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-31 20:58:33.748235 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-31 20:58:33.748245 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-31 20:58:33.748255 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-31 20:58:33.748278 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-31 20:58:33.748288 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-31 20:58:33.748298 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-31 20:58:33.748308 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-31 20:58:33.748317 | orchestrator |
2025-05-31 20:58:33.748327 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-05-31 20:58:33.748337 | orchestrator | Saturday 31 May 2025 20:57:42 +0000 (0:00:02.241) 0:01:35.828 **********
2025-05-31 20:58:33.748346 | orchestrator |
2025-05-31 20:58:33.748356 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-05-31 20:58:33.748365 | orchestrator | Saturday 31 May 2025 20:57:42 +0000 (0:00:00.064) 0:01:35.892 **********
2025-05-31 20:58:33.748375 | orchestrator |
2025-05-31 20:58:33.748384 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-05-31 20:58:33.748393 | orchestrator | Saturday 31 May 2025 20:57:42 +0000 (0:00:00.064) 0:01:35.957 **********
2025-05-31 20:58:33.748403 | orchestrator |
2025-05-31 20:58:33.748412 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2025-05-31 20:58:33.748421 | orchestrator | Saturday 31 May 2025 20:57:42 +0000 (0:00:00.084) 0:01:36.042 **********
2025-05-31 20:58:33.748431 | orchestrator | changed: [testbed-node-0]
2025-05-31 20:58:33.748441 | orchestrator | changed: [testbed-node-2]
2025-05-31 20:58:33.748450 | orchestrator | changed: [testbed-node-1]
2025-05-31 20:58:33.748459 | orchestrator |
2025-05-31 20:58:33.748469 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2025-05-31 20:58:33.748478 | orchestrator | Saturday 31 May 2025 20:57:45 +0000 (0:00:02.490) 0:01:38.533 **********
2025-05-31 20:58:33.748488 | orchestrator | changed: [testbed-node-0]
2025-05-31 20:58:33.748498 | orchestrator | changed: [testbed-node-1]
2025-05-31 20:58:33.748507 | orchestrator | changed: [testbed-node-2]
2025-05-31 20:58:33.748516 | orchestrator |
2025-05-31 20:58:33.748526 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2025-05-31 20:58:33.748535 | orchestrator | Saturday 31 May 2025 20:57:47 +0000 (0:00:02.760) 0:01:41.293 **********
2025-05-31 20:58:33.748545 | orchestrator | changed: [testbed-node-0]
2025-05-31 20:58:33.748554 | orchestrator | changed: [testbed-node-1]
2025-05-31 20:58:33.748564 | orchestrator | changed: [testbed-node-2]
2025-05-31 20:58:33.748573 | orchestrator |
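With the three ovn_nb_db and ovn_sb_db containers started, each database forms a three-member Raft cluster, and the tasks that follow poll for a leader before any connection settings are written. A manual equivalent of that poll, assuming the stock ovn-ctl control-socket paths inside these images:

    # Raft status of the Northbound database (run on any of testbed-node-0..2):
    docker exec ovn_nb_db ovn-appctl -t /var/run/ovn/ovnnb_db.ctl cluster/status OVN_Northbound
    # And the Southbound database:
    docker exec ovn_sb_db ovn-appctl -t /var/run/ovn/ovnsb_db.ctl cluster/status OVN_Southbound
    # A healthy cluster shows one "Role: leader" and two followers.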
2025-05-31 20:58:33.748582 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2025-05-31 20:58:33.748599 | orchestrator | Saturday 31 May 2025 20:57:55 +0000 (0:00:07.577) 0:01:48.871 **********
2025-05-31 20:58:33.748608 | orchestrator | skipping: [testbed-node-0]
2025-05-31 20:58:33.748617 | orchestrator |
2025-05-31 20:58:33.748627 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2025-05-31 20:58:33.748641 | orchestrator | Saturday 31 May 2025 20:57:55 +0000 (0:00:00.134) 0:01:49.005 **********
2025-05-31 20:58:33.748650 | orchestrator | ok: [testbed-node-0]
2025-05-31 20:58:33.748660 | orchestrator | ok: [testbed-node-2]
2025-05-31 20:58:33.748669 | orchestrator | ok: [testbed-node-1]
2025-05-31 20:58:33.748679 | orchestrator |
2025-05-31 20:58:33.748688 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2025-05-31 20:58:33.748697 | orchestrator | Saturday 31 May 2025 20:57:56 +0000 (0:00:00.735) 0:01:49.741 **********
2025-05-31 20:58:33.748707 | orchestrator | skipping: [testbed-node-1]
2025-05-31 20:58:33.748716 | orchestrator | skipping: [testbed-node-2]
2025-05-31 20:58:33.748726 | orchestrator | changed: [testbed-node-0]
2025-05-31 20:58:33.748735 | orchestrator |
2025-05-31 20:58:33.748744 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2025-05-31 20:58:33.748754 | orchestrator | Saturday 31 May 2025 20:57:57 +0000 (0:00:00.830) 0:01:50.571 **********
2025-05-31 20:58:33.748763 | orchestrator | ok: [testbed-node-0]
2025-05-31 20:58:33.748772 | orchestrator | ok: [testbed-node-1]
2025-05-31 20:58:33.748782 | orchestrator | ok: [testbed-node-2]
2025-05-31 20:58:33.748791 | orchestrator |
2025-05-31 20:58:33.748801 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2025-05-31 20:58:33.748810 | orchestrator | Saturday 31 May 2025 20:57:57 +0000 (0:00:00.748) 0:01:51.320 **********
2025-05-31 20:58:33.748820 | orchestrator | skipping: [testbed-node-1]
2025-05-31 20:58:33.748829 | orchestrator | skipping: [testbed-node-2]
2025-05-31 20:58:33.748855 | orchestrator | changed: [testbed-node-0]
2025-05-31 20:58:33.748865 | orchestrator |
2025-05-31 20:58:33.748875 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2025-05-31 20:58:33.748884 | orchestrator | Saturday 31 May 2025 20:57:58 +0000 (0:00:00.642) 0:01:51.963 **********
2025-05-31 20:58:33.748894 | orchestrator | ok: [testbed-node-2]
2025-05-31 20:58:33.748903 | orchestrator | ok: [testbed-node-1]
2025-05-31 20:58:33.748918 | orchestrator | ok: [testbed-node-0]
2025-05-31 20:58:33.748928 | orchestrator |
2025-05-31 20:58:33.748937 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2025-05-31 20:58:33.748947 | orchestrator | Saturday 31 May 2025 20:57:59 +0000 (0:00:00.806) 0:01:52.770 **********
2025-05-31 20:58:33.748956 | orchestrator | ok: [testbed-node-0]
2025-05-31 20:58:33.748966 | orchestrator | ok: [testbed-node-1]
2025-05-31 20:58:33.748975 | orchestrator | ok: [testbed-node-2]
2025-05-31 20:58:33.748984 | orchestrator |
2025-05-31 20:58:33.748994 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] **************************************
2025-05-31 20:58:33.749003 | orchestrator | Saturday 31 May 2025 20:58:00 +0000 (0:00:01.243) 0:01:54.013 **********
2025-05-31 20:58:33.749013 | orchestrator | ok: [testbed-node-0]
2025-05-31 20:58:33.749022 | orchestrator | ok: [testbed-node-1]
2025-05-31 20:58:33.749032 | orchestrator | ok: [testbed-node-2]
2025-05-31 20:58:33.749041 | orchestrator |
2025-05-31 20:58:33.749050 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2025-05-31 20:58:33.749060 | orchestrator | Saturday 31 May 2025 20:58:01 +0000 (0:00:00.454) 0:01:54.467 **********
2025-05-31 20:58:33.749070 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-31 20:58:33.749080 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-31 20:58:33.749096 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-31 20:58:33.749107 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-31 20:58:33.749117 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-31 20:58:33.749132 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-31 20:58:33.749142 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-31 20:58:33.749152 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-31 20:58:33.749168 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-31 20:58:33.749178 | orchestrator |
2025-05-31 20:58:33.749187 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2025-05-31 20:58:33.749204 | orchestrator | Saturday 31 May 2025 20:58:02 +0000 (0:00:01.690) 0:01:56.158 **********
2025-05-31 20:58:33.749223 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-31 20:58:33.749240 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-31 20:58:33.749266 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-31 20:58:33.749282 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-31 20:58:33.749298 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-31 20:58:33.749314 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-31 20:58:33.749330 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-31 20:58:33.749348 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-31 20:58:33.749366 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-31 20:58:33.749382 | orchestrator |
2025-05-31 20:58:33.749400 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2025-05-31 20:58:33.749495 | orchestrator | Saturday 31 May 2025 20:58:06 +0000 (0:00:03.973) 0:02:00.131 **********
2025-05-31 20:58:33.749528 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-31 20:58:33.749539 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-31 20:58:33.749557 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-31 20:58:33.749567 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-31 20:58:33.749577 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-31 20:58:33.749588 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-31 20:58:33.749597 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-31 20:58:33.749607 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-31 20:58:33.749622 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-31 20:58:33.749632 | orchestrator |
2025-05-31 20:58:33.749642 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-05-31 20:58:33.749652 | orchestrator | Saturday 31 May 2025 20:58:09 +0000 (0:00:02.948) 0:02:03.080 **********
2025-05-31 20:58:33.749661 | orchestrator |
2025-05-31 20:58:33.749671 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-05-31 20:58:33.749680 | orchestrator | Saturday 31 May 2025 20:58:09 +0000 (0:00:00.063) 0:02:03.143 **********
2025-05-31 20:58:33.749690 | orchestrator |
2025-05-31 20:58:33.749699 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2025-05-31 20:58:33.749709 | orchestrator | Saturday 31 May 2025 20:58:09 +0000 (0:00:00.062) 0:02:03.206 **********
2025-05-31 20:58:33.749718 | orchestrator |
2025-05-31 20:58:33.749728 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2025-05-31 20:58:33.749737 | orchestrator | Saturday 31 May 2025 20:58:09 +0000 (0:00:00.061) 0:02:03.267 **********
2025-05-31 20:58:33.749747 | orchestrator | changed: [testbed-node-1]
2025-05-31 20:58:33.749765 | orchestrator | changed: [testbed-node-2]
2025-05-31 20:58:33.749790 | orchestrator |
2025-05-31 20:58:33.749814 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2025-05-31 20:58:33.749830 | orchestrator | Saturday 31 May 2025 20:58:16 +0000 (0:00:06.199) 0:02:09.467 **********
2025-05-31 20:58:33.749873 | orchestrator | changed: [testbed-node-1]
2025-05-31 20:58:33.749892 | orchestrator | changed: [testbed-node-2]
2025-05-31 20:58:33.749909 | orchestrator |
2025-05-31 20:58:33.749922 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2025-05-31 20:58:33.749932 | orchestrator | Saturday 31 May 2025 20:58:22 +0000 (0:00:06.157) 0:02:15.625 **********
2025-05-31 20:58:33.749941 | orchestrator | changed: [testbed-node-1]
2025-05-31 20:58:33.749951 | orchestrator | changed: [testbed-node-2]
2025-05-31 20:58:33.749960 | orchestrator |
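In both passes above, Configure OVN NB/SB connection settings runs only where the just-elected leader lives (testbed-node-0 here; the followers skip), publishing the TCP listeners on 6641 and 6642, the latter being the same endpoints every chassis was pointed at via ovn-remote earlier. A hedged hand-run equivalent, with the bind address an assumption since the role templates the exact value:

    # On the current NB/SB leader only:
    docker exec ovn_nb_db ovn-nbctl set-connection ptcp:6641:192.168.16.10
    docker exec ovn_sb_db ovn-sbctl set-connection ptcp:6642:192.168.16.10
    # Confirm the configured listeners:
    docker exec ovn_nb_db ovn-nbctl get-connection
    docker exec ovn_sb_db ovn-sbctl get-connection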
2025-05-31 20:58:33.749970 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2025-05-31 20:58:33.749979 | orchestrator | Saturday 31 May 2025 20:58:28 +0000 (0:00:06.102) 0:02:21.727 **********
2025-05-31 20:58:33.749989 | orchestrator | skipping: [testbed-node-0]
2025-05-31 20:58:33.749999 | orchestrator |
2025-05-31 20:58:33.750008 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2025-05-31 20:58:33.750087 | orchestrator | Saturday 31 May 2025 20:58:28 +0000 (0:00:00.141) 0:02:21.869 **********
2025-05-31 20:58:33.750098 | orchestrator | ok: [testbed-node-0]
2025-05-31 20:58:33.750108 | orchestrator | ok: [testbed-node-1]
2025-05-31 20:58:33.750117 | orchestrator | ok: [testbed-node-2]
2025-05-31 20:58:33.750127 | orchestrator |
2025-05-31 20:58:33.750136 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2025-05-31 20:58:33.750146 | orchestrator | Saturday 31 May 2025 20:58:29 +0000 (0:00:01.059) 0:02:22.929 **********
2025-05-31 20:58:33.750156 | orchestrator | skipping: [testbed-node-1]
2025-05-31 20:58:33.750165 | orchestrator | skipping: [testbed-node-2]
2025-05-31 20:58:33.750174 | orchestrator | changed: [testbed-node-0]
2025-05-31 20:58:33.750184 | orchestrator |
2025-05-31 20:58:33.750193 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2025-05-31 20:58:33.750203 | orchestrator | Saturday 31 May 2025 20:58:30 +0000 (0:00:00.584) 0:02:23.513 **********
2025-05-31 20:58:33.750212 | orchestrator | ok: [testbed-node-0]
2025-05-31 20:58:33.750222 | orchestrator | ok: [testbed-node-1]
2025-05-31 20:58:33.750231 | orchestrator | ok: [testbed-node-2]
2025-05-31 20:58:33.750240 | orchestrator |
2025-05-31 20:58:33.750250 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2025-05-31 20:58:33.750260 | orchestrator | Saturday 31 May 2025 20:58:30 +0000 (0:00:00.729) 0:02:24.242 **********
2025-05-31 20:58:33.750269 | orchestrator | skipping: [testbed-node-1]
2025-05-31 20:58:33.750279 | orchestrator | skipping: [testbed-node-2]
2025-05-31 20:58:33.750288 | orchestrator | changed: [testbed-node-0]
2025-05-31 20:58:33.750297 | orchestrator |
2025-05-31 20:58:33.750307 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2025-05-31 20:58:33.750317 | orchestrator | Saturday 31 May 2025 20:58:31 +0000 (0:00:00.561) 0:02:24.804 **********
2025-05-31 20:58:33.750326 | orchestrator | ok: [testbed-node-0]
2025-05-31 20:58:33.750335 | orchestrator | ok: [testbed-node-1]
2025-05-31 20:58:33.750345 | orchestrator | ok: [testbed-node-2]
2025-05-31 20:58:33.750355 | orchestrator |
2025-05-31 20:58:33.750364 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2025-05-31 20:58:33.750374 | orchestrator | Saturday 31 May 2025 20:58:32 +0000 (0:00:00.986) 0:02:25.790 **********
2025-05-31 20:58:33.750383 | orchestrator | ok: [testbed-node-0]
2025-05-31 20:58:33.750393 | orchestrator | ok: [testbed-node-1]
2025-05-31 20:58:33.750402 | orchestrator | ok: [testbed-node-2]
2025-05-31 20:58:33.750412 | orchestrator |
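The two Wait for ovn-nb-db / ovn-sb-db tasks gate the play on the listeners actually accepting connections after the rolling restarts. A shell equivalent of that gate (ports from the log; nc availability on the hosts is an assumption):

    # Block up to 60 seconds per port until the NB (6641) and SB (6642)
    # databases answer on the internal address:
    for port in 6641 6642; do
      timeout 60 bash -c "until nc -z 192.168.16.10 $port; do sleep 2; done"
    done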
20:58:33.750442 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-05-31 20:58:33.750461 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-05-31 20:58:33.750477 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-31 20:58:33.750488 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-31 20:58:33.750497 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-31 20:58:33.750507 | orchestrator | 2025-05-31 20:58:33.750516 | orchestrator | 2025-05-31 20:58:33.750526 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-31 20:58:33.750536 | orchestrator | Saturday 31 May 2025 20:58:33 +0000 (0:00:00.834) 0:02:26.625 ********** 2025-05-31 20:58:33.750545 | orchestrator | =============================================================================== 2025-05-31 20:58:33.750555 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 38.62s 2025-05-31 20:58:33.750564 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 18.30s 2025-05-31 20:58:33.750573 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 13.68s 2025-05-31 20:58:33.750583 | orchestrator | ovn-db : Restart ovn-sb-db container ------------------------------------ 8.92s 2025-05-31 20:58:33.750592 | orchestrator | ovn-db : Restart ovn-nb-db container ------------------------------------ 8.69s 2025-05-31 20:58:33.750602 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.97s 2025-05-31 20:58:33.750611 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.93s 2025-05-31 20:58:33.750629 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.95s 2025-05-31 20:58:33.750638 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.55s 2025-05-31 20:58:33.750648 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 2.27s 2025-05-31 20:58:33.750657 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.24s 2025-05-31 20:58:33.750667 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 2.11s 2025-05-31 20:58:33.750676 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 2.04s 2025-05-31 20:58:33.750685 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.91s 2025-05-31 20:58:33.750695 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.69s 2025-05-31 20:58:33.750704 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.53s 2025-05-31 20:58:33.750714 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.47s 2025-05-31 20:58:33.750723 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.45s 2025-05-31 20:58:33.750732 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.30s 2025-05-31 20:58:33.750742 | orchestrator | ovn-db : Wait for ovn-sb-db 
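[annotation] The "Get OVN_Northbound cluster leader" and "Get OVN_Southbound cluster leader" tasks above query the Raft status of the clustered OVSDB servers; the connection settings are then applied only on the current leader, which is why "Configure OVN NB/SB connection settings" reports changed on testbed-node-0 and skipping on the other two nodes. The same status can be inspected by hand; a sketch, assuming the ovn_nb_db container name seen in the log and OVN's default control-socket path inside the container:

```yaml
# Ad-hoc Raft inspection (sketch only; container name and socket
# path are assumptions based on kolla defaults, not taken from
# the deployed role).
- name: Get OVN_Northbound cluster leader
  command: >
    docker exec ovn_nb_db ovs-appctl -t /var/run/ovn/ovnnb_db.ctl
    cluster/status OVN_Northbound
  register: nb_cluster_status
  changed_when: false

- name: Show role and leader lines
  debug:
    msg: "{{ nb_cluster_status.stdout_lines | select('search', 'Role|Leader') | list }}"
```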
2025-05-31 20:58:33.750754 | orchestrator | 2025-05-31 20:58:33 | INFO  | Task a0bdcbf6-9613-4f1a-8dfe-6210c4f9a59a is in state STARTED
2025-05-31 20:58:33.750771 | orchestrator | 2025-05-31 20:58:33 | INFO  | Wait 1 second(s) until the next check
[repeated polling records elided: tasks fae633a2-e69d-4ae6-90b8-348d3b4747ca and a0bdcbf6-9613-4f1a-8dfe-6210c4f9a59a were re-checked roughly every 3 seconds and remained in state STARTED until 21:01:18]
2025-05-31 21:00:32.663561 | orchestrator | 2025-05-31 21:00:32 | INFO  | Task 1d8c3d1e-9209-4c30-8a9b-fc3899ee53e6 is in state STARTED
2025-05-31 21:00:47.920599 | orchestrator | 2025-05-31 21:00:47 | INFO  | Task 1d8c3d1e-9209-4c30-8a9b-fc3899ee53e6 is in state SUCCESS
2025-05-31 21:01:21.466763 | orchestrator | 2025-05-31 21:01:21 | INFO  | Task fae633a2-e69d-4ae6-90b8-348d3b4747ca is in state SUCCESS
2025-05-31 21:01:21.467900 | orchestrator |
2025-05-31 21:01:21.467961 | orchestrator | None
2025-05-31 21:01:21.469888 | orchestrator |
2025-05-31 21:01:21.469928 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-31 21:01:21.469941 | orchestrator |
2025-05-31 21:01:21.469953 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-31 21:01:21.469964 | orchestrator | Saturday 31 May 2025 20:54:57 +0000 (0:00:00.315) 0:00:00.315 **********
2025-05-31 21:01:21.469975 | orchestrator | ok: [testbed-node-0]
2025-05-31 21:01:21.469988 | orchestrator | ok: [testbed-node-1]
2025-05-31 21:01:21.469999 | orchestrator | ok: [testbed-node-2]
2025-05-31 21:01:21.470010 | orchestrator |
2025-05-31 21:01:21.470082 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-31 21:01:21.470095 | orchestrator | Saturday 31 May 2025 20:54:57 +0000 (0:00:00.461) 0:00:00.776 **********
2025-05-31 21:01:21.470107 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2025-05-31 21:01:21.470118 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2025-05-31 21:01:21.470156 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2025-05-31 21:01:21.470168 | orchestrator |
2025-05-31 21:01:21.470179 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2025-05-31 21:01:21.470189 | orchestrator |
2025-05-31 21:01:21.470200 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2025-05-31 21:01:21.470211 | orchestrator | Saturday 31 May 2025 20:54:58 +0000 (0:00:00.611) 0:00:01.388 **********
2025-05-31 21:01:21.470222 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-31 21:01:21.470233 | orchestrator |
2025-05-31 21:01:21.470244 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2025-05-31 21:01:21.470255 | orchestrator | Saturday 31 May 2025 20:54:59 +0000 (0:00:01.095) 0:00:02.484 **********
2025-05-31 21:01:21.470265 | orchestrator | ok: [testbed-node-0]
2025-05-31 21:01:21.470276 | orchestrator | ok: [testbed-node-1]
2025-05-31 21:01:21.470287 | orchestrator | ok: [testbed-node-2]
2025-05-31 21:01:21.470297 | orchestrator |
2025-05-31 21:01:21.470321 | orchestrator | TASK [Setting sysctl values] ***************************************************
2025-05-31 21:01:21.470332 | orchestrator | Saturday 31 May 2025 20:55:00 +0000 (0:00:00.939) 0:00:03.424 **********
2025-05-31 21:01:21.470343 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-31 21:01:21.470354 | orchestrator |
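[annotation] The "Group hosts based on enabled services" task above is kolla-ansible's dynamic-grouping idiom: each host adds itself to a group derived from its enabled-service flags (here enable_loadbalancer_True), and the "Apply role loadbalancer" play targets that group. Roughly (a sketch of the pattern, not the verbatim playbook):

```yaml
# Sketch of the dynamic grouping pattern visible in the log
# (item=enable_loadbalancer_True); not kolla-ansible's exact source.
- name: Group hosts based on enabled services
  group_by:
    key: "enable_loadbalancer_{{ enable_loadbalancer | bool }}"

# A later play can then target only hosts where the service is
# enabled, e.g.:
# - hosts: enable_loadbalancer_True
#   roles:
#     - loadbalancer
```

This is why hosts that do not run a service simply never appear in the corresponding plays, rather than producing per-task skips.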
2025-05-31 21:01:21.470365 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2025-05-31 21:01:21.470375 | orchestrator | Saturday 31 May 2025 20:55:01 +0000 (0:00:01.370) 0:00:04.794 ********** 2025-05-31 21:01:21.470386 | orchestrator | ok: [testbed-node-0] 2025-05-31 21:01:21.470397 | orchestrator | ok: [testbed-node-1] 2025-05-31 21:01:21.470407 | orchestrator | ok: [testbed-node-2] 2025-05-31 21:01:21.470418 | orchestrator | 2025-05-31 21:01:21.470429 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2025-05-31 21:01:21.470439 | orchestrator | Saturday 31 May 2025 20:55:02 +0000 (0:00:01.119) 0:00:05.916 ********** 2025-05-31 21:01:21.470450 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-05-31 21:01:21.470462 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-05-31 21:01:21.470474 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-05-31 21:01:21.470487 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-05-31 21:01:21.470500 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-05-31 21:01:21.470512 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-05-31 21:01:21.470525 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-05-31 21:01:21.470538 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-05-31 21:01:21.470551 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-05-31 21:01:21.470564 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-05-31 21:01:21.470577 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-05-31 21:01:21.470589 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-05-31 21:01:21.470602 | orchestrator | 2025-05-31 21:01:21.470629 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-05-31 21:01:21.470935 | orchestrator | Saturday 31 May 2025 20:55:05 +0000 (0:00:02.679) 0:00:08.595 ********** 2025-05-31 21:01:21.470961 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-05-31 21:01:21.470985 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-05-31 21:01:21.471007 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-05-31 21:01:21.471018 | orchestrator | 2025-05-31 21:01:21.471029 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-05-31 21:01:21.471040 | orchestrator | Saturday 31 May 2025 20:55:06 +0000 (0:00:01.030) 0:00:09.626 ********** 2025-05-31 21:01:21.471050 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-05-31 21:01:21.471061 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-05-31 21:01:21.471072 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-05-31 21:01:21.471082 | orchestrator | 2025-05-31 21:01:21.471093 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 
2025-05-31 21:01:21.471104 | orchestrator | Saturday 31 May 2025 20:55:08 +0000 (0:00:01.583) 0:00:11.209 ********** 2025-05-31 21:01:21.471115 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2025-05-31 21:01:21.471126 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:01:21.471180 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2025-05-31 21:01:21.471195 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:01:21.471219 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2025-05-31 21:01:21.471231 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:01:21.471242 | orchestrator | 2025-05-31 21:01:21.471253 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2025-05-31 21:01:21.471263 | orchestrator | Saturday 31 May 2025 20:55:09 +0000 (0:00:00.859) 0:00:12.069 ********** 2025-05-31 21:01:21.471278 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-05-31 21:01:21.471296 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-05-31 21:01:21.471309 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-05-31 21:01:21.471321 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-31 
21:01:21.471347 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-31 21:01:21.471366 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-31 21:01:21.471379 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-31 21:01:21.471391 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-31 21:01:21.471402 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-31 21:01:21.471413 | orchestrator | 2025-05-31 21:01:21.471424 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2025-05-31 21:01:21.471497 | orchestrator | Saturday 31 May 2025 20:55:11 +0000 (0:00:02.482) 0:00:14.552 ********** 2025-05-31 21:01:21.471509 | orchestrator | changed: [testbed-node-1] 2025-05-31 21:01:21.471520 | orchestrator | changed: [testbed-node-0] 2025-05-31 21:01:21.471531 | orchestrator | changed: [testbed-node-2] 2025-05-31 21:01:21.471542 | orchestrator | 2025-05-31 21:01:21.471553 | orchestrator | TASK [loadbalancer : 
Ensuring proxysql service config subdirectories exist] **** 2025-05-31 21:01:21.471563 | orchestrator | Saturday 31 May 2025 20:55:13 +0000 (0:00:02.101) 0:00:16.653 ********** 2025-05-31 21:01:21.471574 | orchestrator | changed: [testbed-node-0] => (item=users) 2025-05-31 21:01:21.471585 | orchestrator | changed: [testbed-node-2] => (item=users) 2025-05-31 21:01:21.471609 | orchestrator | changed: [testbed-node-1] => (item=users) 2025-05-31 21:01:21.471620 | orchestrator | changed: [testbed-node-0] => (item=rules) 2025-05-31 21:01:21.471645 | orchestrator | changed: [testbed-node-2] => (item=rules) 2025-05-31 21:01:21.471663 | orchestrator | changed: [testbed-node-1] => (item=rules) 2025-05-31 21:01:21.471674 | orchestrator | 2025-05-31 21:01:21.471685 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2025-05-31 21:01:21.471696 | orchestrator | Saturday 31 May 2025 20:55:15 +0000 (0:00:01.973) 0:00:18.627 ********** 2025-05-31 21:01:21.471836 | orchestrator | changed: [testbed-node-0] 2025-05-31 21:01:21.471849 | orchestrator | changed: [testbed-node-1] 2025-05-31 21:01:21.471936 | orchestrator | changed: [testbed-node-2] 2025-05-31 21:01:21.471949 | orchestrator | 2025-05-31 21:01:21.471989 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2025-05-31 21:01:21.472001 | orchestrator | Saturday 31 May 2025 20:55:17 +0000 (0:00:02.267) 0:00:20.896 ********** 2025-05-31 21:01:21.472012 | orchestrator | ok: [testbed-node-0] 2025-05-31 21:01:21.472022 | orchestrator | ok: [testbed-node-2] 2025-05-31 21:01:21.472033 | orchestrator | ok: [testbed-node-1] 2025-05-31 21:01:21.472055 | orchestrator | 2025-05-31 21:01:21.472099 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2025-05-31 21:01:21.472121 | orchestrator | Saturday 31 May 2025 20:55:19 +0000 (0:00:01.465) 0:00:22.361 ********** 2025-05-31 21:01:21.472150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-05-31 21:01:21.472224 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-31 21:01:21.472237 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 
'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-31 21:01:21.472250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__c8149032ae03954cadf8873fde2f384c56425782', '__omit_place_holder__c8149032ae03954cadf8873fde2f384c56425782'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-31 21:01:21.472261 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:01:21.472273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-05-31 21:01:21.472295 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-31 21:01:21.472306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-31 21:01:21.472322 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__c8149032ae03954cadf8873fde2f384c56425782', '__omit_place_holder__c8149032ae03954cadf8873fde2f384c56425782'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-31 21:01:21.472334 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:01:21.472354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-05-31 21:01:21.472365 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-31 21:01:21.472375 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-31 21:01:21.472392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__c8149032ae03954cadf8873fde2f384c56425782', '__omit_place_holder__c8149032ae03954cadf8873fde2f384c56425782'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-31 21:01:21.472402 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:01:21.472412 | orchestrator | 2025-05-31 21:01:21.472421 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2025-05-31 21:01:21.472431 | orchestrator | Saturday 31 May 2025 20:55:20 +0000 (0:00:00.798) 0:00:23.160 ********** 2025-05-31 21:01:21.472441 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-05-31 21:01:21.472456 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-05-31 21:01:21.472474 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-05-31 21:01:21.472485 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-31 21:01:21.472495 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-31 21:01:21.472511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__c8149032ae03954cadf8873fde2f384c56425782', '__omit_place_holder__c8149032ae03954cadf8873fde2f384c56425782'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-31 
21:01:21.472522 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-31 21:01:21.472536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-31 21:01:21.472546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__c8149032ae03954cadf8873fde2f384c56425782', '__omit_place_holder__c8149032ae03954cadf8873fde2f384c56425782'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-31 21:01:21.472563 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-31 21:01:21.472573 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-31 21:01:21.472593 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__c8149032ae03954cadf8873fde2f384c56425782', '__omit_place_holder__c8149032ae03954cadf8873fde2f384c56425782'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-31 21:01:21.472603 | orchestrator | 2025-05-31 21:01:21.472613 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2025-05-31 21:01:21.472623 | orchestrator | Saturday 31 May 2025 20:55:25 +0000 (0:00:05.395) 0:00:28.555 ********** 2025-05-31 21:01:21.472633 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-05-31 21:01:21.472643 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-05-31 21:01:21.472663 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-05-31 21:01:21.472681 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-31 21:01:21.472692 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-31 21:01:21.472707 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-31 21:01:21.472917 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-31 21:01:21.472933 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-31 21:01:21.472955 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-31 21:01:21.472965 | orchestrator |
2025-05-31 21:01:21.472987 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] *********************************
2025-05-31 21:01:21.473003 | orchestrator | Saturday 31 May 2025 20:55:29 +0000 (0:00:03.741) 0:00:32.297 **********
2025-05-31 21:01:21.473013 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2025-05-31 21:01:21.473023 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2025-05-31 21:01:21.473033 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2025-05-31 21:01:21.473042 | orchestrator |
2025-05-31 21:01:21.473052 | orchestrator | TASK [loadbalancer : Copying over proxysql config] *****************************
2025-05-31 21:01:21.473062 | orchestrator | Saturday 31 May 2025 20:55:31 +0000 (0:00:02.562) 0:00:34.859 **********
2025-05-31 21:01:21.473071 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2025-05-31 21:01:21.473081 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2025-05-31 21:01:21.473090 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2025-05-31 21:01:21.473100 | orchestrator |
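The config.json items above also carry each container's Docker healthcheck: haproxy is probed over HTTP (healthcheck_curl against the node's monitor endpoint on port 61313), while proxysql and the earlier sshd use a listen check on a TCP port, each with interval 30s, 3 retries, and a 30s timeout. A minimal Python sketch of what such probes amount to; the exact behaviour of the healthcheck_curl / healthcheck_listen helpers shipped inside the kolla images is an assumption here, not a transcript:

```python
# Rough equivalents of the two healthcheck styles visible in the loop items
# above. The real helpers live inside the kolla images; their semantics are
# assumed (e.g. healthcheck_listen also takes a process name, omitted here).
import socket
import urllib.request

def healthcheck_listen(port: int, timeout: float = 30.0) -> bool:
    """Roughly: is something accepting TCP connections on this port?"""
    try:
        with socket.create_connection(("127.0.0.1", port), timeout=timeout):
            return True
    except OSError:
        return False

def healthcheck_curl(url: str, timeout: float = 30.0) -> bool:
    """Roughly: does the endpoint answer HTTP without an error status?"""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except OSError:  # URLError/HTTPError both derive from OSError
        return False

# Mirrors the items logged above (interval=30, retries=3, timeout=30):
print(healthcheck_listen(6032))                        # proxysql admin port
print(healthcheck_curl("http://192.168.16.10:61313"))  # haproxy on node-0
```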
2025-05-31 21:01:21.474926 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] *****
2025-05-31 21:01:21.474962 | orchestrator | Saturday 31 May 2025 20:55:36 +0000 (0:00:04.466) 0:00:39.326 **********
2025-05-31 21:01:21.474984 | orchestrator | skipping: [testbed-node-0]
2025-05-31 21:01:21.474994 | orchestrator | skipping: [testbed-node-1]
2025-05-31 21:01:21.475004 | orchestrator | skipping: [testbed-node-2]
2025-05-31 21:01:21.475013 | orchestrator |
2025-05-31 21:01:21.475023 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] *******
2025-05-31 21:01:21.475033 | orchestrator | Saturday 31 May 2025 20:55:37 +0000 (0:00:00.694) 0:00:40.020 **********
2025-05-31 21:01:21.475061 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2025-05-31 21:01:21.475074 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2025-05-31 21:01:21.475084 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2025-05-31 21:01:21.475093 | orchestrator |
2025-05-31 21:01:21.475150 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] *****************************
2025-05-31 21:01:21.475161 | orchestrator | Saturday 31 May 2025 20:55:39 +0000 (0:00:02.413) 0:00:42.433 **********
2025-05-31 21:01:21.475171 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2025-05-31 21:01:21.475180 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2025-05-31 21:01:21.475347 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2025-05-31 21:01:21.475361 | orchestrator |
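The keepalived.conf templated above is what turns the three loadbalancer nodes into one highly available endpoint: VRRP keeps a single virtual IP on whichever node is currently MASTER, and haproxy binds its frontends there. A small sketch for checking which node holds the VIP; the VIP address itself (kolla_internal_vip_address) is not visible in this excerpt, so the value below is an assumption, and the check is Linux/iproute2-specific:

```python
# Minimal sketch: the VRRP MASTER is the node that has the VIP configured
# locally. ASSUMED_VIP is a placeholder, not taken from this log.
import subprocess

ASSUMED_VIP = "192.168.16.254"  # assumption; set to kolla_internal_vip_address

def local_ipv4_addresses() -> list[str]:
    """All IPv4 addresses currently configured on this host."""
    out = subprocess.run(["ip", "-o", "-4", "addr", "show"],
                         capture_output=True, text=True, check=True).stdout
    return [line.split()[3].split("/")[0]
            for line in out.splitlines() if len(line.split()) > 3]

print("this node is VRRP MASTER:", ASSUMED_VIP in local_ipv4_addresses())
```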
2025-05-31 21:01:21.475370 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] *********************************
2025-05-31 21:01:21.475380 | orchestrator | Saturday 31 May 2025 20:55:42 +0000 (0:00:03.063) 0:00:45.496 **********
2025-05-31 21:01:21.475390 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem)
2025-05-31 21:01:21.475400 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem)
2025-05-31 21:01:21.475410 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem)
2025-05-31 21:01:21.475420 | orchestrator |
2025-05-31 21:01:21.475429 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************
2025-05-31 21:01:21.475438 | orchestrator | Saturday 31 May 2025 20:55:44 +0000 (0:00:01.828) 0:00:47.325 **********
2025-05-31 21:01:21.475448 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem)
2025-05-31 21:01:21.475457 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem)
2025-05-31 21:01:21.475467 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem)
2025-05-31 21:01:21.475480 | orchestrator |
2025-05-31 21:01:21.475491 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2025-05-31 21:01:21.475502 | orchestrator | Saturday 31 May 2025 20:55:45 +0000 (0:00:01.614) 0:00:48.939 **********
2025-05-31 21:01:21.475513 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-31 21:01:21.475524 | orchestrator |
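haproxy.pem and haproxy-internal.pem, copied just above, are in the usual kolla-ansible layout single PEM bundles (certificate chain plus private key in one file) for the external and internal VIP frontends. A quick inspection sketch using the third-party cryptography package; the host-side path is an assumption, and it assumes the bundle starts with the certificate, as kolla normally writes it:

```python
# Sanity-check a haproxy PEM bundle: print the leaf certificate's subject and
# expiry and confirm a private key is present. Path is an assumed location.
from cryptography import x509

BUNDLE = "/etc/kolla/haproxy/haproxy.pem"  # assumed path, adjust as needed

with open(BUNDLE, "rb") as f:
    pem = f.read()

cert = x509.load_pem_x509_certificate(pem)  # parses the first PEM certificate
print("subject:    ", cert.subject.rfc4514_string())
print("expires:    ", cert.not_valid_after)
print("key included:", b"PRIVATE KEY-----" in pem)
```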
2025-05-31 21:01:21.475535 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] ***
2025-05-31 21:01:21.475546 | orchestrator | Saturday 31 May 2025 20:55:46 +0000 (0:00:01.035) 0:00:49.975 **********
2025-05-31 21:01:21.475559 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-05-31 21:01:21.475572 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-05-31 21:01:21.475601 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-05-31 21:01:21.475614 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-31 21:01:21.475626 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-31 21:01:21.475637 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-31 21:01:21.475719 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-31 21:01:21.475799 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-31 21:01:21.475849 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-31 21:01:21.475924 | orchestrator |
2025-05-31 21:01:21.475937 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] ***
2025-05-31 21:01:21.475968 | orchestrator | Saturday 31 May 2025 20:55:50 +0000 (0:00:03.421) 0:00:53.396 **********
2025-05-31 21:01:21.476000 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes':
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-05-31 21:01:21.476011 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-31 21:01:21.476021 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-31 21:01:21.476032 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:01:21.476042 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-05-31 21:01:21.476053 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-31 21:01:21.476075 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}}})  2025-05-31 21:01:21.476227 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:01:21.476251 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-05-31 21:01:21.476303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-31 21:01:21.476321 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-31 21:01:21.476337 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:01:21.476356 | orchestrator | 2025-05-31 21:01:21.476372 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2025-05-31 21:01:21.476388 | orchestrator | Saturday 31 May 2025 20:55:51 +0000 (0:00:00.685) 0:00:54.082 ********** 2025-05-31 21:01:21.476425 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-05-31 21:01:21.476438 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-31 21:01:21.476468 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-31 21:01:21.476479 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:01:21.476529 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-05-31 21:01:21.476601 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-31 21:01:21.476643 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-31 21:01:21.476653 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:01:21.476693 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-05-31 21:01:21.476705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-31 21:01:21.476716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-31 21:01:21.476730 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:01:21.476740 | orchestrator | 2025-05-31 21:01:21.476750 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-05-31 21:01:21.476759 | orchestrator | Saturday 31 May 2025 20:55:52 +0000 (0:00:01.191) 0:00:55.273 ********** 2025-05-31 21:01:21.476774 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-05-31 21:01:21.476816 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-31 21:01:21.476827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-31 21:01:21.476837 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:01:21.476847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-05-31 21:01:21.477012 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-31 21:01:21.477035 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-05-31 21:01:21.477056 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-31 21:01:21.477104 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-31 21:01:21.477128 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:01:21.477145 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-31 21:01:21.477156 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:01:21.477165 | orchestrator | 2025-05-31 21:01:21.477176 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-05-31 21:01:21.477185 | orchestrator | Saturday 31 May 2025 20:55:53 +0000 (0:00:00.731) 0:00:56.004 ********** 2025-05-31 21:01:21.477195 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-05-31 21:01:21.477206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-31 21:01:21.477222 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-31 21:01:21.477232 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:01:21.477242 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-05-31 21:01:21.477256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-31 21:01:21.477267 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-05-31 21:01:21.477312 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-31 21:01:21.477334 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-31 21:01:21.477344 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:01:21.477354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-31 21:01:21.477371 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:01:21.477378 | orchestrator | 2025-05-31 21:01:21.477386 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-05-31 21:01:21.477394 | orchestrator | Saturday 31 May 2025 20:55:53 +0000 (0:00:00.636) 0:00:56.641 ********** 2025-05-31 21:01:21.477422 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-05-31 21:01:21.477432 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-31 21:01:21.477448 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-31 21:01:21.477456 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:01:21.477479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-05-31 21:01:21.477488 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-31 21:01:21.477496 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-31 21:01:21.477510 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:01:21.477518 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-05-31 21:01:21.477526 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-31 21:01:21.477535 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-31 21:01:21.477543 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:01:21.477551 | orchestrator | 2025-05-31 21:01:21.477562 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2025-05-31 21:01:21.477570 | orchestrator | Saturday 31 May 2025 20:55:55 +0000 (0:00:02.320) 0:00:58.961 ********** 2025-05-31 21:01:21.477579 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-05-31 21:01:21.477592 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-31 21:01:21.477601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': 
{'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-31 21:01:21.477614 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:01:21.477622 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-05-31 21:01:21.477630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-31 21:01:21.477639 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-31 21:01:21.477647 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:01:21.477658 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-05-31 21:01:21.477671 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-31 21:01:21.477680 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-31 21:01:21.477693 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:01:21.477701 | orchestrator | 2025-05-31 21:01:21.477709 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2025-05-31 21:01:21.477717 | orchestrator | Saturday 31 May 2025 20:55:56 +0000 (0:00:00.870) 0:00:59.832 ********** 2025-05-31 21:01:21.477725 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-05-31 21:01:21.477751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-31 21:01:21.477761 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-31 21:01:21.477769 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:01:21.477777 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-05-31 21:01:21.477789 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-31 21:01:21.477804 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-31 21:01:21.477882 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:01:21.477892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-05-31 21:01:21.477901 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-31 21:01:21.478118 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-31 21:01:21.478132 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:01:21.478140 | orchestrator | 2025-05-31 21:01:21.478148 | orchestrator 
| TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2025-05-31 21:01:21.478157 | orchestrator | Saturday 31 May 2025 20:55:57 +0000 (0:00:00.478) 0:01:00.310 ********** 2025-05-31 21:01:21.478165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-05-31 21:01:21.478178 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-31 21:01:21.478187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-31 21:01:21.478202 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:01:21.478217 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-05-31 21:01:21.478226 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-31 21:01:21.478234 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-31 21:01:21.478242 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:01:21.478250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-05-31 21:01:21.478258 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-31 21:01:21.478270 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-31 21:01:21.478279 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:01:21.478292 | orchestrator | 2025-05-31 21:01:21.478300 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2025-05-31 21:01:21.478308 | orchestrator | Saturday 31 May 2025 20:55:58 +0000 (0:00:01.040) 0:01:01.351 ********** 2025-05-31 21:01:21.478315 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-05-31 21:01:21.478324 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-05-31 21:01:21.478336 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-05-31 21:01:21.478344 | orchestrator | 2025-05-31 21:01:21.478352 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2025-05-31 21:01:21.478360 | orchestrator | Saturday 31 May 2025 20:55:59 +0000 (0:00:01.404) 0:01:02.756 ********** 2025-05-31 21:01:21.478368 | orchestrator 
| changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-05-31 21:01:21.478376 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-05-31 21:01:21.478384 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-05-31 21:01:21.478392 | orchestrator | 2025-05-31 21:01:21.478399 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2025-05-31 21:01:21.478436 | orchestrator | Saturday 31 May 2025 20:56:01 +0000 (0:00:01.500) 0:01:04.257 ********** 2025-05-31 21:01:21.478444 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-05-31 21:01:21.478452 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-05-31 21:01:21.478460 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-05-31 21:01:21.478468 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-05-31 21:01:21.478476 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:01:21.478483 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-05-31 21:01:21.478491 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:01:21.478509 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-05-31 21:01:21.478517 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:01:21.478525 | orchestrator | 2025-05-31 21:01:21.478533 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2025-05-31 21:01:21.478541 | orchestrator | Saturday 31 May 2025 20:56:02 +0000 (0:00:01.039) 0:01:05.297 ********** 2025-05-31 21:01:21.478549 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-05-31 21:01:21.478558 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-05-31 21:01:21.478576 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-05-31 21:01:21.478590 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-31 21:01:21.478599 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-31 21:01:21.478607 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-31 21:01:21.478616 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-31 21:01:21.478624 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': 
{}}}) 2025-05-31 21:01:21.478632 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-31 21:01:21.478702 | orchestrator | 2025-05-31 21:01:21.478746 | orchestrator | TASK [include_role : aodh] ***************************************************** 2025-05-31 21:01:21.478764 | orchestrator | Saturday 31 May 2025 20:56:04 +0000 (0:00:02.684) 0:01:07.981 ********** 2025-05-31 21:01:21.478781 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 21:01:21.478797 | orchestrator | 2025-05-31 21:01:21.478806 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2025-05-31 21:01:21.478912 | orchestrator | Saturday 31 May 2025 20:56:05 +0000 (0:00:00.798) 0:01:08.779 ********** 2025-05-31 21:01:21.478926 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-05-31 21:01:21.478942 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-31 21:01:21.478975 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.478984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 
'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.479001 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-05-31 21:01:21.479017 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-31 21:01:21.479029 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.479044 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.479053 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-05-31 21:01:21.479061 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-31 21:01:21.479070 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.479083 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.479091 | orchestrator | 2025-05-31 21:01:21.479099 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2025-05-31 21:01:21.479107 | orchestrator | Saturday 31 May 2025 20:56:10 +0000 (0:00:04.261) 0:01:13.040 ********** 2025-05-31 21:01:21.479143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-05-31 21:01:21.479158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-31 21:01:21.479167 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.479185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.479218 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:01:21.479226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-05-31 21:01:21.479241 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 
'listen_port': '8042'}}}})  2025-05-31 21:01:21.479260 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-31 21:01:21.479280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-31 21:01:21.479287 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.479328 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.479336 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.479356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.479364 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:01:21.479371 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:01:21.479377 | orchestrator | 2025-05-31 21:01:21.479394 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2025-05-31 21:01:21.479401 | orchestrator | Saturday 31 May 2025 20:56:11 +0000 (0:00:01.089) 0:01:14.129 ********** 2025-05-31 21:01:21.479409 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-05-31 21:01:21.479417 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-05-31 21:01:21.479424 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:01:21.479435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-05-31 21:01:21.479442 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-05-31 21:01:21.479449 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:01:21.479455 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-05-31 21:01:21.479462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-05-31 21:01:21.479469 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:01:21.479476 | orchestrator | 2025-05-31 21:01:21.479487 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2025-05-31 21:01:21.479494 | orchestrator | Saturday 31 May 2025 20:56:12 +0000 (0:00:01.764) 0:01:15.894 ********** 2025-05-31 21:01:21.479501 | orchestrator | changed: [testbed-node-0] 2025-05-31 21:01:21.479507 | orchestrator | changed: [testbed-node-1] 2025-05-31 21:01:21.479514 | orchestrator | changed: [testbed-node-2] 2025-05-31 21:01:21.479520 | orchestrator | 2025-05-31 21:01:21.479527 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2025-05-31 21:01:21.479533 | orchestrator | Saturday 31 May 2025 20:56:14 +0000 (0:00:01.670) 0:01:17.564 ********** 2025-05-31 21:01:21.479540 | orchestrator | changed: [testbed-node-1] 2025-05-31 21:01:21.479547 | orchestrator | changed: [testbed-node-0] 2025-05-31 21:01:21.479553 | orchestrator | changed: [testbed-node-2] 2025-05-31 21:01:21.479560 | orchestrator | 2025-05-31 21:01:21.479569 | orchestrator | TASK [include_role : barbican] ************************************************* 2025-05-31 21:01:21.479580 | orchestrator | Saturday 31 May 2025 20:56:16 +0000 (0:00:01.980) 0:01:19.544 ********** 2025-05-31 21:01:21.479598 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 
21:01:21.479609 | orchestrator | 2025-05-31 21:01:21.479618 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2025-05-31 21:01:21.479628 | orchestrator | Saturday 31 May 2025 20:56:17 +0000 (0:00:00.668) 0:01:20.213 ********** 2025-05-31 21:01:21.479639 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-31 21:01:21.479651 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.479663 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.479675 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-31 21:01:21.479693 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.479757 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.479773 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-31 21:01:21.479780 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.479790 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.479798 | orchestrator 
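[Annotation] The 'haproxy' sub-dict inside each changed item above is the data the haproxy-config role feeds into its Jinja2 templates. The following Python sketch is illustrative only, showing roughly how one such entry (barbican_api, port 9311) could map onto an HAProxy listen section; the rendering itself, the member names, and the VIP address are assumptions, while the mode/port values and node IPs are taken from the log records above.

# Illustrative sketch only; kolla-ansible renders Jinja2 templates instead.
entry = {  # copied from the barbican_api 'haproxy' entry in the log above
    "enabled": "yes", "mode": "http", "external": False,
    "port": "9311", "listen_port": "9311", "tls_backend": "no",
}
members = ["192.168.16.10", "192.168.16.11", "192.168.16.12"]  # node IPs from the log
vip = "192.168.16.254"  # placeholder; the real internal VIP is not shown in this section

def render_listen(name, e, vip, members):
    lines = [
        f"listen {name}",
        f"    mode {e['mode']}",
        f"    bind {vip}:{e['listen_port']}",
    ]
    for i, ip in enumerate(members):
        # 'check' turns on active health checking, analogous to the
        # container healthcheck entries recorded in the log items.
        lines.append(f"    server testbed-node-{i} {ip}:{e['port']} check")
    return "\n".join(lines)

print(render_listen("barbican_api", entry, vip, members))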
| 2025-05-31 21:01:21.479804 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2025-05-31 21:01:21.479811 | orchestrator | Saturday 31 May 2025 20:56:21 +0000 (0:00:04.286) 0:01:24.500 ********** 2025-05-31 21:01:21.479824 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-31 21:01:21.479836 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.479844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.479851 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:01:21.479878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 
'no'}}}})  2025-05-31 21:01:21.479885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.479900 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-31 21:01:21.479913 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.479919 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:01:21.479927 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.479934 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.479941 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:01:21.479948 | orchestrator | 2025-05-31 21:01:21.479957 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2025-05-31 21:01:21.479968 | orchestrator | Saturday 31 May 2025 20:56:22 +0000 (0:00:00.695) 0:01:25.196 ********** 2025-05-31 21:01:21.479979 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-31 21:01:21.479998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-31 21:01:21.480010 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:01:21.480020 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-31 21:01:21.480031 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-31 21:01:21.480043 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:01:21.480061 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-31 21:01:21.480073 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-31 21:01:21.480081 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:01:21.480088 | orchestrator | 2025-05-31 21:01:21.480095 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2025-05-31 21:01:21.480108 | orchestrator | Saturday 31 May 2025 20:56:23 +0000 (0:00:01.601) 0:01:26.797 ********** 2025-05-31 21:01:21.480115 | orchestrator | changed: [testbed-node-0] 2025-05-31 21:01:21.480121 | orchestrator | changed: [testbed-node-1] 2025-05-31 21:01:21.480128 | orchestrator | changed: [testbed-node-2] 2025-05-31 21:01:21.480134 | orchestrator | 2025-05-31 21:01:21.480141 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2025-05-31 21:01:21.480148 | orchestrator | Saturday 31 May 2025 20:56:25 +0000 (0:00:02.033) 0:01:28.831 ********** 2025-05-31 21:01:21.480154 | orchestrator | changed: [testbed-node-1] 2025-05-31 21:01:21.480161 | orchestrator | changed: [testbed-node-0] 2025-05-31 21:01:21.480167 | orchestrator | changed: [testbed-node-2] 2025-05-31 21:01:21.480174 | orchestrator | 2025-05-31 21:01:21.480185 | orchestrator | TASK [include_role : blazar] *************************************************** 2025-05-31 21:01:21.480192 | orchestrator | Saturday 31 May 2025 20:56:27 +0000 (0:00:01.849) 0:01:30.681 ********** 2025-05-31 21:01:21.480199 | orchestrator | skipping: [testbed-node-0] 
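[Annotation] The include_role gating here follows one pattern for every service in this run: roles whose enable flag is off (blazar in this deployment) are skipped on all three nodes before any of their haproxy-config or proxysql-config tasks can run, while enabled services each contribute their own ProxySQL users and rules fragments, as the changed barbican tasks above show (ceph-rgw, which has no database user, skips the ProxySQL tasks even though the role itself is included). A compact sketch of that gating; the enable_<service> flag names follow kolla-ansible's convention, and the truth values are read off this log, not the testbed configuration.

# Sketch of the enable-flag gating visible in this section of the log.
enable = {
    "aodh": True,      # included; ProxySQL users/rules copied on all nodes
    "barbican": True,  # included; ProxySQL users/rules copied on all nodes
    "blazar": False,   # skipped on all nodes
    "ceph_rgw": True,  # included below, but its ProxySQL tasks skip (no DB user)
    "cinder": True,    # included below
}

for role, on in enable.items():
    print(f"included: {role}" if on else f"skipping: {role}")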
2025-05-31 21:01:21.480205 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:01:21.480212 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:01:21.480230 | orchestrator | 2025-05-31 21:01:21.480237 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2025-05-31 21:01:21.480243 | orchestrator | Saturday 31 May 2025 20:56:27 +0000 (0:00:00.287) 0:01:30.968 ********** 2025-05-31 21:01:21.480250 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 21:01:21.480257 | orchestrator | 2025-05-31 21:01:21.480263 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2025-05-31 21:01:21.480270 | orchestrator | Saturday 31 May 2025 20:56:28 +0000 (0:00:00.731) 0:01:31.699 ********** 2025-05-31 21:01:21.480277 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-05-31 21:01:21.480286 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-05-31 21:01:21.480294 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-05-31 21:01:21.480306 | orchestrator | 2025-05-31 21:01:21.480316 | orchestrator | TASK [haproxy-config : Add 
configuration for ceph-rgw when using single external frontend] *** 2025-05-31 21:01:21.480322 | orchestrator | Saturday 31 May 2025 20:56:32 +0000 (0:00:04.232) 0:01:35.932 ********** 2025-05-31 21:01:21.480334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-05-31 21:01:21.480341 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:01:21.480348 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-05-31 21:01:21.480355 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:01:21.480362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-05-31 21:01:21.480369 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:01:21.480377 | orchestrator | 2025-05-31 21:01:21.480388 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-05-31 21:01:21.480399 | orchestrator | Saturday 31 May 2025 20:56:34 +0000 (0:00:01.675) 0:01:37.607 ********** 2025-05-31 21:01:21.480412 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 
2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-31 21:01:21.480431 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-31 21:01:21.480442 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:01:21.480458 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-31 21:01:21.480470 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-31 21:01:21.480481 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:01:21.480498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-31 21:01:21.480510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-05-31 21:01:21.480522 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:01:21.480532 | orchestrator | 2025-05-31 21:01:21.480543 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2025-05-31 21:01:21.480554 | orchestrator | Saturday 31 May 2025 20:56:36 +0000 (0:00:01.940) 0:01:39.548 ********** 2025-05-31 21:01:21.480565 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:01:21.480575 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:01:21.480586 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:01:21.480596 | orchestrator | 2025-05-31 21:01:21.480607 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2025-05-31 21:01:21.480618 | orchestrator | Saturday 31 May 2025 20:56:37 +0000 (0:00:00.672) 0:01:40.220 ********** 2025-05-31 21:01:21.480629 | orchestrator | skipping: [testbed-node-0] 2025-05-31 
2025-05-31 21:01:21.480660 | orchestrator |
2025-05-31 21:01:21.480670 | orchestrator | TASK [include_role : cinder] ***************************************************
2025-05-31 21:01:21.480681 | orchestrator | Saturday 31 May 2025 20:56:38 +0000 (0:00:00.824) 0:01:41.045 **********
2025-05-31 21:01:21.480692 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-31 21:01:21.480703 | orchestrator |
2025-05-31 21:01:21.480714 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] *********************
2025-05-31 21:01:21.480724 | orchestrator | Saturday 31 May 2025 20:56:38 +0000 (0:00:00.724) 0:01:41.769 **********
2025-05-31 21:01:21.480735 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-31 21:01:21.480754 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-31 21:01:21.480771 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-31 21:01:21.480790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-31 21:01:21.480803 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-31 21:01:21.480815 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-31 21:01:21.480832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-31 21:01:21.480854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-31 21:01:21.480918 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-31 21:01:21.480926 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-31 21:01:21.480933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-31 21:01:21.480983 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
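Note the split in the cinder results above: only cinder-api is reported as changed, while cinder-scheduler, cinder-volume and cinder-backup are skipped. HAProxy config is written only for services whose definition carries a haproxy mapping; the worker services talk to RabbitMQ instead (hence their healthcheck_port ... 5672 checks) and are never load-balanced. A rough sketch of that per-item decision, assuming the role simply loops over the service dict shown in the log:

# Rough sketch of the per-item decision visible above: an item is
# rendered ("changed") only when its value defines a 'haproxy' mapping.
services = {
    "cinder-api":       {"enabled": True, "haproxy": {"cinder_api": {"port": "8776"}}},
    "cinder-scheduler": {"enabled": True},  # RabbitMQ worker, no haproxy key
    "cinder-volume":    {"enabled": True},
    "cinder-backup":    {"enabled": True},
}
for name, svc in services.items():
    print(("changed" if svc.get("haproxy") else "skipping") + f": {name}")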
2025-05-31 21:01:21.480997 | orchestrator |
2025-05-31 21:01:21.481008 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] ***
2025-05-31 21:01:21.481020 | orchestrator | Saturday 31 May 2025 20:56:44 +0000 (0:00:05.629) 0:01:47.398 **********
2025-05-31 21:01:21.481036 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-31 21:01:21.481043 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-31 21:01:21.481054 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-31 21:01:21.481061 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-31 21:01:21.481072 | orchestrator | skipping: [testbed-node-0]
2025-05-31 21:01:21.481079 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-31 21:01:21.481085 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-31 21:01:21.481095 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-31 21:01:21.481106 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-31 21:01:21.481112 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-31 21:01:21.481123 | orchestrator | skipping: [testbed-node-2]
2025-05-31 21:01:21.481129 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-31 21:01:21.481135 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-31 21:01:21.481168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-31 21:01:21.481176 | orchestrator | skipping: [testbed-node-1]
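The single-external-frontend task is skipped for every item on every node, here and for the other services in this run: the deployment keeps one external listener per service port on api.testbed.osism.xyz rather than folding everything into one shared frontend. A conceptual sketch of the two modes follows; the toggle name haproxy_single_external_frontend and the ACL form are assumptions based on kolla-ansible's documented behaviour, not taken from this log.

# Conceptual sketch only: contrast per-service external listeners with a
# single shared external frontend. Variable name and ACL form are assumed.
haproxy_single_external_frontend = False  # matches the skips seen above

def external_config(service, fqdn, port):
    if not haproxy_single_external_frontend:
        # what this deployment does: one listener per service port
        return f"listen {service}_external\n    bind {fqdn}:{port}"
    # shared-frontend mode (simplified): route by Host header instead
    return (f"    acl is_{service} hdr(host) -i {fqdn}\n"
            f"    use_backend {service}_back if is_{service}")

print(external_config("cinder_api", "api.testbed.osism.xyz", "8776"))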
2025-05-31 21:01:21.481182 | orchestrator |
2025-05-31 21:01:21.481188 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************
2025-05-31 21:01:21.481195 | orchestrator | Saturday 31 May 2025 20:56:45 +0000 (0:00:01.269) 0:01:48.668 **********
2025-05-31 21:01:21.481206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-05-31 21:01:21.481222 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-05-31 21:01:21.481233 | orchestrator | skipping: [testbed-node-0]
2025-05-31 21:01:21.481242 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-05-31 21:01:21.481253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-05-31 21:01:21.481271 | orchestrator | skipping: [testbed-node-1]
2025-05-31 21:01:21.481282 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-05-31 21:01:21.481293 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-05-31 21:01:21.481304 | orchestrator | skipping: [testbed-node-2]
2025-05-31 21:01:21.481314 | orchestrator |
2025-05-31 21:01:21.481323 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] *************
2025-05-31 21:01:21.481330 | orchestrator | Saturday 31 May 2025 20:56:46 +0000 (0:00:00.980) 0:01:49.649 **********
2025-05-31 21:01:21.481336 | orchestrator | changed: [testbed-node-0]
2025-05-31 21:01:21.481342 | orchestrator | changed: [testbed-node-1]
2025-05-31 21:01:21.481348 | orchestrator | changed: [testbed-node-2]
2025-05-31 21:01:21.481354 | orchestrator |
2025-05-31 21:01:21.481360 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] *************
2025-05-31 21:01:21.481366 | orchestrator | Saturday 31 May 2025 20:56:48 +0000 (0:00:01.641) 0:01:51.291 **********
2025-05-31 21:01:21.481372 | orchestrator | changed: [testbed-node-0]
2025-05-31 21:01:21.481378 | orchestrator | changed: [testbed-node-1]
2025-05-31 21:01:21.481384 | orchestrator | changed: [testbed-node-2]
2025-05-31 21:01:21.481390 | orchestrator |
2025-05-31 21:01:21.481396 | orchestrator | TASK [include_role : cloudkitty] ***********************************************
2025-05-31 21:01:21.481402 | orchestrator | Saturday 31 May 2025 20:56:50 +0000 (0:00:02.245) 0:01:53.536 **********
2025-05-31 21:01:21.481409 | orchestrator | skipping: [testbed-node-0]
2025-05-31 21:01:21.481415 | orchestrator | skipping: [testbed-node-1]
2025-05-31 21:01:21.481421 | orchestrator | skipping: [testbed-node-2]
2025-05-31 21:01:21.481427 | orchestrator |
2025-05-31 21:01:21.481433 | orchestrator | TASK [include_role : cyborg] ***************************************************
2025-05-31 21:01:21.481439 | orchestrator | Saturday 31 May 2025 20:56:51 +0000 (0:00:00.589) 0:01:54.125 **********
2025-05-31 21:01:21.481445 | orchestrator | skipping: [testbed-node-0]
2025-05-31 21:01:21.481451 | orchestrator | skipping: [testbed-node-1]
2025-05-31 21:01:21.481457 | orchestrator | skipping: [testbed-node-2]
2025-05-31 21:01:21.481463 | orchestrator |
2025-05-31 21:01:21.481469 | orchestrator | TASK [include_role : designate] ************************************************
2025-05-31 21:01:21.481475 | orchestrator | Saturday 31 May 2025 20:56:51 +0000 (0:00:00.320) 0:01:54.446 **********
2025-05-31 21:01:21.481482 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-31 21:01:21.481488 | orchestrator |
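Unlike ceph-rgw, whose ProxySQL tasks were skipped, cinder gets users and rules config on all three controllers: it owns a MariaDB schema, so ProxySQL needs a user entry and routing rules for it. A minimal illustration of the two objects in ProxySQL's data model; the concrete values are invented for illustration, and the real files are generated from kolla's passwords and hostgroup layout:

# Minimal illustration of ProxySQL's data model for one service; values
# are invented, not read from the generated kolla files.
user = {
    "username": "cinder",
    "password": "<from passwords.yml>",  # placeholder, not a real secret
    "default_hostgroup": 10,             # writer hostgroup; layout assumed
}
rule = {
    "rule_id": 1,
    "schemaname": "cinder",              # route queries for the cinder schema
    "destination_hostgroup": 10,
    "apply": 1,
}
print(user)
print(rule)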
2025-05-31 21:01:21.481494 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ******************
2025-05-31 21:01:21.481500 | orchestrator | Saturday 31 May 2025 20:56:52 +0000 (0:00:00.767) 0:01:55.213 **********
2025-05-31 21:01:21.481511 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-31 21:01:21.481523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-31 21:01:21.481535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-31 21:01:21.481543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-31 21:01:21.481549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-31 21:01:21.481556 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-31 21:01:21.481567 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-31 21:01:21.481573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-05-31 21:01:21.481588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-31 21:01:21.481595 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-31 21:01:21.481602 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-31 21:01:21.481608 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-31 21:01:21.481615 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-31 21:01:21.481624 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-05-31 21:01:21.481638 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-31 21:01:21.481645 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-31 21:01:21.481652 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-31 21:01:21.481658 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-31 21:01:21.481665 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-31 21:01:21.481671 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-31 21:01:21.481687 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
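The designate items above exercise all three healthcheck helpers that appear in these kolla images: healthcheck_curl for the HTTP API, healthcheck_port for workers that must hold a RabbitMQ connection on 5672, and healthcheck_listen for designate-backend-bind9's named on port 53. The helpers themselves are shell scripts inside the images; the following is only a rough stdlib analog of the listen-style probe, for orientation:

# Rough approximation of an "is anything accepting on this port" probe,
# similar in spirit to the healthcheck_listen check used for bind9.
import socket
import sys

def listening(port, host="127.0.0.1", timeout=2.0):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

# A non-zero exit code is what flips a container to "unhealthy".
sys.exit(0 if listening(53) else 1)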
2025-05-31 21:01:21.481693 | orchestrator |
2025-05-31 21:01:21.481700 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] ***
2025-05-31 21:01:21.481706 | orchestrator | Saturday 31 May 2025 20:56:55 +0000 (0:00:03.749) 0:01:58.962 **********
2025-05-31 21:01:21.481717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-31 21:01:21.481723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-31 21:01:21.481730 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-31 21:01:21.481736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-31 21:01:21.481743 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-31 21:01:21.481756 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-31 21:01:21.481766 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-05-31 21:01:21.481773 | orchestrator | skipping: [testbed-node-0]
2025-05-31 21:01:21.481779 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-31 21:01:21.481786 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-31 21:01:21.481792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-31 21:01:21.481799 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-31 21:01:21.481812 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-31 21:01:21.481822 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-31 21:01:21.481829 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-31 21:01:21.481835 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-05-31 21:01:21.481842 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-31 21:01:21.481848 | orchestrator | skipping: [testbed-node-1]
2025-05-31 21:01:21.481854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-31 21:01:21.481890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-31 21:01:21.481897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-31 21:01:21.481908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-31 21:01:21.481915 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-05-31 21:01:21.481921 | orchestrator | skipping: [testbed-node-2]
2025-05-31 21:01:21.481927 | orchestrator |
2025-05-31 21:01:21.481933 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] *********************
2025-05-31 21:01:21.481939 | orchestrator | Saturday 31 May 2025 20:56:56 +0000 (0:00:00.816) 0:01:59.779 **********
2025-05-31 21:01:21.481947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2025-05-31 21:01:21.481958 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2025-05-31 21:01:21.481969 | orchestrator | skipping: [testbed-node-0]
2025-05-31 21:01:21.481979 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2025-05-31 21:01:21.481990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2025-05-31 21:01:21.482007 | orchestrator | skipping: [testbed-node-1]
2025-05-31 21:01:21.482043 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2025-05-31 21:01:21.482050 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2025-05-31 21:01:21.482056 | orchestrator | skipping: [testbed-node-2]
2025-05-31 21:01:21.482062 | orchestrator |
2025-05-31 21:01:21.482069 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] **********
2025-05-31 21:01:21.482075 | orchestrator | Saturday 31 May 2025 20:56:57 +0000 (0:00:00.963) 0:02:00.742 **********
2025-05-31 21:01:21.482081 | orchestrator | changed: [testbed-node-0]
2025-05-31 21:01:21.482087 | orchestrator | changed: [testbed-node-1]
2025-05-31 21:01:21.482093 | orchestrator | changed: [testbed-node-2]
2025-05-31 21:01:21.482099 | orchestrator |
2025-05-31 21:01:21.482105 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] **********
2025-05-31 21:01:21.482111 | orchestrator | Saturday 31 May 2025 20:56:59 +0000 (0:00:01.706) 0:02:02.449 **********
2025-05-31 21:01:21.482117 | orchestrator | changed: [testbed-node-0]
2025-05-31 21:01:21.482124 | orchestrator | changed: [testbed-node-1]
2025-05-31 21:01:21.482130 | orchestrator | changed: [testbed-node-2]
2025-05-31 21:01:21.482136 | orchestrator |
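That completes the same five-task sequence for the third service in a row: copy the HAProxy config, optionally fold it into a single external frontend, configure the firewall, then write ProxySQL users and rules. A compact restatement of the loop, with step names taken from the task headers above and the service list from this section; print() merely stands in for the real role invocations:

# Compact restatement of the per-service sequence visible in the task
# headers above; print() stands in for the real role invocations.
STEPS = [
    "haproxy-config : Copying over {svc} haproxy config",
    "haproxy-config : Add configuration for {svc} when using single external frontend",
    "haproxy-config : Configuring firewall for {svc}",
    "proxysql-config : Copying over {svc} ProxySQL users config",
    "proxysql-config : Copying over {svc} ProxySQL rules config",
]
for svc in ["ceph-rgw", "cinder", "designate"]:
    for step in STEPS:
        print("TASK [" + step.format(svc=svc) + "]")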
2025-05-31 21:01:21.482142 | orchestrator | TASK [include_role : etcd] *****************************************************
2025-05-31 21:01:21.482152 | orchestrator | Saturday 31 May 2025 20:57:01 +0000 (0:00:02.141) 0:02:04.590 **********
2025-05-31 21:01:21.482158 | orchestrator | skipping: [testbed-node-0]
2025-05-31 21:01:21.482164 | orchestrator | skipping: [testbed-node-1]
2025-05-31 21:01:21.482170 | orchestrator | skipping: [testbed-node-2]
2025-05-31 21:01:21.482176 | orchestrator |
2025-05-31 21:01:21.482182 | orchestrator | TASK [include_role : glance] ***************************************************
2025-05-31 21:01:21.482188 | orchestrator | Saturday 31 May 2025 20:57:01 +0000 (0:00:00.391) 0:02:04.982 **********
2025-05-31 21:01:21.482195 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-31 21:01:21.482201 | orchestrator |
2025-05-31 21:01:21.482207 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] *********************
2025-05-31 21:01:21.482213 | orchestrator | Saturday 31 May 2025 20:57:02 +0000 (0:00:00.920) 0:02:05.902 **********
2025-05-31 21:01:21.482228 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-05-31 21:01:21.482244 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
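The glance_api entry above (results for the remaining nodes follow below) is the first in this run to carry frontend_http_extra and backend_http_extra options, 'timeout client 6h' and 'timeout server 6h', presumably so long-running image uploads are not cut by HAProxy's default timeouts; glance-tls-proxy is skipped because it is defined with enabled: 'no'. A sketch of how such extra options could be appended to a rendered section, reusing the simplified renderer idea from the ceph-rgw sketch earlier (illustrative only, VIP assumed):

# Illustrative only: append the frontend/backend extra options from the
# glance entry above to a rendered HAProxy section.
def with_extras(section_lines, extras):
    return section_lines + [f"    {opt}" for opt in extras]

frontend = ["frontend glance_api", "    bind 192.168.16.9:9292"]  # VIP assumed
print("\n".join(with_extras(frontend, ["timeout client 6h"])))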
'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-31 21:01:21.482590 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-31 21:01:21.482682 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-31 21:01:21.482744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-31 21:01:21.482762 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})
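The glance item dumps above are dense, so here is the glance_api haproxy block from the changed items re-rendered as YAML. This is a straight re-notation of the logged values, not kolla-ansible's template output: each custom_member_list string is emitted verbatim as a server line in the generated backend, and the frontend_http_extra/backend_http_extra lists append raw HAProxy directives (the 6h client/server timeouts leave room for long-running image uploads and downloads).

```yaml
# Re-notation of the logged glance_api haproxy settings; values copied
# verbatim from the "Copying over glance haproxy config" output above.
glance_api:
  enabled: true
  mode: http
  external: false
  port: "9292"
  frontend_http_extra:
    - "timeout client 6h"
  backend_http_extra:
    - "timeout server 6h"
  custom_member_list:
    - "server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5"
    - "server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5"
    - "server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5"
glance_api_external:
  enabled: true
  mode: http
  external: true
  external_fqdn: api.testbed.osism.xyz
  port: "9292"
```

The glance-tls-proxy sibling items are skipped on every node because their enabled flag is 'no', so only the plain-HTTP backends are configured in this run.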
2025-05-31 21:01:21.482781 | orchestrator | 2025-05-31 21:01:21.482794 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2025-05-31 21:01:21.482806 | orchestrator | Saturday 31 May 2025 20:57:07 +0000 (0:00:04.165) 0:02:10.068 ********** 2025-05-31 21:01:21.482826 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-31 21:01:21.482922 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-31 21:01:21.482950 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:01:21.482968 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1
192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-31 21:01:21.482993 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-31 21:01:21.483011 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:01:21.483027 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': 
['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-31 21:01:21.483048 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-31 21:01:21.483066 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:01:21.483077 | orchestrator | 2025-05-31 21:01:21.483088 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-05-31 21:01:21.483099 | orchestrator | Saturday 31 May 2025 20:57:10 +0000 (0:00:02.933) 0:02:13.002 ********** 2025-05-31 21:01:21.483111 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-31 21:01:21.483124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 
'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-31 21:01:21.483136 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:01:21.483147 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-31 21:01:21.483163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-31 21:01:21.483175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-31 21:01:21.483186 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:01:21.483206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-31 21:01:21.483222 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:01:21.483233 | orchestrator | 2025-05-31 21:01:21.483244 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2025-05-31 21:01:21.483255 | orchestrator | Saturday 31 May 2025 20:57:13 +0000 (0:00:03.096) 0:02:16.098 ********** 2025-05-31 21:01:21.483266 | orchestrator | changed: [testbed-node-0] 2025-05-31 21:01:21.483276 | orchestrator | changed: [testbed-node-1] 2025-05-31 21:01:21.483287 | orchestrator | changed: [testbed-node-2] 2025-05-31 21:01:21.483298 | orchestrator | 2025-05-31 21:01:21.483309 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2025-05-31 21:01:21.483320 | orchestrator | Saturday 31 May 2025 20:57:14 +0000 (0:00:01.591) 0:02:17.689 ********** 2025-05-31 21:01:21.483331 | orchestrator | changed: [testbed-node-0] 2025-05-31 21:01:21.483341 | orchestrator | changed: [testbed-node-1] 2025-05-31 21:01:21.483352 | orchestrator | changed: [testbed-node-2]
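The two ProxySQL tasks template per-service configuration fragments onto each controller: a users file that maps the service's database account to a ProxySQL hostgroup, and a rules file that steers its queries. The log only records that the files changed, not their contents, so the sketch below is a hypothetical illustration of what a users fragment could look like; every field name and the variable name in it are assumptions for illustration, not taken from this output.

```yaml
# Hypothetical sketch of a per-service ProxySQL users fragment; the schema
# and the variable name below are assumptions, not copied from the log.
- username: glance                              # assumed service DB account
  password: "{{ glance_database_password }}"    # assumed variable name
  default_hostgroup: 0                          # assumed writer hostgroup
```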
2025-05-31 21:01:21.483363 | orchestrator | 2025-05-31 21:01:21.483373 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2025-05-31 21:01:21.483384 | orchestrator | Saturday 31 May 2025 20:57:16 +0000 (0:00:01.969) 0:02:19.658 ********** 2025-05-31 21:01:21.483395 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:01:21.483405 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:01:21.483416 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:01:21.483427 | orchestrator | 2025-05-31 21:01:21.483437 | orchestrator | TASK [include_role : grafana] ************************************************** 2025-05-31 21:01:21.483448 | orchestrator | Saturday 31 May 2025 20:57:16 +0000 (0:00:00.303) 0:02:19.961 ********** 2025-05-31 21:01:21.483459 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 21:01:21.483469 | orchestrator | 2025-05-31 21:01:21.483480 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2025-05-31 21:01:21.483491 | orchestrator | Saturday 31 May 2025 20:57:17 +0000 (0:00:00.850) 0:02:20.812 ********** 2025-05-31 21:01:21.483502 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-31 21:01:21.483515 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-31 21:01:21.483531 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-31 21:01:21.483548 | orchestrator |
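Grafana's haproxy block is simpler than glance's: an internal frontend on the internal VIP and an external one behind api.testbed.osism.xyz, both on port 3000, with no custom_member_list, so the backend members are derived from the hosts in the grafana group (kolla-ansible's default behaviour). Re-rendered as YAML from the changed items above:

```yaml
# Re-notation of the logged grafana haproxy settings (identical on all
# three nodes). Note the mixed 'yes'/true enabled flags in the source data.
grafana_server:
  enabled: "yes"
  mode: http
  external: false
  port: "3000"
  listen_port: "3000"
grafana_server_external:
  enabled: true
  mode: http
  external: true
  external_fqdn: api.testbed.osism.xyz
  port: "3000"
  listen_port: "3000"
```

As with glance, the following "single external frontend" task is skipped, presumably because the option that folds all external APIs into one shared frontend is disabled in this testbed, so every service keeps its own external frontend.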
2025-05-31 21:01:21.483559 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-05-31 21:01:21.483570 | orchestrator | Saturday 31 May 2025 20:57:20 +0000 (0:00:03.076) 0:02:23.888 ********** 2025-05-31 21:01:21.483588 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-31 21:01:21.483600 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-31 21:01:21.483612 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:01:21.483623 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:01:21.483634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-31 21:01:21.483645 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:01:21.483656 | orchestrator | 2025-05-31 21:01:21.483667 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2025-05-31 21:01:21.483678 | orchestrator | Saturday 31 May 2025 20:57:21 +0000 (0:00:00.655) 0:02:24.311 ********** 2025-05-31 21:01:21.483689 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-05-31 21:01:21.483701 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-05-31 21:01:21.483713 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:01:21.483724 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-05-31 21:01:21.483734 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True,
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-05-31 21:01:21.483745 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:01:21.483756 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-05-31 21:01:21.483776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-05-31 21:01:21.483787 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:01:21.483798 | orchestrator | 2025-05-31 21:01:21.483809 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2025-05-31 21:01:21.483820 | orchestrator | Saturday 31 May 2025 20:57:21 +0000 (0:00:00.655) 0:02:24.967 ********** 2025-05-31 21:01:21.483831 | orchestrator | changed: [testbed-node-0] 2025-05-31 21:01:21.483842 | orchestrator | changed: [testbed-node-1] 2025-05-31 21:01:21.483852 | orchestrator | changed: [testbed-node-2] 2025-05-31 21:01:21.483881 | orchestrator | 2025-05-31 21:01:21.483892 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2025-05-31 21:01:21.483903 | orchestrator | Saturday 31 May 2025 20:57:23 +0000 (0:00:01.649) 0:02:26.617 ********** 2025-05-31 21:01:21.483914 | orchestrator | changed: [testbed-node-0] 2025-05-31 21:01:21.483924 | orchestrator | changed: [testbed-node-1] 2025-05-31 21:01:21.483935 | orchestrator | changed: [testbed-node-2] 2025-05-31 21:01:21.483945 | orchestrator | 2025-05-31 21:01:21.483956 | orchestrator | TASK [include_role : heat] ***************************************************** 2025-05-31 21:01:21.483967 | orchestrator | Saturday 31 May 2025 20:57:25 +0000 (0:00:02.171) 0:02:28.788 ********** 2025-05-31 21:01:21.483978 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:01:21.483988 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:01:21.484005 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:01:21.484016 | orchestrator | 2025-05-31 21:01:21.484027 | orchestrator | TASK [include_role : horizon] ************************************************** 2025-05-31 21:01:21.484038 | orchestrator | Saturday 31 May 2025 20:57:26 +0000 (0:00:00.347) 0:02:29.135 ********** 2025-05-31 21:01:21.484049 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 21:01:21.484059 | orchestrator |
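Each container definition in these item dumps carries a healthcheck dict (interval, retries, start_period, test, timeout) that is passed through to the container engine's healthcheck mechanism. To make the field mapping concrete, here is the glance-api healthcheck logged earlier in compose-style notation; this is illustrative only, since kolla-ansible does not deploy via docker compose, and the second-unit suffixes are an assumption (the log stores bare numbers).

```yaml
# Compose-style rendering of the logged glance-api healthcheck; for
# illustration only, kolla-ansible does not use docker compose.
healthcheck:
  test: ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9292"]
  interval: 30s     # bare '30' in the log; seconds assumed
  timeout: 30s
  retries: 3
  start_period: 5s
```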
2025-05-31 21:01:21.484070 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2025-05-31 21:01:21.484081 | orchestrator | Saturday 31 May 2025 20:57:27 +0000 (0:00:00.899) 0:02:30.035 ********** 2025-05-31 21:01:21.484093 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-31 21:01:21.484127 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80',
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-31 21:01:21.484141 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-31 21:01:21.484164 | orchestrator |
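Horizon's haproxy block is the most involved one in this play: an HTTPS frontend on 443 that proxies to the horizon containers on port 80, redirect frontends that push plain-HTTP clients to HTTPS, and an acme_client backend that the frontends route ACME HTTP-01 challenges to via the logged use_backend rule. Condensed to YAML (internal variants shown; the *_external twins differ only in external: true and the external FQDN):

```yaml
# Re-notation of the logged horizon haproxy settings (condensed).
horizon:
  enabled: true
  mode: http
  port: "443"        # TLS-terminating frontend
  listen_port: "80"  # port served by the horizon containers
  frontend_http_extra:
    - "use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }"
  backend_http_extra:
    - "balance roundrobin"
  tls_backend: "no"
horizon_redirect:
  enabled: true
  mode: redirect     # plain-HTTP frontend that redirects to HTTPS
  port: "80"
  listen_port: "80"
  frontend_redirect_extra:
    - "use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }"
acme_client:
  enabled: true
  with_frontend: false  # backend only, reached via the use_backend rules
  custom_member_list: []
```

The empty custom_member_list means the acme_client_back backend is created without members at this point, so challenge requests would not be answered until an ACME client is registered behind it.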
2025-05-31 21:01:21.484175 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2025-05-31 21:01:21.484186 | orchestrator | Saturday 31 May 2025 20:57:31 +0000 (0:00:04.787) 0:02:34.822 ********** 2025-05-31 21:01:21.484210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-31 21:01:21.484223 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:01:21.484239 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend':
False, 'custom_member_list': []}}}})  2025-05-31 21:01:21.484341 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:01:21.484362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-31 21:01:21.484374 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:01:21.484385 | orchestrator | 2025-05-31 21:01:21.484396 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2025-05-31 21:01:21.484407 | orchestrator | Saturday 31 May 2025 20:57:32 +0000 (0:00:00.537) 0:02:35.359 ********** 2025-05-31 21:01:21.484419 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-31 21:01:21.484437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-31 21:01:21.484449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-31 21:01:21.484461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-31 21:01:21.484473 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-05-31 21:01:21.484485 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:01:21.484501 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-31 21:01:21.484513 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-31 21:01:21.484524 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-31 21:01:21.484542 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-31 21:01:21.484553 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-05-31 21:01:21.484564 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:01:21.484575 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-31 21:01:21.484586 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-31 21:01:21.484597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-31 21:01:21.484616 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-31 21:01:21.484627 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-05-31 21:01:21.484637 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:01:21.484648 | orchestrator | 2025-05-31 21:01:21.484659 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2025-05-31 21:01:21.484670 | orchestrator | Saturday 31 May 2025 20:57:33 +0000 (0:00:01.274) 0:02:36.634 ********** 2025-05-31 21:01:21.484680 | orchestrator | changed: [testbed-node-0] 2025-05-31 21:01:21.484691 | orchestrator | changed: [testbed-node-1] 2025-05-31 21:01:21.484702 | orchestrator | changed: [testbed-node-2] 2025-05-31 21:01:21.484713 | orchestrator | 2025-05-31 21:01:21.484723 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2025-05-31 21:01:21.484734 | orchestrator | Saturday 31 May 2025 20:57:35 +0000 (0:00:01.490) 0:02:38.125 ********** 2025-05-31 21:01:21.484745 | orchestrator | changed: [testbed-node-0] 2025-05-31 21:01:21.484755 | orchestrator | changed: [testbed-node-1] 2025-05-31 21:01:21.484766 | orchestrator | changed: [testbed-node-2] 2025-05-31 21:01:21.484777 | orchestrator | 2025-05-31 21:01:21.484788 | orchestrator | TASK [include_role : influxdb] ************************************************* 2025-05-31 21:01:21.484799 | orchestrator | Saturday 31 May 2025 20:57:37 +0000 (0:00:01.875) 0:02:40.001 ********** 2025-05-31 21:01:21.484809 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:01:21.484820 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:01:21.484830 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:01:21.484841 | orchestrator | 2025-05-31 21:01:21.484851 | orchestrator | TASK [include_role : ironic] *************************************************** 2025-05-31 21:01:21.484926 | orchestrator | Saturday 31 May 2025 20:57:37 +0000 (0:00:00.333) 0:02:40.335 ********** 2025-05-31 21:01:21.484939 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:01:21.484950 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:01:21.484961 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:01:21.484972 | orchestrator | 2025-05-31 21:01:21.484988 | orchestrator | TASK [include_role : keystone] ************************************************* 2025-05-31 21:01:21.484999 | orchestrator | Saturday 31 May 2025 20:57:37 +0000 (0:00:00.260) 0:02:40.596 ********** 2025-05-31 21:01:21.485009 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 21:01:21.485020 | orchestrator | 2025-05-31 21:01:21.485031 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2025-05-31 21:01:21.485042 | orchestrator | Saturday 31 May 2025 20:57:38 +0000 (0:00:01.190) 0:02:41.786 ********** 2025-05-31 21:01:21.485063 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-31 21:01:21.485083 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-31 21:01:21.485097 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-31 21:01:21.485109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-31 21:01:21.485125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-31 21:01:21.485137 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-31 21:01:21.485156 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-31 21:01:21.485175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-31 21:01:21.485187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-31 21:01:21.485199 | orchestrator | 2025-05-31 21:01:21.485210 | orchestrator | TASK [haproxy-config : Add configuration 
for keystone when using single external frontend] *** 2025-05-31 21:01:21.485221 | orchestrator | Saturday 31 May 2025 20:57:42 +0000 (0:00:03.989) 0:02:45.776 ********** 2025-05-31 21:01:21.485290 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-05-31 21:01:21.485307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-31 21:01:21.485327 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-31 21:01:21.485346 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:01:21.485358 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': 
'5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-05-31 21:01:21.485368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-31 21:01:21.485379 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-31 21:01:21.485388 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:01:21.485402 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-05-31 21:01:21.485419 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-31 21:01:21.485435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-31 21:01:21.485444 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:01:21.485454 | orchestrator | 2025-05-31 21:01:21.485463 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2025-05-31 21:01:21.485473 | orchestrator | Saturday 31 May 2025 20:57:43 +0000 (0:00:00.881) 0:02:46.657 ********** 2025-05-31 21:01:21.485483 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-05-31 21:01:21.485494 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-05-31 21:01:21.485504 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:01:21.485514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-05-31 21:01:21.485524 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-05-31 21:01:21.485534 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:01:21.485543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-05-31 21:01:21.485553 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-05-31 21:01:21.485563 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:01:21.485572 | orchestrator | 2025-05-31 21:01:21.485582 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2025-05-31 21:01:21.485591 | orchestrator | Saturday 31 May 2025 20:57:44 +0000 (0:00:01.168) 0:02:47.826 ********** 2025-05-31 21:01:21.485601 | orchestrator | changed: [testbed-node-0] 2025-05-31 21:01:21.485610 | orchestrator | changed: [testbed-node-1] 2025-05-31 21:01:21.485619 | orchestrator | changed: [testbed-node-2] 2025-05-31 21:01:21.485629 | orchestrator | 2025-05-31 21:01:21.485638 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2025-05-31 21:01:21.485655 | orchestrator | Saturday 31 May 2025 20:57:46 +0000 (0:00:01.266) 0:02:49.093 ********** 2025-05-31 21:01:21.485671 | orchestrator | changed: [testbed-node-0] 2025-05-31 21:01:21.485680 | orchestrator | changed: [testbed-node-1] 2025-05-31 21:01:21.485690 | 
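
One detail visible in the keystone items above: the volumes lists contain empty-string entries ('') where optional mounts are unset in this deployment. Presumably these falsy entries are dropped before the container is created; a minimal sketch of that cleanup, using the volume list copied from the keystone item:

volumes = [
    '/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro',
    '/etc/localtime:/etc/localtime:ro',
    '/etc/timezone:/etc/timezone:ro',
    '',                                # unset optional mount
    'kolla_logs:/var/log/kolla/',
    '',                                # unset optional mount
    'keystone_fernet_tokens:/etc/keystone/fernet-keys',
]

# Keep only the non-empty bind/volume specs.
mounts = [v for v in volumes if v]
assert len(mounts) == 5
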
orchestrator | changed: [testbed-node-2] 2025-05-31 21:01:21.485699 | orchestrator | 2025-05-31 21:01:21.485709 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2025-05-31 21:01:21.485718 | orchestrator | Saturday 31 May 2025 20:57:48 +0000 (0:00:02.110) 0:02:51.203 ********** 2025-05-31 21:01:21.485728 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:01:21.485737 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:01:21.485747 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:01:21.485756 | orchestrator | 2025-05-31 21:01:21.485766 | orchestrator | TASK [include_role : magnum] *************************************************** 2025-05-31 21:01:21.485775 | orchestrator | Saturday 31 May 2025 20:57:48 +0000 (0:00:00.359) 0:02:51.562 ********** 2025-05-31 21:01:21.485785 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 21:01:21.485794 | orchestrator | 2025-05-31 21:01:21.485804 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2025-05-31 21:01:21.485814 | orchestrator | Saturday 31 May 2025 20:57:49 +0000 (0:00:01.252) 0:02:52.815 ********** 2025-05-31 21:01:21.486069 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-31 21:01:21.486093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.486105 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-31 21:01:21.486116 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.486141 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-31 21:01:21.486214 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.486228 | orchestrator | 2025-05-31 21:01:21.486238 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2025-05-31 21:01:21.486248 | orchestrator | Saturday 31 May 2025 20:57:52 +0000 (0:00:03.154) 0:02:55.969 ********** 2025-05-31 21:01:21.486259 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-31 21:01:21.486269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.486291 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:01:21.486306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-31 21:01:21.486381 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.486396 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:01:21.486407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-31 21:01:21.486418 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.486427 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:01:21.486437 | orchestrator | 2025-05-31 21:01:21.486447 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2025-05-31 21:01:21.486457 | orchestrator | Saturday 31 May 2025 20:57:53 +0000 (0:00:00.728) 0:02:56.698 ********** 2025-05-31 21:01:21.486467 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-05-31 21:01:21.486484 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-05-31 21:01:21.486494 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:01:21.486504 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-05-31 21:01:21.486514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-05-31 21:01:21.486524 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:01:21.486533 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-05-31 21:01:21.486548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-05-31 21:01:21.486558 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:01:21.486568 | orchestrator | 2025-05-31 21:01:21.486577 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2025-05-31 21:01:21.486587 | orchestrator | Saturday 31 May 2025 20:57:55 +0000 (0:00:01.495) 0:02:58.193 ********** 2025-05-31 21:01:21.486596 | orchestrator | 
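
Each service item also carries a healthcheck dict: interval, retries, start_period and timeout (in seconds) plus a CMD-SHELL test command. As a rough illustration of how such a dict maps onto a Docker healthcheck, here is a sketch using docker-py's Healthcheck type, which expects durations in nanoseconds; this is not the code kolla-ansible itself runs, just a translation of the values shown for magnum-api above:

from docker.types import Healthcheck

# Values copied from the magnum-api loop item above.
hc = {'interval': '30', 'retries': '3', 'start_period': '5',
      'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'],
      'timeout': '30'}

NS = 1_000_000_000  # docker-py expresses durations in nanoseconds

healthcheck = Healthcheck(
    test=hc['test'],
    interval=int(hc['interval']) * NS,
    timeout=int(hc['timeout']) * NS,
    retries=int(hc['retries']),
    start_period=int(hc['start_period']) * NS,
)
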
changed: [testbed-node-0] 2025-05-31 21:01:21.486606 | orchestrator | changed: [testbed-node-1] 2025-05-31 21:01:21.486615 | orchestrator | changed: [testbed-node-2] 2025-05-31 21:01:21.486625 | orchestrator | 2025-05-31 21:01:21.486634 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2025-05-31 21:01:21.486644 | orchestrator | Saturday 31 May 2025 20:57:56 +0000 (0:00:01.365) 0:02:59.558 ********** 2025-05-31 21:01:21.486695 | orchestrator | changed: [testbed-node-0] 2025-05-31 21:01:21.486707 | orchestrator | changed: [testbed-node-1] 2025-05-31 21:01:21.486717 | orchestrator | changed: [testbed-node-2] 2025-05-31 21:01:21.486726 | orchestrator | 2025-05-31 21:01:21.486736 | orchestrator | TASK [include_role : manila] *************************************************** 2025-05-31 21:01:21.486746 | orchestrator | Saturday 31 May 2025 20:57:58 +0000 (0:00:02.032) 0:03:01.591 ********** 2025-05-31 21:01:21.486827 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 21:01:21.486841 | orchestrator | 2025-05-31 21:01:21.486851 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2025-05-31 21:01:21.486881 | orchestrator | Saturday 31 May 2025 20:57:59 +0000 (0:00:01.159) 0:03:02.751 ********** 2025-05-31 21:01:21.486893 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-05-31 21:01:21.486904 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-05-31 21:01:21.486922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.486932 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.486947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.487019 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.487033 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.487044 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-05-31 21:01:21.487060 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.487071 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.487085 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.487155 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.487170 | orchestrator | 2025-05-31 21:01:21.487180 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2025-05-31 21:01:21.487190 | orchestrator | Saturday 31 May 2025 20:58:04 +0000 (0:00:04.406) 0:03:07.157 ********** 2025-05-31 21:01:21.487200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-05-31 21:01:21.487218 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.487228 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.487239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.487249 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:01:21.487263 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-05-31 21:01:21.487332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.487346 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.487363 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.487373 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:01:21.487383 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-05-31 21:01:21.487393 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.487412 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.487481 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.487495 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:01:21.487505 | orchestrator | 2025-05-31 21:01:21.487515 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2025-05-31 21:01:21.487524 | orchestrator | Saturday 31 May 2025 20:58:04 +0000 (0:00:00.723) 0:03:07.881 ********** 2025-05-31 21:01:21.487542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-05-31 21:01:21.487552 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-05-31 21:01:21.487562 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:01:21.487572 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-05-31 21:01:21.487582 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-05-31 21:01:21.487591 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:01:21.487601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-05-31 21:01:21.487611 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-05-31 21:01:21.487620 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:01:21.487630 | orchestrator | 2025-05-31 21:01:21.487640 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2025-05-31 21:01:21.487650 | orchestrator | Saturday 31 May 2025 20:58:05 +0000 (0:00:00.870) 0:03:08.752 ********** 2025-05-31 21:01:21.487660 | orchestrator | changed: [testbed-node-0] 2025-05-31 21:01:21.487670 | orchestrator | changed: [testbed-node-1] 2025-05-31 21:01:21.487680 | orchestrator | changed: [testbed-node-2] 2025-05-31 21:01:21.487690 | orchestrator | 2025-05-31 
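
A pattern that holds across the keystone, magnum and manila items above: every external frontend reuses the same FQDN, api.testbed.osism.xyz, and the services are distinguished only by their listen port, while the matching internal frontend binds the same port on the internal VIP. A small summary built from values that appear verbatim in this log:

# Service API ports, verbatim from the haproxy sub-dicts in this log.
# All external frontends share the FQDN api.testbed.osism.xyz.
services = {
    'keystone': '5000',
    'magnum':   '9511',
    'manila':   '8786',
}

for name, port in services.items():
    print(f"{name:10s} api.testbed.osism.xyz:{port} -> backend port {port}")
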
21:01:21.487700 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2025-05-31 21:01:21.487709 | orchestrator | Saturday 31 May 2025 20:58:07 +0000 (0:00:01.708) 0:03:10.460 ********** 2025-05-31 21:01:21.487719 | orchestrator | changed: [testbed-node-0] 2025-05-31 21:01:21.487729 | orchestrator | changed: [testbed-node-1] 2025-05-31 21:01:21.487738 | orchestrator | changed: [testbed-node-2] 2025-05-31 21:01:21.487748 | orchestrator | 2025-05-31 21:01:21.487757 | orchestrator | TASK [include_role : mariadb] ************************************************** 2025-05-31 21:01:21.487767 | orchestrator | Saturday 31 May 2025 20:58:09 +0000 (0:00:02.122) 0:03:12.583 ********** 2025-05-31 21:01:21.487777 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 21:01:21.487786 | orchestrator | 2025-05-31 21:01:21.487796 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2025-05-31 21:01:21.487806 | orchestrator | Saturday 31 May 2025 20:58:10 +0000 (0:00:01.052) 0:03:13.636 ********** 2025-05-31 21:01:21.487816 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-31 21:01:21.487826 | orchestrator | 2025-05-31 21:01:21.487835 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2025-05-31 21:01:21.487844 | orchestrator | Saturday 31 May 2025 20:58:13 +0000 (0:00:03.069) 0:03:16.705 ********** 2025-05-31 21:01:21.487941 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-31 21:01:21.487968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-31 21:01:21.487980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-31 21:01:21.487995 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-31 21:01:21.488011 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:01:21.488021 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:01:21.488094 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-31 21:01:21.488109 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-31 21:01:21.488119 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:01:21.488129 | orchestrator | 2025-05-31 21:01:21.488139 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2025-05-31 21:01:21.488149 | orchestrator | Saturday 31 May 2025 20:58:16 +0000 (0:00:02.481) 0:03:19.186 ********** 2025-05-31 21:01:21.488159 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 
3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-31 21:01:21.488288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-31 21:01:21.488315 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:01:21.488326 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-31 21:01:21.488337 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': 
['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-31 21:01:21.488347 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:01:21.488437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-31 21:01:21.488453 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-31 21:01:21.488464 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:01:21.488474 | orchestrator | 2025-05-31 21:01:21.488483 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2025-05-31 21:01:21.488493 | orchestrator | Saturday 31 May 2025 20:58:18 +0000 (0:00:02.076) 0:03:21.263 ********** 2025-05-31 21:01:21.488503 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 
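Note: the mariadb entries above contain everything HAProxy needs for this service: TCP mode on port 3306, keepalive options on both sides, one-hour client/server timeouts, and a custom_member_list that prefers testbed-node-0 and holds the other two Galera members in reserve (AVAILABLE_WHEN_DONOR=1 in the clustercheck environment additionally keeps a node usable while it acts as a state-transfer donor). Rendered into an HAProxy listen block this would look roughly as follows; the sketch is assembled from the logged values, not taken from the role's template, and the bind address is a placeholder since the VIP does not appear in this excerpt.

    listen mariadb
        mode tcp
        bind <internal_vip>:3306    # placeholder; the VIP is not shown in this excerpt
        option clitcpka
        timeout client 3600s
        option srvtcpka
        timeout server 3600s
        server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5
        server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup
        server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup

The backup keyword makes the cluster effectively single-writer behind the VIP, which avoids write conflicts between Galera members.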
'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-31 21:01:21.488514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-31 21:01:21.488530 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:01:21.488545 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-31 21:01:21.488555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-31 21:01:21.488565 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:01:21.488636 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-31 21:01:21.488651 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 
check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-31 21:01:21.488661 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:01:21.488670 | orchestrator | 2025-05-31 21:01:21.488680 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2025-05-31 21:01:21.488690 | orchestrator | Saturday 31 May 2025 20:58:20 +0000 (0:00:02.444) 0:03:23.707 ********** 2025-05-31 21:01:21.488700 | orchestrator | changed: [testbed-node-0] 2025-05-31 21:01:21.488710 | orchestrator | changed: [testbed-node-1] 2025-05-31 21:01:21.488719 | orchestrator | changed: [testbed-node-2] 2025-05-31 21:01:21.488729 | orchestrator | 2025-05-31 21:01:21.488739 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2025-05-31 21:01:21.488749 | orchestrator | Saturday 31 May 2025 20:58:22 +0000 (0:00:02.078) 0:03:25.785 ********** 2025-05-31 21:01:21.488758 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:01:21.488768 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:01:21.488778 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:01:21.488788 | orchestrator | 2025-05-31 21:01:21.488797 | orchestrator | TASK [include_role : masakari] ************************************************* 2025-05-31 21:01:21.488807 | orchestrator | Saturday 31 May 2025 20:58:24 +0000 (0:00:01.530) 0:03:27.316 ********** 2025-05-31 21:01:21.488817 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:01:21.488826 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:01:21.488836 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:01:21.488845 | orchestrator | 2025-05-31 21:01:21.488855 | orchestrator | TASK [include_role : memcached] ************************************************ 2025-05-31 21:01:21.488931 | orchestrator | Saturday 31 May 2025 20:58:24 +0000 (0:00:00.358) 0:03:27.674 ********** 2025-05-31 21:01:21.488942 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 21:01:21.488951 | orchestrator | 2025-05-31 21:01:21.488961 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2025-05-31 21:01:21.488972 | orchestrator | Saturday 31 May 2025 20:58:25 +0000 (0:00:01.166) 0:03:28.840 ********** 2025-05-31 21:01:21.488983 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-05-31 21:01:21.488999 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
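Note: the mariadb ProxySQL users task above reported changed on all three nodes (the rules task was skipped), meaning the per-service user definitions were written out. For orientation, ProxySQL expresses users in its native proxysql.cnf syntax roughly as below; kolla-ansible actually templates per-service snippets that ProxySQL loads, so treat the file layout and field selection as assumptions rather than the generated output.

    mysql_users =
    (
        {
            username = "monitor"        # the monitoring user visible in the environment above
            password = "..."            # the generated secret from the log, not repeated here
            default_hostgroup = 0       # hostgroup assignment is an assumption
            active = 1
        }
    )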
'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-05-31 21:01:21.489082 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-05-31 21:01:21.489098 | orchestrator | 2025-05-31 21:01:21.489108 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2025-05-31 21:01:21.489118 | orchestrator | Saturday 31 May 2025 20:58:27 +0000 (0:00:01.736) 0:03:30.577 ********** 2025-05-31 21:01:21.489128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-05-31 21:01:21.489138 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-05-31 21:01:21.489155 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:01:21.489165 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:01:21.489175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
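A reading aid for this output as a whole: the haproxy-config and proxysql-config tasks loop over a service map with Ansible's dict2items filter, which is why every result line carries an (item={'key': ..., 'value': ...}) pair and why a single task can report changed for one service and skipping for another on the same host. A minimal sketch of the pattern, with illustrative names rather than the role's real task file:

    - name: Copy per-service haproxy config        # illustrative task, not the actual role source
      ansible.builtin.template:
        src: haproxy.cfg.j2                        # hypothetical template name
        dest: /etc/kolla/haproxy/services.d/{{ item.key }}.cfg
      loop: "{{ service_map | dict2items }}"       # service_map stands in for the real variable
      when: item.value.enabled | bool              # disabled entries produce the 'skipping' lines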
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-05-31 21:01:21.489185 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:01:21.489193 | orchestrator | 2025-05-31 21:01:21.489201 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2025-05-31 21:01:21.489209 | orchestrator | Saturday 31 May 2025 20:58:27 +0000 (0:00:00.404) 0:03:30.981 ********** 2025-05-31 21:01:21.489222 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-05-31 21:01:21.489232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-05-31 21:01:21.489240 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:01:21.489248 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:01:21.489308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-05-31 21:01:21.489320 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:01:21.489328 | orchestrator | 2025-05-31 21:01:21.489335 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2025-05-31 21:01:21.489343 | orchestrator | Saturday 31 May 2025 20:58:28 +0000 (0:00:00.588) 0:03:31.569 ********** 2025-05-31 21:01:21.489352 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:01:21.489359 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:01:21.489367 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:01:21.489375 | orchestrator | 2025-05-31 21:01:21.489383 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2025-05-31 21:01:21.489390 | orchestrator | Saturday 31 May 2025 20:58:29 +0000 (0:00:00.772) 0:03:32.341 ********** 2025-05-31 21:01:21.489399 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:01:21.489406 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:01:21.489414 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:01:21.489423 | orchestrator | 2025-05-31 21:01:21.489431 | orchestrator | TASK [include_role : mistral] ************************************************** 2025-05-31 21:01:21.489439 | orchestrator | Saturday 31 May 2025 20:58:30 +0000 (0:00:01.299) 0:03:33.641 ********** 2025-05-31 21:01:21.489455 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:01:21.489463 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:01:21.489471 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:01:21.489478 | orchestrator | 2025-05-31 21:01:21.489486 | orchestrator | TASK [include_role : neutron] ************************************************** 2025-05-31 
21:01:21.489494 | orchestrator | Saturday 31 May 2025 20:58:30 +0000 (0:00:00.333) 0:03:33.975 ********** 2025-05-31 21:01:21.489502 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 21:01:21.489510 | orchestrator | 2025-05-31 21:01:21.489517 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2025-05-31 21:01:21.489525 | orchestrator | Saturday 31 May 2025 20:58:32 +0000 (0:00:01.612) 0:03:35.587 ********** 2025-05-31 21:01:21.489534 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-31 21:01:21.489543 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.489555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.489616 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
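Note: the neutron-server entry carries two haproxy sub-entries, an internal one and an external one tied to external_fqdn api.testbed.osism.xyz, both HTTP on port 9696. Sketched as HAProxy configuration (bind addresses are placeholders; the member lines reuse the check parameters seen elsewhere in this log, since neutron defines no custom_member_list and the role fills in the group hosts):

    listen neutron_server
        mode http
        bind <internal_vip>:9696        # placeholder
        server testbed-node-0 192.168.16.10:9696 check inter 2000 rise 2 fall 5
        server testbed-node-1 192.168.16.11:9696 check inter 2000 rise 2 fall 5
        server testbed-node-2 192.168.16.12:9696 check inter 2000 rise 2 fall 5

    listen neutron_server_external
        mode http
        bind <external_vip>:9696        # placeholder; api.testbed.osism.xyz is expected to resolve here
        # same member lines as the internal frontend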
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.489634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-31 21:01:21.489643 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.489652 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-31 21:01:21.489661 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-31 21:01:21.489673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.489731 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': 
{'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-31 21:01:21.489749 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-31 21:01:21.489757 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.489766 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.489774 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', 
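Note: the healthcheck dicts attached to each service map one-to-one onto Docker container health checks, with the numbers interpreted as seconds. The neutron-server check, for example, corresponds to the following compose-style definition (notation chosen for readability; kolla passes these values through its own container module rather than a compose file):

    healthcheck:
      test: ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:9696"]
      interval: 30s
      timeout: 30s
      retries: 3
      start_period: 5s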
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.489789 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 21:01:21.489848 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.489880 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-31 21:01:21.489889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-31 21:01:21.489898 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.489906 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.489919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-31 21:01:21.489982 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-31 21:01:21.490000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-31 21:01:21.490009 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-31 21:01:21.490042 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.490054 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.490067 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-31 21:01:21.490129 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-31 21:01:21.490148 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.490157 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.490165 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.490173 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 21:01:21.490185 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.490251 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-31 21:01:21.490264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-31 21:01:21.490272 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.490281 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.490289 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-31 21:01:21.490302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 
'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-31 21:01:21.490366 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-31 21:01:21.490379 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-31 21:01:21.490387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.490395 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.490404 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-31 21:01:21.490416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.490430 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 21:01:21.490488 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-31 21:01:21.490500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.490508 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 
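Note: the test commands reference helper scripts shipped inside kolla images rather than external tools: healthcheck_curl probes an HTTP endpoint, healthcheck_port checks that the named process has a connection on the given port, and healthcheck_listen checks that it is listening on one. The neutron-tls-proxy check above, for instance, behaves approximately like curl -sf -u openstack:password http://192.168.16.12:9697 with the output discarded; the real script wraps curl with additional options, so take the one-liner as a conceptual equivalent only.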
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-31 21:01:21.490517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-31 21:01:21.490529 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.490542 | orchestrator | 2025-05-31 21:01:21.490551 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2025-05-31 21:01:21.490559 | orchestrator | Saturday 31 May 2025 20:58:36 +0000 (0:00:04.243) 0:03:39.831 ********** 2025-05-31 21:01:21.490618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-31 21:01:21.490630 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.490638 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.490646 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.490658 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-31 21:01:21.490675 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.490734 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-31 
21:01:21.490747 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-31 21:01:21.490755 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-31 21:01:21.490763 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.490771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.490788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-31 21:01:21.490847 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.490871 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.490881 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.490890 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 21:01:21.490899 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-31 21:01:21.490916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-31 21:01:21.490979 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.490991 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.491000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-31 21:01:21.491008 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  
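
[annotation] The long run of "skipping" lines here is the haproxy-config role walking the neutron service map (the dicts echoed with each item) and rejecting every entry, since this deployment does not use a single external frontend. A rough sketch of that per-item decision follows; this is illustrative Python, not the role's actual code, and the toggle name is an assumption for illustration:

    # Rough sketch only (assumed names, not kolla-ansible's real conditionals):
    # each echoed service dict is tested against the task's 'when' clause and,
    # with the single-external-frontend mode off, every item is skipped.
    def should_configure(service: dict, single_external_frontend: bool) -> bool:
        """Return True when a frontend entry would be written for this service."""
        if not single_external_frontend:
            return False  # feature disabled -> every item reports "skipping"
        if str(service.get("enabled", False)).lower() in ("false", "no"):
            return False  # disabled services (e.g. neutron-tls-proxy) drop out too
        return "haproxy" in service  # only services that expose an API qualify

    neutron_server = {"enabled": True,
                      "haproxy": {"neutron_server": {"port": "9696"}}}
    print(should_configure(neutron_server, single_external_frontend=False))  # False
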
2025-05-31 21:01:21.491016 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-31 21:01:21.491034 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.491092 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-31 21:01:21.491104 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.491112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-31 21:01:21.491120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.491134 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.491146 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.491204 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-31 21:01:21.491216 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:01:21.491225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 
'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-31 21:01:21.491233 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.491242 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.491255 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 21:01:21.491267 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-31 21:01:21.491277 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-31 21:01:21.491335 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-31 21:01:21.491346 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.491354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.491368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-31 21:01:21.491380 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-31 21:01:21.491411 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-31 21:01:21.491421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.491429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.491437 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-31 21:01:21.491450 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:01:21.491458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-05-31 21:01:21.491466 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 
'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.491499 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-31 21:01:21.491509 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-31 21:01:21.491518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/neutron-ovn-vpn-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.491526 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:01:21.491541 | orchestrator | 2025-05-31 21:01:21.491549 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2025-05-31 21:01:21.491558 | orchestrator | Saturday 31 May 2025 20:58:38 +0000 (0:00:01.512) 0:03:41.344 ********** 2025-05-31 21:01:21.491566 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-05-31 
21:01:21.491575 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-05-31 21:01:21.491582 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:01:21.491590 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-05-31 21:01:21.491598 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-05-31 21:01:21.491606 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:01:21.491614 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-05-31 21:01:21.491622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-05-31 21:01:21.491629 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:01:21.491637 | orchestrator | 2025-05-31 21:01:21.491645 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2025-05-31 21:01:21.491653 | orchestrator | Saturday 31 May 2025 20:58:40 +0000 (0:00:02.097) 0:03:43.441 ********** 2025-05-31 21:01:21.491661 | orchestrator | changed: [testbed-node-0] 2025-05-31 21:01:21.491669 | orchestrator | changed: [testbed-node-1] 2025-05-31 21:01:21.491679 | orchestrator | changed: [testbed-node-2] 2025-05-31 21:01:21.491687 | orchestrator | 2025-05-31 21:01:21.491695 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2025-05-31 21:01:21.491703 | orchestrator | Saturday 31 May 2025 20:58:41 +0000 (0:00:01.241) 0:03:44.683 ********** 2025-05-31 21:01:21.491710 | orchestrator | changed: [testbed-node-0] 2025-05-31 21:01:21.491718 | orchestrator | changed: [testbed-node-1] 2025-05-31 21:01:21.491726 | orchestrator | changed: [testbed-node-2] 2025-05-31 21:01:21.491733 | orchestrator | 2025-05-31 21:01:21.491741 | orchestrator | TASK [include_role : placement] ************************************************ 2025-05-31 21:01:21.491749 | orchestrator | Saturday 31 May 2025 20:58:43 +0000 (0:00:02.191) 0:03:46.874 ********** 2025-05-31 21:01:21.491757 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 21:01:21.491764 | orchestrator | 2025-05-31 21:01:21.491772 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2025-05-31 21:01:21.491779 | orchestrator | Saturday 31 May 2025 20:58:45 +0000 (0:00:01.257) 0:03:48.133 ********** 2025-05-31 21:01:21.491809 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-31 21:01:21.491824 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-31 21:01:21.491833 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-31 21:01:21.491841 | orchestrator | 2025-05-31 21:01:21.491849 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2025-05-31 21:01:21.491901 | orchestrator | Saturday 31 May 2025 20:58:48 +0000 (0:00:03.639) 0:03:51.773 ********** 2025-05-31 21:01:21.491916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 
'tls_backend': 'no'}}}})  2025-05-31 21:01:21.491924 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:01:21.491958 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-31 21:01:21.491974 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:01:21.491983 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-31 21:01:21.491991 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:01:21.491999 | orchestrator | 2025-05-31 21:01:21.492006 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2025-05-31 21:01:21.492015 | orchestrator | Saturday 31 May 2025 20:58:49 +0000 (0:00:00.534) 0:03:52.307 ********** 2025-05-31 21:01:21.492023 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-31 21:01:21.492031 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-31 21:01:21.492040 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:01:21.492048 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-31 21:01:21.492056 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-31 21:01:21.492064 | orchestrator | skipping: [testbed-node-1] 
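
[annotation] The 'healthcheck' blocks that recur in these service dicts follow Docker's native container healthcheck model; kolla-ansible hands the numeric fields to Docker as seconds, and healthcheck_curl / healthcheck_port are probe helpers shipped inside the kolla images. A minimal sketch of that mapping, under those assumptions (illustrative Python, not the deployment code):

    # Illustrative only: translate a kolla 'healthcheck' dict like the ones
    # echoed above into docker-run style flags. Numeric fields are seconds.
    def to_docker_flags(hc: dict) -> list[str]:
        return [
            f"--health-cmd={' '.join(hc['test'][1:])}",  # drop the CMD-SHELL marker
            f"--health-interval={hc['interval']}s",
            f"--health-retries={hc['retries']}",
            f"--health-start-period={hc['start_period']}s",
            f"--health-timeout={hc['timeout']}s",
        ]

    hc = {"interval": "30", "retries": "3", "start_period": "5",
          "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:8780"],
          "timeout": "30"}
    print(" ".join(to_docker_flags(hc)))

Applied to the placement-api entry above, this yields --health-cmd=healthcheck_curl http://192.168.16.10:8780 --health-interval=30s --health-retries=3 --health-start-period=5s --health-timeout=30s, which is roughly the probe configuration the container ends up with.
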
2025-05-31 21:01:21.492072 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-31 21:01:21.492080 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-31 21:01:21.492088 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:01:21.492096 | orchestrator | 2025-05-31 21:01:21.492104 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2025-05-31 21:01:21.492112 | orchestrator | Saturday 31 May 2025 20:58:50 +0000 (0:00:00.777) 0:03:53.085 ********** 2025-05-31 21:01:21.492123 | orchestrator | changed: [testbed-node-0] 2025-05-31 21:01:21.492131 | orchestrator | changed: [testbed-node-2] 2025-05-31 21:01:21.492139 | orchestrator | changed: [testbed-node-1] 2025-05-31 21:01:21.492147 | orchestrator | 2025-05-31 21:01:21.492155 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2025-05-31 21:01:21.492162 | orchestrator | Saturday 31 May 2025 20:58:51 +0000 (0:00:01.744) 0:03:54.829 ********** 2025-05-31 21:01:21.492170 | orchestrator | changed: [testbed-node-0] 2025-05-31 21:01:21.492178 | orchestrator | changed: [testbed-node-1] 2025-05-31 21:01:21.492185 | orchestrator | changed: [testbed-node-2] 2025-05-31 21:01:21.492193 | orchestrator | 2025-05-31 21:01:21.492206 | orchestrator | TASK [include_role : nova] ***************************************************** 2025-05-31 21:01:21.492213 | orchestrator | Saturday 31 May 2025 20:58:54 +0000 (0:00:02.277) 0:03:57.107 ********** 2025-05-31 21:01:21.492221 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 21:01:21.492229 | orchestrator | 2025-05-31 21:01:21.492236 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2025-05-31 21:01:21.492244 | orchestrator | Saturday 31 May 2025 20:58:55 +0000 (0:00:01.377) 0:03:58.485 ********** 2025-05-31 21:01:21.492277 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-31 21:01:21.492289 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.492297 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.492310 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-31 21:01:21.492347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.492356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.492365 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-31 21:01:21.492374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.492382 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.492390 | orchestrator | 2025-05-31 21:01:21.492398 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2025-05-31 21:01:21.492411 | orchestrator | Saturday 31 May 2025 20:59:00 +0000 (0:00:04.958) 0:04:03.444 ********** 2025-05-31 21:01:21.492439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-31 21:01:21.492447 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.492454 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.492461 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:01:21.492482 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-31 21:01:21.492493 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.492505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.492512 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:01:21.492540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-31 21:01:21.492550 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.492557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
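
The "single external frontend" tasks re-walk the same service items and skip every one of them, which is consistent with the deployment-wide single-frontend toggle being off in this testbed, so each service keeps its own dedicated external frontend on its own port. A sketch of that gate (the flag name and dispatch rule here are assumptions for illustration):

    # Hypothetical toggle; with it off, the task skips every item, matching
    # the "skipping" output above for all three nodes.
    SINGLE_EXTERNAL_FRONTEND = False  # assumed disabled in this run

    def single_frontend_rule(service: str, fqdn: str) -> str | None:
        if not SINGLE_EXTERNAL_FRONTEND:
            return None  # -> "skipping"
        # A shared frontend would dispatch on the requested host name
        # instead of binding one public port per service.
        return f"use_backend {service}_back if {{ hdr(host) -i {fqdn} }}"

    for name in ("nova-api", "nova-metadata"):
        rule = single_frontend_rule(name, "api.testbed.osism.xyz")
        print(name, "->", rule or "skipping")
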
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.492564 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:01:21.492571 | orchestrator | 2025-05-31 21:01:21.492577 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2025-05-31 21:01:21.492584 | orchestrator | Saturday 31 May 2025 20:59:01 +0000 (0:00:01.023) 0:04:04.467 ********** 2025-05-31 21:01:21.492591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-31 21:01:21.492602 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-31 21:01:21.492610 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-31 21:01:21.492620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-31 21:01:21.492627 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:01:21.492634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-31 21:01:21.492640 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-31 21:01:21.492666 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-31 21:01:21.492673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-31 21:01:21.492680 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-31 21:01:21.492687 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:01:21.492693 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-31 21:01:21.492700 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-31 21:01:21.492707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-31 21:01:21.492714 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:01:21.492720 | orchestrator | 2025-05-31 21:01:21.492727 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2025-05-31 21:01:21.492733 | orchestrator | Saturday 31 May 2025 20:59:02 +0000 (0:00:00.876) 0:04:05.344 ********** 2025-05-31 21:01:21.492740 | orchestrator | changed: [testbed-node-0] 2025-05-31 21:01:21.492746 | orchestrator | changed: [testbed-node-1] 2025-05-31 21:01:21.492753 | orchestrator | changed: [testbed-node-2] 2025-05-31 21:01:21.492759 | orchestrator | 2025-05-31 21:01:21.492766 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2025-05-31 21:01:21.492773 | orchestrator | Saturday 31 May 2025 20:59:03 +0000 (0:00:01.597) 0:04:06.942 ********** 2025-05-31 21:01:21.492779 | orchestrator | changed: [testbed-node-0] 2025-05-31 21:01:21.492786 | orchestrator | changed: [testbed-node-1] 2025-05-31 21:01:21.492792 | orchestrator | changed: [testbed-node-2] 2025-05-31 21:01:21.492798 | orchestrator | 2025-05-31 21:01:21.492805 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2025-05-31 21:01:21.492819 | orchestrator | Saturday 31 May 2025 20:59:06 +0000 (0:00:02.119) 0:04:09.062 ********** 2025-05-31 21:01:21.492826 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 21:01:21.492833 | orchestrator | 2025-05-31 21:01:21.492839 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2025-05-31 21:01:21.492846 | orchestrator | Saturday 31 May 2025 20:59:07 +0000 (0:00:01.601) 0:04:10.663 ********** 2025-05-31 21:01:21.492852 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2025-05-31 21:01:21.492928 | orchestrator | 2025-05-31 21:01:21.492936 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2025-05-31 21:01:21.492943 | orchestrator | Saturday 31 May 2025 20:59:08 +0000 (0:00:01.149) 0:04:11.813 ********** 2025-05-31 21:01:21.492950 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-05-31 21:01:21.492961 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-05-31 21:01:21.492968 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': 
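
The two ProxySQL tasks above drop a per-service users file and rules file on every controller, so ProxySQL can authenticate the service's database account and route its sessions to the right backend hostgroup. The exact schema is kolla-ansible internal; the fragment below only illustrates the shape of such a per-service snippet, with assumed field names:

    # Illustrative per-service ProxySQL fragment; field names are assumed,
    # not the exact kolla-ansible schema.
    import json

    def proxysql_users_fragment(service: str, username: str,
                                hostgroup: int) -> dict:
        return {"users": [{
            "username": username,            # DB account used by the service
            "backend_hostgroup": hostgroup,  # Galera hostgroup to route to
            "comment": service,
        }]}

    print(json.dumps(proxysql_users_fragment("nova", "nova", 0), indent=2))
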
True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-05-31 21:01:21.492975 | orchestrator | 2025-05-31 21:01:21.493003 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2025-05-31 21:01:21.493011 | orchestrator | Saturday 31 May 2025 20:59:12 +0000 (0:00:03.945) 0:04:15.758 ********** 2025-05-31 21:01:21.493018 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-31 21:01:21.493025 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:01:21.493032 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-31 21:01:21.493039 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:01:21.493051 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-31 21:01:21.493058 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:01:21.493064 | orchestrator | 2025-05-31 21:01:21.493071 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2025-05-31 21:01:21.493077 | orchestrator | Saturday 31 May 2025 20:59:14 +0000 (0:00:01.331) 0:04:17.090 ********** 2025-05-31 21:01:21.493084 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-31 21:01:21.493091 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-31 21:01:21.493098 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:01:21.493105 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-31 21:01:21.493112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-31 21:01:21.493119 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:01:21.493129 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-31 21:01:21.493136 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-31 21:01:21.493143 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:01:21.493149 | orchestrator | 2025-05-31 21:01:21.493156 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-05-31 21:01:21.493162 | orchestrator | Saturday 31 May 2025 20:59:16 +0000 (0:00:02.076) 0:04:19.166 ********** 2025-05-31 21:01:21.493169 | orchestrator | changed: [testbed-node-0] 2025-05-31 21:01:21.493175 | orchestrator | changed: [testbed-node-1] 2025-05-31 21:01:21.493182 | orchestrator | changed: [testbed-node-2] 2025-05-31 21:01:21.493188 | orchestrator | 2025-05-31 21:01:21.493195 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-05-31 21:01:21.493201 | orchestrator | Saturday 31 May 2025 20:59:18 +0000 (0:00:02.378) 0:04:21.545 ********** 2025-05-31 21:01:21.493208 | orchestrator | changed: [testbed-node-0] 2025-05-31 21:01:21.493215 | orchestrator | changed: [testbed-node-1] 2025-05-31 21:01:21.493221 | orchestrator | changed: [testbed-node-2] 2025-05-31 21:01:21.493228 | orchestrator | 2025-05-31 21:01:21.493253 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2025-05-31 21:01:21.493261 | orchestrator | Saturday 31 May 2025 20:59:21 +0000 (0:00:02.982) 0:04:24.527 ********** 2025-05-31 21:01:21.493268 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2025-05-31 21:01:21.493275 | orchestrator | 2025-05-31 21:01:21.493281 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-05-31 21:01:21.493293 | orchestrator | Saturday 31 May 2025 20:59:22 +0000 (0:00:00.863) 0:04:25.391 ********** 2025-05-31 21:01:21.493300 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-31 21:01:21.493307 | orchestrator | 
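
nova-cell pulls in cell_proxy_loadbalancer.yml once per console proxy, and only the proxy that is actually enabled produces haproxy config: noVNC is "changed" here while the SPICE and serial proxies are "skipping" ('enabled': False). The 'timeout tunnel' extras keep long-lived console websocket connections open well past the normal HTTP timeouts. An illustrative loop over the values seen in the item dicts (not the role's actual tasks):

    # Per-console-proxy loop modeled on the log above; enabled flags, ports
    # and tunnel timeouts are copied from the items, structure illustrative.
    console_proxies = {
        "nova-novncproxy":      {"enabled": True,  "port": "6080",
                                 "extra": "timeout tunnel 1h"},
        "nova-spicehtml5proxy": {"enabled": False, "port": "6082",
                                 "extra": "timeout tunnel 1h"},
        "nova-serialproxy":     {"enabled": False, "port": "6083",
                                 "extra": "timeout tunnel 10m"},
    }

    for name, proxy in console_proxies.items():
        if not proxy["enabled"]:
            print("skipping:", name)
            continue
        # The tunnel timeout applies to the proxied console websocket.
        print(f"changed: {name} -> listen :{proxy['port']} ({proxy['extra']})")
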
skipping: [testbed-node-0] 2025-05-31 21:01:21.493314 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-31 21:01:21.493321 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:01:21.493327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-31 21:01:21.493334 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:01:21.493341 | orchestrator | 2025-05-31 21:01:21.493347 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2025-05-31 21:01:21.493354 | orchestrator | Saturday 31 May 2025 20:59:23 +0000 (0:00:01.343) 0:04:26.734 ********** 2025-05-31 21:01:21.493360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-31 21:01:21.493367 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:01:21.493377 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-31 21:01:21.493384 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:01:21.493391 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-31 21:01:21.493402 | orchestrator | skipping: [testbed-node-2] 2025-05-31 
21:01:21.493408 | orchestrator | 2025-05-31 21:01:21.493434 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2025-05-31 21:01:21.493442 | orchestrator | Saturday 31 May 2025 20:59:25 +0000 (0:00:01.700) 0:04:28.435 ********** 2025-05-31 21:01:21.493448 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:01:21.493455 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:01:21.493461 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:01:21.493468 | orchestrator | 2025-05-31 21:01:21.493475 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-05-31 21:01:21.493481 | orchestrator | Saturday 31 May 2025 20:59:26 +0000 (0:00:01.247) 0:04:29.682 ********** 2025-05-31 21:01:21.493488 | orchestrator | ok: [testbed-node-0] 2025-05-31 21:01:21.493494 | orchestrator | ok: [testbed-node-1] 2025-05-31 21:01:21.493501 | orchestrator | ok: [testbed-node-2] 2025-05-31 21:01:21.493507 | orchestrator | 2025-05-31 21:01:21.493513 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-05-31 21:01:21.493520 | orchestrator | Saturday 31 May 2025 20:59:28 +0000 (0:00:02.269) 0:04:31.952 ********** 2025-05-31 21:01:21.493526 | orchestrator | ok: [testbed-node-0] 2025-05-31 21:01:21.493533 | orchestrator | ok: [testbed-node-1] 2025-05-31 21:01:21.493539 | orchestrator | ok: [testbed-node-2] 2025-05-31 21:01:21.493546 | orchestrator | 2025-05-31 21:01:21.493553 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2025-05-31 21:01:21.493559 | orchestrator | Saturday 31 May 2025 20:59:32 +0000 (0:00:03.046) 0:04:34.998 ********** 2025-05-31 21:01:21.493566 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2025-05-31 21:01:21.493572 | orchestrator | 2025-05-31 21:01:21.493579 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2025-05-31 21:01:21.493585 | orchestrator | Saturday 31 May 2025 20:59:33 +0000 (0:00:01.075) 0:04:36.073 ********** 2025-05-31 21:01:21.493592 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-31 21:01:21.493599 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:01:21.493606 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-31 21:01:21.493613 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:01:21.493619 | orchestrator | skipping: 
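
Each TASK banner above is followed by a profile_tasks-style stamp of the form "<wall clock> (<duration of the previous task>) <cumulative playbook runtime>"; for example "(0:00:01.307) 0:04:38.432" means the previous task took about 1.3 s and the play had been running for 4 m 38 s. A small parser for that shape, assuming exactly the format shown in these lines:

    # Parse the "(prev-task-duration) cumulative-runtime" stamp printed
    # after each task header in this log.
    import re

    STAMP = re.compile(
        r"\((?P<prev>\d+:\d{2}:\d{2}\.\d+)\)\s+(?P<total>\d+:\d{2}:\d{2}\.\d+)")

    line = "Saturday 31 May 2025 20:59:35 +0000 (0:00:01.307) 0:04:38.432"
    m = STAMP.search(line)
    if m:
        print("previous task:", m.group("prev"))     # 0:00:01.307
        print("runtime so far:", m.group("total"))   # 0:04:38.432
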
[testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-31 21:01:21.493626 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:01:21.493632 | orchestrator | 2025-05-31 21:01:21.493643 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2025-05-31 21:01:21.493650 | orchestrator | Saturday 31 May 2025 20:59:34 +0000 (0:00:01.050) 0:04:37.124 ********** 2025-05-31 21:01:21.493661 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-31 21:01:21.493668 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:01:21.493693 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-31 21:01:21.493701 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:01:21.493708 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-31 21:01:21.493715 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:01:21.493721 | orchestrator | 2025-05-31 21:01:21.493728 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2025-05-31 21:01:21.493734 | orchestrator | Saturday 31 May 2025 20:59:35 +0000 (0:00:01.307) 0:04:38.432 ********** 2025-05-31 21:01:21.493741 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:01:21.493748 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:01:21.493754 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:01:21.493760 | orchestrator | 2025-05-31 21:01:21.493767 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-05-31 21:01:21.493774 | orchestrator | Saturday 31 May 2025 
20:59:37 +0000 (0:00:01.841) 0:04:40.273 ********** 2025-05-31 21:01:21.493781 | orchestrator | ok: [testbed-node-0] 2025-05-31 21:01:21.493787 | orchestrator | ok: [testbed-node-1] 2025-05-31 21:01:21.493794 | orchestrator | ok: [testbed-node-2] 2025-05-31 21:01:21.493801 | orchestrator | 2025-05-31 21:01:21.493807 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-05-31 21:01:21.493814 | orchestrator | Saturday 31 May 2025 20:59:39 +0000 (0:00:02.182) 0:04:42.456 ********** 2025-05-31 21:01:21.493820 | orchestrator | ok: [testbed-node-1] 2025-05-31 21:01:21.493827 | orchestrator | ok: [testbed-node-0] 2025-05-31 21:01:21.493833 | orchestrator | ok: [testbed-node-2] 2025-05-31 21:01:21.493840 | orchestrator | 2025-05-31 21:01:21.493846 | orchestrator | TASK [include_role : octavia] ************************************************** 2025-05-31 21:01:21.493853 | orchestrator | Saturday 31 May 2025 20:59:42 +0000 (0:00:03.164) 0:04:45.620 ********** 2025-05-31 21:01:21.493900 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 21:01:21.493907 | orchestrator | 2025-05-31 21:01:21.493914 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2025-05-31 21:01:21.493921 | orchestrator | Saturday 31 May 2025 20:59:43 +0000 (0:00:01.267) 0:04:46.888 ********** 2025-05-31 21:01:21.493928 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-31 21:01:21.493944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-31 21:01:21.493952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 
'timeout': '30'}}})  2025-05-31 21:01:21.493981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-31 21:01:21.493989 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.493996 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-31 21:01:21.494009 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-31 21:01:21.494040 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-31 21:01:21.494048 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 
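
The healthcheck dicts in these items carry interval/retries/start_period/timeout plus a CMD-SHELL test: 'healthcheck_curl URL' probes an HTTP endpoint, while 'healthcheck_port PROC PORT' checks that the named process holds a connection on that port (5672 pointing at RabbitMQ, 3306 at MySQL). How kolla wires these into the container runtime is internal; the translation below into docker-run-style flags is only illustrative:

    # Turn a healthcheck dict from the log into docker-run-style flags.
    # The mapping is illustrative; kolla-ansible's own plumbing may differ.
    def to_docker_flags(hc: dict) -> list[str]:
        return [
            f"--health-interval={hc['interval']}s",
            f"--health-retries={hc['retries']}",
            f"--health-start-period={hc['start_period']}s",
            f"--health-timeout={hc['timeout']}s",
            "--health-cmd=" + " ".join(hc["test"][1:]),  # strip CMD-SHELL
        ]

    hc = {"interval": "30", "retries": "3", "start_period": "5",
          "test": ["CMD-SHELL", "healthcheck_port octavia-worker 5672"],
          "timeout": "30"}
    print("\n".join(to_docker_flags(hc)))
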
'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-31 21:01:21.494075 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.494084 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-31 21:01:21.494091 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-31 21:01:21.494104 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-31 21:01:21.494111 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-31 21:01:21.494122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.494129 | orchestrator | 2025-05-31 21:01:21.494136 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2025-05-31 21:01:21.494142 | orchestrator | Saturday 31 May 2025 20:59:47 +0000 (0:00:03.704) 0:04:50.593 ********** 2025-05-31 21:01:21.494169 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-31 21:01:21.494177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-31 21:01:21.494184 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-31 21:01:21.494198 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-31 21:01:21.494205 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.494212 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:01:21.494223 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-31 21:01:21.494247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-31 21:01:21.494256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-31 21:01:21.494263 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 
'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-31 21:01:21.494274 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.494281 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:01:21.494288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-31 21:01:21.494298 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-31 21:01:21.494323 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-31 21:01:21.494331 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 
'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-31 21:01:21.494338 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-31 21:01:21.494349 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:01:21.494356 | orchestrator | 2025-05-31 21:01:21.494362 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2025-05-31 21:01:21.494369 | orchestrator | Saturday 31 May 2025 20:59:48 +0000 (0:00:00.710) 0:04:51.303 ********** 2025-05-31 21:01:21.494376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-31 21:01:21.494383 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-31 21:01:21.494389 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:01:21.494396 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-31 21:01:21.494403 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-31 21:01:21.494409 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:01:21.494416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-31 21:01:21.494423 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-31 21:01:21.494429 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:01:21.494435 | orchestrator | 2025-05-31 21:01:21.494441 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2025-05-31 21:01:21.494448 | orchestrator | Saturday 31 May 2025 20:59:49 +0000 (0:00:00.900) 0:04:52.203 ********** 2025-05-31 21:01:21.494454 | orchestrator | changed: [testbed-node-0] 2025-05-31 
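
Every "Configuring firewall for <service>" task in this run skips on all three nodes, consistent with kolla-ansible not managing the host firewall in this testbed; the per-service entries would otherwise open the listed API ports. A hypothetical gate showing the effect (the real toggle name may differ):

    # Assumed firewall gate: when management is off, every item skips,
    # matching the log output for nova, octavia and the other services.
    FIREWALL_MANAGED = False

    def firewall_rules(entries: dict) -> list[str]:
        if not FIREWALL_MANAGED:
            return []  # -> "skipping" for every item
        return [f"permit tcp dport {e['port']}"
                for e in entries.values() if e.get("external")]

    rules = firewall_rules({
        "octavia_api_external": {"external": True, "port": "9876"},
        "octavia_api":          {"external": False, "port": "9876"},
    })
    print(rules if rules else "skipping")
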
21:01:21.494460 | orchestrator | changed: [testbed-node-1] 2025-05-31 21:01:21.494469 | orchestrator | changed: [testbed-node-2] 2025-05-31 21:01:21.494475 | orchestrator | 2025-05-31 21:01:21.494481 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2025-05-31 21:01:21.494487 | orchestrator | Saturday 31 May 2025 20:59:51 +0000 (0:00:01.790) 0:04:53.994 ********** 2025-05-31 21:01:21.494493 | orchestrator | changed: [testbed-node-0] 2025-05-31 21:01:21.494499 | orchestrator | changed: [testbed-node-1] 2025-05-31 21:01:21.494505 | orchestrator | changed: [testbed-node-2] 2025-05-31 21:01:21.494511 | orchestrator | 2025-05-31 21:01:21.494517 | orchestrator | TASK [include_role : opensearch] *********************************************** 2025-05-31 21:01:21.494523 | orchestrator | Saturday 31 May 2025 20:59:53 +0000 (0:00:02.118) 0:04:56.112 ********** 2025-05-31 21:01:21.494529 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 21:01:21.494535 | orchestrator | 2025-05-31 21:01:21.494542 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2025-05-31 21:01:21.494548 | orchestrator | Saturday 31 May 2025 20:59:54 +0000 (0:00:01.334) 0:04:57.447 ********** 2025-05-31 21:01:21.494572 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-31 21:01:21.494583 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-31 21:01:21.494590 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-31 21:01:21.494600 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-31 21:01:21.494624 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-31 21:01:21.494636 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 
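
Each haproxy entry is either internal ('external': False, bound on the internal VIP) or external ('external': True, served under external_fqdn on the public VIP): opensearch itself stays internal-only on 9200, while the dashboards are published both ways on 5601 behind HTTP basic auth (the plaintext auth_pass 'password' is tolerable only because this is a disposable testbed). The VIP addresses in the sketch are placeholders, not values from this log:

    # Choose a bind address per haproxy entry; VIPs below are stand-ins.
    INTERNAL_VIP = "192.168.16.254"   # assumed internal VIP (placeholder)
    EXTERNAL_VIP = "203.0.113.10"     # documentation-range public VIP

    def bind_line(entry: dict) -> str:
        vip = EXTERNAL_VIP if entry.get("external") else INTERNAL_VIP
        return f"bind {vip}:{entry['listen_port']}"

    print(bind_line({"external": False, "listen_port": "9200"}))  # opensearch
    print(bind_line({"external": True, "listen_port": "5601",
                     "external_fqdn": "api.testbed.osism.xyz"}))  # dashboards
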
'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-31 21:01:21.494643 | orchestrator | 2025-05-31 21:01:21.494650 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-05-31 21:01:21.494656 | orchestrator | Saturday 31 May 2025 20:59:59 +0000 (0:00:05.533) 0:05:02.981 ********** 2025-05-31 21:01:21.494662 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-31 21:01:21.494672 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-31 21:01:21.494679 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:01:21.494702 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-31 21:01:21.494714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 
'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-31 21:01:21.494720 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:01:21.494727 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-31 21:01:21.494733 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-31 21:01:21.494743 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:01:21.494750 | orchestrator | 2025-05-31 21:01:21.494757 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2025-05-31 21:01:21.494767 | orchestrator | Saturday 31 May 2025 21:00:01 +0000 (0:00:01.068) 0:05:04.049 ********** 2025-05-31 21:01:21.494777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-05-31 21:01:21.494793 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 
'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-31 21:01:21.494803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-31 21:01:21.494837 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:01:21.494846 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-05-31 21:01:21.494853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-31 21:01:21.494874 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-31 21:01:21.494881 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:01:21.494888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-05-31 21:01:21.494894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-31 21:01:21.494900 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-31 21:01:21.494907 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:01:21.494913 | orchestrator | 2025-05-31 21:01:21.494919 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2025-05-31 21:01:21.494925 | orchestrator | Saturday 31 May 2025 21:00:01 +0000 (0:00:00.874) 0:05:04.923 ********** 2025-05-31 21:01:21.494931 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:01:21.494937 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:01:21.494943 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:01:21.494949 | orchestrator | 2025-05-31 21:01:21.494955 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2025-05-31 21:01:21.494961 | orchestrator | Saturday 31 May 2025 21:00:02 +0000 (0:00:00.458) 0:05:05.382 ********** 2025-05-31 21:01:21.494967 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:01:21.494974 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:01:21.494980 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:01:21.494986 | orchestrator | 2025-05-31 21:01:21.494992 | orchestrator | TASK [include_role : prometheus] *********************************************** 2025-05-31 21:01:21.494998 | orchestrator | Saturday 31 May 2025 21:00:03 +0000 (0:00:01.457) 0:05:06.839 ********** 
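The (item={'key': ..., 'value': ...}) structures in the task output above are the kolla-ansible service definitions that the haproxy-config role loops over: each entry's 'haproxy' sub-dict describes one frontend/backend pair (mode, port, internal vs. external VIP, optional extra directives such as 'option dontlog-normal'). As a rough illustration of what one such entry turns into — a minimal sketch only, since the actual role renders these through its Jinja2 templates; the helper name, the stanza layout, and the VIP address 192.168.16.254 are assumptions, while the port, mode, and per-node backend IPs are taken from the opensearch entry logged above:

# Illustrative sketch (NOT kolla-ansible's real template): turn a kolla-style
# haproxy service entry into an HAProxy "listen" stanza.
def render_listen(name, svc, vip, backends):
    # Frontend: bind the VIP on the service port with the configured mode.
    lines = [
        f"listen {name}",
        f"    mode {svc['mode']}",
        f"    bind {vip}:{svc['port']}",
    ]
    # Extra frontend directives, e.g. 'option dontlog-normal' for opensearch.
    lines.extend(f"    {extra}" for extra in svc.get("frontend_http_extra", []))
    # Backend: one server line per cluster node, health-checked.
    lines.extend(
        f"    server node-{i} {ip}:{svc['port']} check"
        for i, ip in enumerate(backends)
    )
    return "\n".join(lines)

# Values copied from the 'opensearch' item in the log above; the VIP is an
# assumed internal address, not shown in this log.
opensearch = {"enabled": True, "mode": "http", "external": False,
              "port": "9200", "frontend_http_extra": ["option dontlog-normal"]}
print(render_listen("opensearch", opensearch, "192.168.16.254",
                    ["192.168.16.10", "192.168.16.11", "192.168.16.12"]))

Run against the opensearch item, this prints a listen block binding 192.168.16.254:9200 and balancing across the three testbed nodes. Entries flagged 'external': True (such as opensearch_dashboards_external above) would instead bind an external VIP behind their external_fqdn (here api.testbed.osism.xyz), and entries with 'active_passive': True would presumably mark all but one backend as backup servers.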
2025-05-31 21:01:21.495004 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 21:01:21.495010 | orchestrator | 2025-05-31 21:01:21.495016 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2025-05-31 21:01:21.495022 | orchestrator | Saturday 31 May 2025 21:00:05 +0000 (0:00:01.720) 0:05:08.559 ********** 2025-05-31 21:01:21.495032 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-31 21:01:21.495045 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-31 21:01:21.495073 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 21:01:21.495081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 21:01:21.495088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-31 21:01:21.495095 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 
'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-31 21:01:21.495101 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-31 21:01:21.495114 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 21:01:21.495124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 21:01:21.495132 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-31 21:01:21.495157 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-31 21:01:21.495165 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-31 21:01:21.495172 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 21:01:21.495178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 21:01:21.495184 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-31 21:01:21.495198 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-31 21:01:21.495209 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 
'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-31 21:01:21.495216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 21:01:21.495222 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 21:01:21.495229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-31 21:01:21.495236 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-31 21:01:21.495249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 
'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-31 21:01:21.495261 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 21:01:21.495268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 21:01:21.495274 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-31 21:01:21.495281 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-31 21:01:21.495291 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 
'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-31 21:01:21.495301 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 21:01:21.495311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 21:01:21.495317 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-31 21:01:21.495324 | orchestrator | 2025-05-31 21:01:21.495330 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2025-05-31 21:01:21.495336 | orchestrator | Saturday 31 May 2025 21:00:09 +0000 (0:00:04.094) 0:05:12.654 ********** 2025-05-31 21:01:21.495343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-31 21:01:21.495349 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-31 21:01:21.495360 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 21:01:21.495366 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 21:01:21.495376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-31 21:01:21.495386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-31 21:01:21.495393 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-31 21:01:21.495400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 21:01:21.495410 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 21:01:21.495416 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-31 21:01:21.495422 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:01:21.495432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-31 21:01:21.495439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-31 21:01:21.495448 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 21:01:21.495455 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 21:01:21.495462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-31 21:01:21.495472 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-31 21:01:21.495479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-31 21:01:21.495488 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 
'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-31 21:01:21.495498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 21:01:21.495505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 21:01:21.495512 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-31 21:01:21.495522 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-31 21:01:21.495529 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 21:01:21.495540 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 
'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-31 21:01:21.495550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 21:01:21.495556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-31 21:01:21.495563 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-31 21:01:21.495576 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 21:01:21.495582 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:01:21.495589 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 
'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 21:01:21.495595 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-31 21:01:21.495602 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:01:21.495608 | orchestrator | 2025-05-31 21:01:21.495614 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2025-05-31 21:01:21.495620 | orchestrator | Saturday 31 May 2025 21:00:11 +0000 (0:00:01.614) 0:05:14.268 ********** 2025-05-31 21:01:21.495630 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-05-31 21:01:21.495636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-05-31 21:01:21.495643 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-31 21:01:21.495653 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-31 21:01:21.495660 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:01:21.495666 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-05-31 21:01:21.495673 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-05-31 21:01:21.495683 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-31 21:01:21.495690 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': 
'9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-31 21:01:21.495696 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:01:21.495702 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-05-31 21:01:21.495708 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-05-31 21:01:21.495715 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-31 21:01:21.495721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-31 21:01:21.495727 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:01:21.495734 | orchestrator | 2025-05-31 21:01:21.495740 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2025-05-31 21:01:21.495746 | orchestrator | Saturday 31 May 2025 21:00:12 +0000 (0:00:00.992) 0:05:15.260 ********** 2025-05-31 21:01:21.495752 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:01:21.495758 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:01:21.495764 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:01:21.495770 | orchestrator | 2025-05-31 21:01:21.495776 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2025-05-31 21:01:21.495782 | orchestrator | Saturday 31 May 2025 21:00:12 +0000 (0:00:00.458) 0:05:15.719 ********** 2025-05-31 21:01:21.495788 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:01:21.495795 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:01:21.495801 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:01:21.495807 | orchestrator | 2025-05-31 21:01:21.495813 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2025-05-31 21:01:21.495819 | orchestrator | Saturday 31 May 2025 21:00:14 +0000 (0:00:01.824) 0:05:17.544 ********** 2025-05-31 21:01:21.495825 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 21:01:21.495831 | orchestrator | 2025-05-31 21:01:21.495837 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2025-05-31 21:01:21.495843 | orchestrator | Saturday 31 May 2025 21:00:16 +0000 (0:00:01.847) 0:05:19.391 ********** 2025-05-31 21:01:21.495856 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-31 21:01:21.495880 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-31 21:01:21.495888 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-31 21:01:21.495895 | orchestrator | 2025-05-31 21:01:21.495901 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2025-05-31 21:01:21.495907 | orchestrator | Saturday 31 May 2025 21:00:19 +0000 (0:00:02.639) 0:05:22.031 ********** 2025-05-31 21:01:21.495913 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-05-31 21:01:21.495923 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:01:21.495933 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-05-31 21:01:21.495944 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:01:21.495951 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-05-31 21:01:21.495960 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:01:21.495967 | orchestrator | 2025-05-31 21:01:21.495973 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2025-05-31 21:01:21.495979 | orchestrator | Saturday 31 May 2025 21:00:19 +0000 (0:00:00.435) 0:05:22.466 ********** 2025-05-31 21:01:21.495986 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-05-31 21:01:21.495992 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:01:21.495998 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-05-31 21:01:21.496004 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:01:21.496010 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': 
'15672', 'host_group': 'rabbitmq'}})  2025-05-31 21:01:21.496016 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:01:21.496022 | orchestrator | 2025-05-31 21:01:21.496028 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2025-05-31 21:01:21.496034 | orchestrator | Saturday 31 May 2025 21:00:20 +0000 (0:00:00.995) 0:05:23.462 ********** 2025-05-31 21:01:21.496041 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:01:21.496047 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:01:21.496053 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:01:21.496058 | orchestrator | 2025-05-31 21:01:21.496064 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2025-05-31 21:01:21.496071 | orchestrator | Saturday 31 May 2025 21:00:20 +0000 (0:00:00.474) 0:05:23.937 ********** 2025-05-31 21:01:21.496077 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:01:21.496083 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:01:21.496089 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:01:21.496095 | orchestrator | 2025-05-31 21:01:21.496101 | orchestrator | TASK [include_role : skyline] ************************************************** 2025-05-31 21:01:21.496107 | orchestrator | Saturday 31 May 2025 21:00:22 +0000 (0:00:01.481) 0:05:25.418 ********** 2025-05-31 21:01:21.496113 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 21:01:21.496123 | orchestrator | 2025-05-31 21:01:21.496129 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2025-05-31 21:01:21.496135 | orchestrator | Saturday 31 May 2025 21:00:24 +0000 (0:00:01.823) 0:05:27.241 ********** 2025-05-31 21:01:21.496145 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-05-31 21:01:21.496156 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 
'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-05-31 21:01:21.496163 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-05-31 21:01:21.496169 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-05-31 21:01:21.496179 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-05-31 21:01:21.496193 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-05-31 21:01:21.496199 | orchestrator | 2025-05-31 21:01:21.496206 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2025-05-31 21:01:21.496212 | orchestrator | Saturday 31 May 2025 21:00:30 +0000 (0:00:06.466) 0:05:33.708 ********** 2025-05-31 21:01:21.496218 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-05-31 21:01:21.496225 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-05-31 21:01:21.496231 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:01:21.496241 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-05-31 21:01:21.496251 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-05-31 21:01:21.496257 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:01:21.496264 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-05-31 21:01:21.496270 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-05-31 21:01:21.496276 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:01:21.496283 | orchestrator | 2025-05-31 21:01:21.496289 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] *********************** 2025-05-31 21:01:21.496299 | orchestrator | Saturday 31 May 2025 21:00:31 +0000 (0:00:00.612) 0:05:34.321 ********** 2025-05-31 21:01:21.496318 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 
'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-05-31 21:01:21.496326 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-05-31 21:01:21.496332 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-05-31 21:01:21.496338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-05-31 21:01:21.496345 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:01:21.496356 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-05-31 21:01:21.496363 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-05-31 21:01:21.496369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-05-31 21:01:21.496375 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-05-31 21:01:21.496384 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})  2025-05-31 21:01:21.496392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-05-31 21:01:21.496398 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:01:21.496404 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-05-31 21:01:21.496411 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})  2025-05-31 21:01:21.496417 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:01:21.496423 | orchestrator | 2025-05-31 21:01:21.496429 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************ 2025-05-31 21:01:21.496435 | orchestrator | Saturday 31 May 2025 21:00:32 +0000 (0:00:01.574) 0:05:35.895 ********** 2025-05-31 21:01:21.496441 | orchestrator | changed: 
[testbed-node-0] 2025-05-31 21:01:21.496447 | orchestrator | changed: [testbed-node-1] 2025-05-31 21:01:21.496453 | orchestrator | changed: [testbed-node-2] 2025-05-31 21:01:21.496459 | orchestrator | 2025-05-31 21:01:21.496465 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************ 2025-05-31 21:01:21.496471 | orchestrator | Saturday 31 May 2025 21:00:34 +0000 (0:00:01.308) 0:05:37.204 ********** 2025-05-31 21:01:21.496478 | orchestrator | changed: [testbed-node-0] 2025-05-31 21:01:21.496488 | orchestrator | changed: [testbed-node-1] 2025-05-31 21:01:21.496494 | orchestrator | changed: [testbed-node-2] 2025-05-31 21:01:21.496500 | orchestrator | 2025-05-31 21:01:21.496506 | orchestrator | TASK [include_role : swift] **************************************************** 2025-05-31 21:01:21.496513 | orchestrator | Saturday 31 May 2025 21:00:36 +0000 (0:00:02.160) 0:05:39.364 ********** 2025-05-31 21:01:21.496519 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:01:21.496525 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:01:21.496530 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:01:21.496537 | orchestrator | 2025-05-31 21:01:21.496542 | orchestrator | TASK [include_role : tacker] *************************************************** 2025-05-31 21:01:21.496549 | orchestrator | Saturday 31 May 2025 21:00:36 +0000 (0:00:00.317) 0:05:39.682 ********** 2025-05-31 21:01:21.496554 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:01:21.496561 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:01:21.496567 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:01:21.496573 | orchestrator | 2025-05-31 21:01:21.496579 | orchestrator | TASK [include_role : trove] **************************************************** 2025-05-31 21:01:21.496585 | orchestrator | Saturday 31 May 2025 21:00:37 +0000 (0:00:00.617) 0:05:40.299 ********** 2025-05-31 21:01:21.496591 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:01:21.496597 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:01:21.496603 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:01:21.496609 | orchestrator | 2025-05-31 21:01:21.496615 | orchestrator | TASK [include_role : venus] **************************************************** 2025-05-31 21:01:21.496621 | orchestrator | Saturday 31 May 2025 21:00:37 +0000 (0:00:00.307) 0:05:40.606 ********** 2025-05-31 21:01:21.496627 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:01:21.496633 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:01:21.496639 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:01:21.496645 | orchestrator | 2025-05-31 21:01:21.496651 | orchestrator | TASK [include_role : watcher] ************************************************** 2025-05-31 21:01:21.496657 | orchestrator | Saturday 31 May 2025 21:00:37 +0000 (0:00:00.299) 0:05:40.905 ********** 2025-05-31 21:01:21.496663 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:01:21.496669 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:01:21.496675 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:01:21.496681 | orchestrator | 2025-05-31 21:01:21.496687 | orchestrator | TASK [include_role : zun] ****************************************************** 2025-05-31 21:01:21.496693 | orchestrator | Saturday 31 May 2025 21:00:38 +0000 (0:00:00.303) 0:05:41.209 ********** 2025-05-31 21:01:21.496699 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:01:21.496705 | orchestrator | 
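The changed items above show the full shape of the service definitions that the haproxy-config and proxysql-config roles consume: container name, image, volumes, a healthcheck, and a haproxy sub-dict with one entry per listener (internal and external variants). As a rough illustration of how such a dict maps to listener stanzas, here is a minimal Python sketch; the dict mirrors the skyline entries in the log, but the rendering function and its output format are assumptions for illustration, not kolla-ansible's actual Jinja2 template.

```python
# Illustrative sketch only: renders HAProxy-like listener stanzas from a
# kolla-style service dict as printed in the log above. The real role uses
# Jinja2 templates; this function and its output format are assumptions.
service = {
    "container_name": "skyline_apiserver",
    "enabled": True,
    "haproxy": {
        "skyline_apiserver": {
            "enabled": "yes", "mode": "http", "external": False,
            "port": "9998", "listen_port": "9998", "tls_backend": "no",
        },
        "skyline_apiserver_external": {
            "enabled": "yes", "mode": "http", "external": True,
            "external_fqdn": "api.testbed.osism.xyz",
            "port": "9998", "listen_port": "9998", "tls_backend": "no",
        },
    },
}

def render_listeners(service: dict) -> str:
    """Emit one 'listen' stanza per enabled haproxy entry of a service."""
    stanzas = []
    for name, cfg in service.get("haproxy", {}).items():
        if cfg.get("enabled") != "yes":
            continue  # disabled entries correspond to 'skipping' in the log
        lines = [
            f"listen {name}",
            f"    mode {cfg['mode']}",
            f"    bind *:{cfg.get('listen_port', cfg['port'])}",
        ]
        stanzas.append("\n".join(lines))
    return "\n\n".join(stanzas)

print(render_listeners(service))
```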
skipping: [testbed-node-1] 2025-05-31 21:01:21.496711 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:01:21.496717 | orchestrator | 2025-05-31 21:01:21.496723 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2025-05-31 21:01:21.496733 | orchestrator | Saturday 31 May 2025 21:00:39 +0000 (0:00:00.795) 0:05:42.005 ********** 2025-05-31 21:01:21.496739 | orchestrator | ok: [testbed-node-0] 2025-05-31 21:01:21.496745 | orchestrator | ok: [testbed-node-1] 2025-05-31 21:01:21.496751 | orchestrator | ok: [testbed-node-2] 2025-05-31 21:01:21.496757 | orchestrator | 2025-05-31 21:01:21.496763 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2025-05-31 21:01:21.496769 | orchestrator | Saturday 31 May 2025 21:00:39 +0000 (0:00:00.655) 0:05:42.661 ********** 2025-05-31 21:01:21.496775 | orchestrator | ok: [testbed-node-0] 2025-05-31 21:01:21.496781 | orchestrator | ok: [testbed-node-1] 2025-05-31 21:01:21.496787 | orchestrator | ok: [testbed-node-2] 2025-05-31 21:01:21.496793 | orchestrator | 2025-05-31 21:01:21.496799 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2025-05-31 21:01:21.496805 | orchestrator | Saturday 31 May 2025 21:00:40 +0000 (0:00:00.396) 0:05:43.058 ********** 2025-05-31 21:01:21.496811 | orchestrator | ok: [testbed-node-0] 2025-05-31 21:01:21.496817 | orchestrator | ok: [testbed-node-1] 2025-05-31 21:01:21.496828 | orchestrator | ok: [testbed-node-2] 2025-05-31 21:01:21.496834 | orchestrator | 2025-05-31 21:01:21.496840 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2025-05-31 21:01:21.496846 | orchestrator | Saturday 31 May 2025 21:00:41 +0000 (0:00:01.241) 0:05:44.299 ********** 2025-05-31 21:01:21.496852 | orchestrator | ok: [testbed-node-0] 2025-05-31 21:01:21.496892 | orchestrator | ok: [testbed-node-1] 2025-05-31 21:01:21.496903 | orchestrator | ok: [testbed-node-2] 2025-05-31 21:01:21.496910 | orchestrator | 2025-05-31 21:01:21.496916 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2025-05-31 21:01:21.496923 | orchestrator | Saturday 31 May 2025 21:00:42 +0000 (0:00:00.901) 0:05:45.201 ********** 2025-05-31 21:01:21.496929 | orchestrator | ok: [testbed-node-0] 2025-05-31 21:01:21.496935 | orchestrator | ok: [testbed-node-1] 2025-05-31 21:01:21.496941 | orchestrator | ok: [testbed-node-2] 2025-05-31 21:01:21.496946 | orchestrator | 2025-05-31 21:01:21.496952 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2025-05-31 21:01:21.496957 | orchestrator | Saturday 31 May 2025 21:00:43 +0000 (0:00:00.929) 0:05:46.130 ********** 2025-05-31 21:01:21.496963 | orchestrator | changed: [testbed-node-0] 2025-05-31 21:01:21.496968 | orchestrator | changed: [testbed-node-1] 2025-05-31 21:01:21.496973 | orchestrator | changed: [testbed-node-2] 2025-05-31 21:01:21.496979 | orchestrator | 2025-05-31 21:01:21.496984 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2025-05-31 21:01:21.496989 | orchestrator | Saturday 31 May 2025 21:00:48 +0000 (0:00:05.759) 0:05:51.890 ********** 2025-05-31 21:01:21.496995 | orchestrator | ok: [testbed-node-0] 2025-05-31 21:01:21.497000 | orchestrator | ok: [testbed-node-1] 2025-05-31 21:01:21.497005 | orchestrator | ok: [testbed-node-2] 2025-05-31 21:01:21.497011 | orchestrator | 2025-05-31 21:01:21.497016 | 
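The handler ordering above is the rolling-restart pattern for the load-balancer stack: hosts are grouped by HA status, the backup keepalived/haproxy/proxysql containers are stopped and restarted first (each start followed by a wait), the master-side handlers run or skip afterwards, and the play finishes by waiting until haproxy and proxysql answer on the VIP. A "wait for X to start" step boils down to retrying a TCP connect until the port accepts. A minimal sketch of that wait follows; the VIP address and port are chosen purely for illustration and are not taken from this deployment.

```python
import socket
import time

def wait_for_listener(host: str, port: int,
                      timeout: float = 60.0, interval: float = 2.0) -> bool:
    """Retry a TCP connect until host:port accepts or the timeout elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=interval):
                return True  # the service is listening
        except OSError:
            time.sleep(interval)  # not up yet; try again
    return False

# Hypothetical VIP and port, for illustration only.
if wait_for_listener("192.168.16.254", 15672, timeout=5.0):
    print("listener is up on the VIP")
```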
orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2025-05-31 21:01:21.497021 | orchestrator | Saturday 31 May 2025 21:00:51 +0000 (0:00:02.714) 0:05:54.604 ********** 2025-05-31 21:01:21.497027 | orchestrator | changed: [testbed-node-0] 2025-05-31 21:01:21.497032 | orchestrator | changed: [testbed-node-1] 2025-05-31 21:01:21.497038 | orchestrator | changed: [testbed-node-2] 2025-05-31 21:01:21.497043 | orchestrator | 2025-05-31 21:01:21.497048 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2025-05-31 21:01:21.497054 | orchestrator | Saturday 31 May 2025 21:01:00 +0000 (0:00:08.942) 0:06:03.547 ********** 2025-05-31 21:01:21.497059 | orchestrator | ok: [testbed-node-0] 2025-05-31 21:01:21.497064 | orchestrator | ok: [testbed-node-1] 2025-05-31 21:01:21.497070 | orchestrator | ok: [testbed-node-2] 2025-05-31 21:01:21.497075 | orchestrator | 2025-05-31 21:01:21.497080 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2025-05-31 21:01:21.497086 | orchestrator | Saturday 31 May 2025 21:01:05 +0000 (0:00:04.726) 0:06:08.274 ********** 2025-05-31 21:01:21.497091 | orchestrator | changed: [testbed-node-0] 2025-05-31 21:01:21.497096 | orchestrator | changed: [testbed-node-1] 2025-05-31 21:01:21.497101 | orchestrator | changed: [testbed-node-2] 2025-05-31 21:01:21.497107 | orchestrator | 2025-05-31 21:01:21.497112 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2025-05-31 21:01:21.497117 | orchestrator | Saturday 31 May 2025 21:01:14 +0000 (0:00:09.250) 0:06:17.525 ********** 2025-05-31 21:01:21.497123 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:01:21.497128 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:01:21.497133 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:01:21.497139 | orchestrator | 2025-05-31 21:01:21.497144 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2025-05-31 21:01:21.497149 | orchestrator | Saturday 31 May 2025 21:01:14 +0000 (0:00:00.341) 0:06:17.866 ********** 2025-05-31 21:01:21.497155 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:01:21.497160 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:01:21.497165 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:01:21.497171 | orchestrator | 2025-05-31 21:01:21.497180 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2025-05-31 21:01:21.497186 | orchestrator | Saturday 31 May 2025 21:01:15 +0000 (0:00:00.737) 0:06:18.604 ********** 2025-05-31 21:01:21.497191 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:01:21.497197 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:01:21.497202 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:01:21.497207 | orchestrator | 2025-05-31 21:01:21.497213 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2025-05-31 21:01:21.497218 | orchestrator | Saturday 31 May 2025 21:01:15 +0000 (0:00:00.349) 0:06:18.954 ********** 2025-05-31 21:01:21.497224 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:01:21.497229 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:01:21.497234 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:01:21.497239 | orchestrator | 2025-05-31 21:01:21.497244 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] 
*************** 2025-05-31 21:01:21.497250 | orchestrator | Saturday 31 May 2025 21:01:16 +0000 (0:00:00.361) 0:06:19.315 ********** 2025-05-31 21:01:21.497255 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:01:21.497261 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:01:21.497266 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:01:21.497271 | orchestrator | 2025-05-31 21:01:21.497276 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2025-05-31 21:01:21.497282 | orchestrator | Saturday 31 May 2025 21:01:16 +0000 (0:00:00.347) 0:06:19.663 ********** 2025-05-31 21:01:21.497287 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:01:21.497296 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:01:21.497302 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:01:21.497307 | orchestrator | 2025-05-31 21:01:21.497312 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2025-05-31 21:01:21.497318 | orchestrator | Saturday 31 May 2025 21:01:17 +0000 (0:00:00.672) 0:06:20.335 ********** 2025-05-31 21:01:21.497323 | orchestrator | ok: [testbed-node-0] 2025-05-31 21:01:21.497328 | orchestrator | ok: [testbed-node-1] 2025-05-31 21:01:21.497333 | orchestrator | ok: [testbed-node-2] 2025-05-31 21:01:21.497339 | orchestrator | 2025-05-31 21:01:21.497344 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2025-05-31 21:01:21.497349 | orchestrator | Saturday 31 May 2025 21:01:18 +0000 (0:00:00.867) 0:06:21.203 ********** 2025-05-31 21:01:21.497355 | orchestrator | ok: [testbed-node-0] 2025-05-31 21:01:21.497360 | orchestrator | ok: [testbed-node-1] 2025-05-31 21:01:21.497366 | orchestrator | ok: [testbed-node-2] 2025-05-31 21:01:21.497372 | orchestrator | 2025-05-31 21:01:21.497377 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-31 21:01:21.497383 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-05-31 21:01:21.497392 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-05-31 21:01:21.497397 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2025-05-31 21:01:21.497403 | orchestrator | 2025-05-31 21:01:21.497408 | orchestrator | 2025-05-31 21:01:21.497414 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-31 21:01:21.497419 | orchestrator | Saturday 31 May 2025 21:01:18 +0000 (0:00:00.780) 0:06:21.983 ********** 2025-05-31 21:01:21.497425 | orchestrator | =============================================================================== 2025-05-31 21:01:21.497430 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 9.25s 2025-05-31 21:01:21.497435 | orchestrator | loadbalancer : Start backup proxysql container -------------------------- 8.94s 2025-05-31 21:01:21.497441 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.47s 2025-05-31 21:01:21.497446 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 5.76s 2025-05-31 21:01:21.497455 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 5.63s 2025-05-31 21:01:21.497461 | orchestrator | haproxy-config : Copying over opensearch haproxy 
config ----------------- 5.53s 2025-05-31 21:01:21.497466 | orchestrator | loadbalancer : Copying checks for services which are enabled ------------ 5.40s 2025-05-31 21:01:21.497471 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.96s 2025-05-31 21:01:21.497476 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 4.79s 2025-05-31 21:01:21.497482 | orchestrator | loadbalancer : Wait for backup proxysql to start ------------------------ 4.73s 2025-05-31 21:01:21.497487 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 4.47s 2025-05-31 21:01:21.497492 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 4.41s 2025-05-31 21:01:21.497497 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 4.29s 2025-05-31 21:01:21.497503 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 4.26s 2025-05-31 21:01:21.497508 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.24s 2025-05-31 21:01:21.497513 | orchestrator | haproxy-config : Copying over ceph-rgw haproxy config ------------------- 4.23s 2025-05-31 21:01:21.497518 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.17s 2025-05-31 21:01:21.497524 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.09s 2025-05-31 21:01:21.497529 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 3.99s 2025-05-31 21:01:21.497534 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 3.95s 2025-05-31 21:01:21.497540 | orchestrator | 2025-05-31 21:01:21 | INFO  | Task f093e11b-da87-4aa1-914f-a87f454420d6 is in state STARTED 2025-05-31 21:01:21.497545 | orchestrator | 2025-05-31 21:01:21 | INFO  | Task a81d4a59-ff86-4ed5-9241-373b495cc025 is in state STARTED 2025-05-31 21:01:21.497551 | orchestrator | 2025-05-31 21:01:21 | INFO  | Task a0bdcbf6-9613-4f1a-8dfe-6210c4f9a59a is in state STARTED 2025-05-31 21:01:21.497556 | orchestrator | 2025-05-31 21:01:21 | INFO  | Wait 1 second(s) until the next check
[... identical status checks elided: from 2025-05-31 21:01:24 through 21:03:14 the three tasks were re-polled every ~3 seconds and remained in state STARTED ...]
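These interleaved INFO lines come from the deploy wrapper polling its background tasks until they leave the STARTED state, sleeping briefly between rounds. A self-contained sketch of that loop follows; get_task_state is a hypothetical stand-in (mocked here so the example runs), since the real client behind these log lines is not shown in this log.

```python
import time

# Hypothetical stand-in for the client call the wrapper makes; mocked so the
# sketch runs: each task reports STARTED twice, then SUCCESS.
_calls: dict[str, int] = {}
def get_task_state(task_id: str) -> str:
    n = _calls.get(task_id, 0)
    _calls[task_id] = n + 1
    return "STARTED" if n < 2 else "SUCCESS"

def wait_for_tasks(task_ids, interval: float = 1.0) -> None:
    """Print each task's state and re-check until none is still STARTED."""
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state != "STARTED":
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)

wait_for_tasks(["f093e11b-da87-4aa1-914f-a87f454420d6",
                "a81d4a59-ff86-4ed5-9241-373b495cc025"])
```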
orchestrator | 2025-05-31 21:03:14 | INFO  | Task a0bdcbf6-9613-4f1a-8dfe-6210c4f9a59a is in state STARTED 2025-05-31 21:03:14.444493 | orchestrator | 2025-05-31 21:03:14 | INFO  | Wait 1 second(s) until the next check 2025-05-31 21:03:17.482996 | orchestrator | 2025-05-31 21:03:17 | INFO  | Task f093e11b-da87-4aa1-914f-a87f454420d6 is in state STARTED 2025-05-31 21:03:17.484465 | orchestrator | 2025-05-31 21:03:17 | INFO  | Task a81d4a59-ff86-4ed5-9241-373b495cc025 is in state STARTED 2025-05-31 21:03:17.486653 | orchestrator | 2025-05-31 21:03:17 | INFO  | Task a0bdcbf6-9613-4f1a-8dfe-6210c4f9a59a is in state STARTED 2025-05-31 21:03:17.486709 | orchestrator | 2025-05-31 21:03:17 | INFO  | Wait 1 second(s) until the next check 2025-05-31 21:03:20.535357 | orchestrator | 2025-05-31 21:03:20 | INFO  | Task f093e11b-da87-4aa1-914f-a87f454420d6 is in state STARTED 2025-05-31 21:03:20.535810 | orchestrator | 2025-05-31 21:03:20 | INFO  | Task a81d4a59-ff86-4ed5-9241-373b495cc025 is in state STARTED 2025-05-31 21:03:20.536325 | orchestrator | 2025-05-31 21:03:20 | INFO  | Task a0bdcbf6-9613-4f1a-8dfe-6210c4f9a59a is in state STARTED 2025-05-31 21:03:20.537622 | orchestrator | 2025-05-31 21:03:20 | INFO  | Wait 1 second(s) until the next check 2025-05-31 21:03:23.581568 | orchestrator | 2025-05-31 21:03:23 | INFO  | Task f093e11b-da87-4aa1-914f-a87f454420d6 is in state STARTED 2025-05-31 21:03:23.583718 | orchestrator | 2025-05-31 21:03:23 | INFO  | Task a81d4a59-ff86-4ed5-9241-373b495cc025 is in state STARTED 2025-05-31 21:03:23.586792 | orchestrator | 2025-05-31 21:03:23 | INFO  | Task a0bdcbf6-9613-4f1a-8dfe-6210c4f9a59a is in state STARTED 2025-05-31 21:03:23.586850 | orchestrator | 2025-05-31 21:03:23 | INFO  | Wait 1 second(s) until the next check 2025-05-31 21:03:26.632140 | orchestrator | 2025-05-31 21:03:26 | INFO  | Task f093e11b-da87-4aa1-914f-a87f454420d6 is in state STARTED 2025-05-31 21:03:26.633663 | orchestrator | 2025-05-31 21:03:26 | INFO  | Task e346174a-3b83-4776-b276-e2a25f5ab226 is in state STARTED 2025-05-31 21:03:26.635632 | orchestrator | 2025-05-31 21:03:26 | INFO  | Task a81d4a59-ff86-4ed5-9241-373b495cc025 is in state STARTED 2025-05-31 21:03:26.643190 | orchestrator | 2025-05-31 21:03:26 | INFO  | Task a0bdcbf6-9613-4f1a-8dfe-6210c4f9a59a is in state SUCCESS 2025-05-31 21:03:26.645105 | orchestrator | 2025-05-31 21:03:26.645142 | orchestrator | 2025-05-31 21:03:26.645154 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2025-05-31 21:03:26.645167 | orchestrator | 2025-05-31 21:03:26.645178 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-05-31 21:03:26.645190 | orchestrator | Saturday 31 May 2025 20:52:22 +0000 (0:00:00.611) 0:00:00.611 ********** 2025-05-31 21:03:26.645202 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-31 21:03:26.645215 | orchestrator | 2025-05-31 21:03:26.645226 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2025-05-31 21:03:26.645237 | orchestrator | Saturday 31 May 2025 20:52:23 +0000 (0:00:01.145) 0:00:01.756 ********** 2025-05-31 21:03:26.645248 | orchestrator | ok: [testbed-node-3] 2025-05-31 21:03:26.645260 | orchestrator | ok: [testbed-node-0] 2025-05-31 21:03:26.645271 | orchestrator | ok: [testbed-node-4] 2025-05-31 21:03:26.645281 | orchestrator | ok: 
2025-05-31 21:03:26.645105 | orchestrator |
2025-05-31 21:03:26.645142 | orchestrator |
2025-05-31 21:03:26.645154 | orchestrator | PLAY [Prepare deployment of Ceph services] *************************************
2025-05-31 21:03:26.645167 | orchestrator |
2025-05-31 21:03:26.645178 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2025-05-31 21:03:26.645190 | orchestrator | Saturday 31 May 2025 20:52:22 +0000 (0:00:00.611) 0:00:00.611 **********
2025-05-31 21:03:26.645202 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-31 21:03:26.645215 | orchestrator |
2025-05-31 21:03:26.645226 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2025-05-31 21:03:26.645237 | orchestrator | Saturday 31 May 2025 20:52:23 +0000 (0:00:01.145) 0:00:01.756 **********
2025-05-31 21:03:26.645248 | orchestrator | ok: [testbed-node-3]
2025-05-31 21:03:26.645260 | orchestrator | ok: [testbed-node-0]
2025-05-31 21:03:26.645271 | orchestrator | ok: [testbed-node-4]
2025-05-31 21:03:26.645281 | orchestrator | ok: [testbed-node-2]
2025-05-31 21:03:26.645292 | orchestrator | ok: [testbed-node-1]
2025-05-31 21:03:26.645302 | orchestrator | ok: [testbed-node-5]
2025-05-31 21:03:26.645313 | orchestrator |
2025-05-31 21:03:26.645324 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2025-05-31 21:03:26.645413 | orchestrator | Saturday 31 May 2025 20:52:24 +0000 (0:00:01.635) 0:00:03.392 **********
2025-05-31 21:03:26.645428 | orchestrator | ok: [testbed-node-0]
2025-05-31 21:03:26.645439 | orchestrator | ok: [testbed-node-1]
2025-05-31 21:03:26.645450 | orchestrator | ok: [testbed-node-2]
2025-05-31 21:03:26.645461 | orchestrator | ok: [testbed-node-3]
2025-05-31 21:03:26.645472 | orchestrator | ok: [testbed-node-4]
2025-05-31 21:03:26.645483 | orchestrator | ok: [testbed-node-5]
2025-05-31 21:03:26.645494 | orchestrator |
2025-05-31 21:03:26.645505 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2025-05-31 21:03:26.645516 | orchestrator | Saturday 31 May 2025 20:52:25 +0000 (0:00:00.932) 0:00:04.199 **********
2025-05-31 21:03:26.645527 | orchestrator | ok: [testbed-node-0]
2025-05-31 21:03:26.645538 | orchestrator | ok: [testbed-node-1]
2025-05-31 21:03:26.645549 | orchestrator | ok: [testbed-node-2]
2025-05-31 21:03:26.645560 | orchestrator | ok: [testbed-node-3]
2025-05-31 21:03:26.645570 | orchestrator | ok: [testbed-node-4]
2025-05-31 21:03:26.645581 | orchestrator | ok: [testbed-node-5]
2025-05-31 21:03:26.645592 | orchestrator |
2025-05-31 21:03:26.645603 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2025-05-31 21:03:26.645614 | orchestrator | Saturday 31 May 2025 20:52:26 +0000 (0:00:00.932) 0:00:05.132 **********
2025-05-31 21:03:26.645625 | orchestrator | ok: [testbed-node-0]
2025-05-31 21:03:26.646123 | orchestrator | ok: [testbed-node-1]
2025-05-31 21:03:26.646168 | orchestrator | ok: [testbed-node-2]
2025-05-31 21:03:26.646180 | orchestrator | ok: [testbed-node-3]
2025-05-31 21:03:26.646191 | orchestrator | ok: [testbed-node-4]
2025-05-31 21:03:26.646202 | orchestrator | ok: [testbed-node-5]
2025-05-31 21:03:26.646212 | orchestrator |
2025-05-31 21:03:26.646224 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2025-05-31 21:03:26.646235 | orchestrator | Saturday 31 May 2025 20:52:27 +0000 (0:00:00.706) 0:00:05.838 **********
2025-05-31 21:03:26.646245 | orchestrator | ok: [testbed-node-0]
2025-05-31 21:03:26.646256 | orchestrator | ok: [testbed-node-1]
2025-05-31 21:03:26.646267 | orchestrator | ok: [testbed-node-2]
2025-05-31 21:03:26.646277 | orchestrator | ok: [testbed-node-3]
2025-05-31 21:03:26.646287 | orchestrator | ok: [testbed-node-4]
2025-05-31 21:03:26.646298 | orchestrator | ok: [testbed-node-5]
2025-05-31 21:03:26.646309 | orchestrator |
2025-05-31 21:03:26.646319 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2025-05-31 21:03:26.646330 | orchestrator | Saturday 31 May 2025 20:52:28 +0000 (0:00:00.640) 0:00:06.478 **********
2025-05-31 21:03:26.646342 | orchestrator | ok: [testbed-node-0]
2025-05-31 21:03:26.646367 | orchestrator | ok: [testbed-node-1]
2025-05-31 21:03:26.646378 | orchestrator | ok: [testbed-node-2]
2025-05-31 21:03:26.646389 | orchestrator | ok: [testbed-node-3]
2025-05-31 21:03:26.646399 | orchestrator | ok: [testbed-node-4]
2025-05-31 21:03:26.646410 | orchestrator | ok: [testbed-node-5]
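The facts just set decide how every later Ceph command runs: when a podman binary is found the container runtime becomes podman, otherwise docker, and ceph_cmd then wraps the ceph CLI in that runtime. A rough Python rendering of that selection (a sketch only; the role itself expresses this with set_fact tasks, and the exact conditions may differ):

import shutil

def detect_container_binary() -> str:
    # Prefer podman when its binary is on PATH, otherwise fall back to docker,
    # mirroring the "Check if podman binary is present" /
    # "Set_fact container_binary" pair above.
    return "podman" if shutil.which("podman") else "docker"

print(detect_container_binary())

In this run the choice evidently resolved to docker: the mon-container probes further down invoke docker ps directly.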
2025-05-31 21:03:26.646421 | orchestrator |
2025-05-31 21:03:26.646431 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2025-05-31 21:03:26.646442 | orchestrator | Saturday 31 May 2025 20:52:28 +0000 (0:00:00.900) 0:00:07.378 **********
2025-05-31 21:03:26.646453 | orchestrator | skipping: [testbed-node-0]
2025-05-31 21:03:26.647121 | orchestrator | skipping: [testbed-node-1]
2025-05-31 21:03:26.647139 | orchestrator | skipping: [testbed-node-2]
2025-05-31 21:03:26.647150 | orchestrator | skipping: [testbed-node-3]
2025-05-31 21:03:26.647161 | orchestrator | skipping: [testbed-node-4]
2025-05-31 21:03:26.647172 | orchestrator | skipping: [testbed-node-5]
2025-05-31 21:03:26.647182 | orchestrator |
2025-05-31 21:03:26.647194 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2025-05-31 21:03:26.647206 | orchestrator | Saturday 31 May 2025 20:52:29 +0000 (0:00:00.826) 0:00:08.204 **********
2025-05-31 21:03:26.647216 | orchestrator | ok: [testbed-node-0]
2025-05-31 21:03:26.647227 | orchestrator | ok: [testbed-node-1]
2025-05-31 21:03:26.647238 | orchestrator | ok: [testbed-node-2]
2025-05-31 21:03:26.647249 | orchestrator | ok: [testbed-node-3]
2025-05-31 21:03:26.647260 | orchestrator | ok: [testbed-node-4]
2025-05-31 21:03:26.647271 | orchestrator | ok: [testbed-node-5]
2025-05-31 21:03:26.647282 | orchestrator |
2025-05-31 21:03:26.647293 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2025-05-31 21:03:26.647304 | orchestrator | Saturday 31 May 2025 20:52:30 +0000 (0:00:00.900) 0:00:09.104 **********
2025-05-31 21:03:26.647315 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-05-31 21:03:26.647327 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-05-31 21:03:26.647385 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-05-31 21:03:26.647487 | orchestrator |
2025-05-31 21:03:26.647724 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2025-05-31 21:03:26.647937 | orchestrator | Saturday 31 May 2025 20:52:31 +0000 (0:00:00.889) 0:00:09.994 **********
2025-05-31 21:03:26.647953 | orchestrator | ok: [testbed-node-0]
2025-05-31 21:03:26.647965 | orchestrator | ok: [testbed-node-2]
2025-05-31 21:03:26.647976 | orchestrator | ok: [testbed-node-1]
2025-05-31 21:03:26.647987 | orchestrator | ok: [testbed-node-3]
2025-05-31 21:03:26.647998 | orchestrator | ok: [testbed-node-4]
2025-05-31 21:03:26.648010 | orchestrator | ok: [testbed-node-5]
2025-05-31 21:03:26.648021 | orchestrator |
2025-05-31 21:03:26.648066 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2025-05-31 21:03:26.648093 | orchestrator | Saturday 31 May 2025 20:52:32 +0000 (0:00:01.352) 0:00:11.346 **********
2025-05-31 21:03:26.648104 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-05-31 21:03:26.648115 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-05-31 21:03:26.648906 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-05-31 21:03:26.648920 | orchestrator |
2025-05-31 21:03:26.648932 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2025-05-31 21:03:26.648943 | orchestrator | Saturday 31 May 2025 20:52:36 +0000 (0:00:03.228) 0:00:14.574 **********
2025-05-31 21:03:26.648954 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-05-31 21:03:26.648965 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-05-31 21:03:26.648976 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-05-31 21:03:26.648987 | orchestrator | skipping: [testbed-node-0]
2025-05-31 21:03:26.648998 | orchestrator |
2025-05-31 21:03:26.649009 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2025-05-31 21:03:26.649020 | orchestrator | Saturday 31 May 2025 20:52:37 +0000 (0:00:01.020) 0:00:15.594 **********
2025-05-31 21:03:26.649033 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-05-31 21:03:26.649302 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-05-31 21:03:26.649320 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-05-31 21:03:26.649377 | orchestrator | skipping: [testbed-node-0]
2025-05-31 21:03:26.649389 | orchestrator |
2025-05-31 21:03:26.649400 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2025-05-31 21:03:26.649411 | orchestrator | Saturday 31 May 2025 20:52:38 +0000 (0:00:01.422) 0:00:17.017 **********
2025-05-31 21:03:26.649424 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-05-31 21:03:26.649448 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-05-31 21:03:26.649460 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-05-31 21:03:26.649509 | orchestrator | skipping: [testbed-node-0]
2025-05-31 21:03:26.649521 | orchestrator |
2025-05-31 21:03:26.649532 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2025-05-31 21:03:26.649543 | orchestrator | Saturday 31 May 2025 20:52:39 +0000 (0:00:00.594) 0:00:17.611 **********
2025-05-31 21:03:26.649569 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-05-31 20:52:33.601144', 'end': '2025-05-31 20:52:33.887945', 'delta': '0:00:00.286801', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-05-31 21:03:26.649595 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-05-31 20:52:34.744795', 'end': '2025-05-31 20:52:35.002545', 'delta': '0:00:00.257750', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-05-31 21:03:26.649607 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-05-31 20:52:35.655398', 'end': '2025-05-31 20:52:35.908287', 'delta': '0:00:00.252889', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-05-31 21:03:26.649618 | orchestrator | skipping: [testbed-node-0]
2025-05-31 21:03:26.649629 | orchestrator |
2025-05-31 21:03:26.649640 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2025-05-31 21:03:26.649651 | orchestrator | Saturday 31 May 2025 20:52:39 +0000 (0:00:00.203) 0:00:17.815 **********
2025-05-31 21:03:26.649661 | orchestrator | ok: [testbed-node-0]
2025-05-31 21:03:26.649672 | orchestrator | ok: [testbed-node-1]
2025-05-31 21:03:26.649683 | orchestrator | ok: [testbed-node-2]
2025-05-31 21:03:26.649693 | orchestrator | ok: [testbed-node-3]
2025-05-31 21:03:26.649704 | orchestrator | ok: [testbed-node-4]
2025-05-31 21:03:26.649714 | orchestrator | ok: [testbed-node-5]
2025-05-31 21:03:26.649725 | orchestrator |
2025-05-31 21:03:26.649735 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2025-05-31 21:03:26.649747 | orchestrator | Saturday 31 May 2025 20:52:40 +0000 (0:00:01.473) 0:00:19.289 **********
2025-05-31 21:03:26.649758 | orchestrator | ok: [testbed-node-0]
2025-05-31 21:03:26.649768 | orchestrator |
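The skipped mon-container lookups above show the exact probe used for an existing monitor: docker ps -q --filter name=ceph-mon-<hostname>, whose empty stdout in this run means no ceph-mon container exists yet. The same check, sketched in Python:

import subprocess

def find_running_mon(hostname: str) -> str | None:
    # Runs: docker ps -q --filter name=ceph-mon-<hostname>
    # Empty output (as seen in the log items above) means no mon is up.
    result = subprocess.run(
        ["docker", "ps", "-q", "--filter", f"name=ceph-mon-{hostname}"],
        capture_output=True,
        text=True,
        check=False,
    )
    container_id = result.stdout.strip()
    return container_id or None

print(find_running_mon("testbed-node-0"))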
2025-05-31 21:03:26.649779 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2025-05-31 21:03:26.649790 | orchestrator | Saturday 31 May 2025 20:52:41 +0000 (0:00:00.858) 0:00:20.147 **********
2025-05-31 21:03:26.649800 | orchestrator | skipping: [testbed-node-0]
2025-05-31 21:03:26.649811 | orchestrator | skipping: [testbed-node-1]
2025-05-31 21:03:26.649822 | orchestrator | skipping: [testbed-node-2]
2025-05-31 21:03:26.649832 | orchestrator | skipping: [testbed-node-3]
2025-05-31 21:03:26.649843 | orchestrator | skipping: [testbed-node-4]
2025-05-31 21:03:26.649886 | orchestrator | skipping: [testbed-node-5]
2025-05-31 21:03:26.649908 | orchestrator |
2025-05-31 21:03:26.649928 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2025-05-31 21:03:26.649958 | orchestrator | Saturday 31 May 2025 20:52:43 +0000 (0:00:01.465) 0:00:21.612 **********
2025-05-31 21:03:26.649975 | orchestrator | skipping: [testbed-node-0]
2025-05-31 21:03:26.649988 | orchestrator | skipping: [testbed-node-1]
2025-05-31 21:03:26.650000 | orchestrator | skipping: [testbed-node-2]
2025-05-31 21:03:26.650012 | orchestrator | skipping: [testbed-node-3]
2025-05-31 21:03:26.650071 | orchestrator | skipping: [testbed-node-4]
2025-05-31 21:03:26.650083 | orchestrator | skipping: [testbed-node-5]
2025-05-31 21:03:26.650096 | orchestrator |
2025-05-31 21:03:26.650108 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-05-31 21:03:26.650121 | orchestrator | Saturday 31 May 2025 20:52:44 +0000 (0:00:01.501) 0:00:23.114 **********
2025-05-31 21:03:26.650133 | orchestrator | skipping: [testbed-node-0]
2025-05-31 21:03:26.650145 | orchestrator | skipping: [testbed-node-1]
2025-05-31 21:03:26.650157 | orchestrator | skipping: [testbed-node-2]
2025-05-31 21:03:26.650170 | orchestrator | skipping: [testbed-node-3]
2025-05-31 21:03:26.650182 | orchestrator | skipping: [testbed-node-4]
2025-05-31 21:03:26.650194 | orchestrator | skipping: [testbed-node-5]
2025-05-31 21:03:26.650207 | orchestrator |
2025-05-31 21:03:26.650220 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2025-05-31 21:03:26.650232 | orchestrator | Saturday 31 May 2025 20:52:45 +0000 (0:00:01.160) 0:00:24.274 **********
2025-05-31 21:03:26.650244 | orchestrator | skipping: [testbed-node-0]
2025-05-31 21:03:26.650256 | orchestrator |
2025-05-31 21:03:26.650269 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2025-05-31 21:03:26.650281 | orchestrator | Saturday 31 May 2025 20:52:45 +0000 (0:00:00.157) 0:00:24.431 **********
2025-05-31 21:03:26.650294 | orchestrator | skipping: [testbed-node-0]
2025-05-31 21:03:26.650305 | orchestrator |
2025-05-31 21:03:26.650316 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-05-31 21:03:26.650326 | orchestrator | Saturday 31 May 2025 20:52:46 +0000 (0:00:00.329) 0:00:24.761 **********
2025-05-31 21:03:26.650337 | orchestrator | skipping: [testbed-node-0]
2025-05-31 21:03:26.650348 | orchestrator | skipping: [testbed-node-1]
2025-05-31 21:03:26.650358 | orchestrator | skipping: [testbed-node-2]
2025-05-31 21:03:26.650369 | orchestrator | skipping: [testbed-node-3]
2025-05-31 21:03:26.650379 | orchestrator | skipping: [testbed-node-4]
2025-05-31 21:03:26.650390 | orchestrator | skipping: [testbed-node-5]
2025-05-31 21:03:26.650401 | orchestrator |
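The fsid block above is the cluster-identity decision: reuse the fsid reported by an already-running cluster when one is found, otherwise generate a fresh one. Both branches were skipped in this run, so the fsid presumably arrives preseeded from the testbed's configuration (not visible in this log). The underlying choice, sketched under that assumption:

import uuid

def choose_fsid(current_fsid: str | None) -> str:
    # Reuse a running cluster's fsid when one was detected, otherwise
    # mint a fresh UUID for a brand-new cluster.
    return current_fsid if current_fsid else str(uuid.uuid4())

print(choose_fsid(None))  # no running cluster found: a new UUID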
2025-05-31 21:03:26.650412 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2025-05-31 21:03:26.650441 | orchestrator | Saturday 31 May 2025 20:52:47 +0000 (0:00:00.969) 0:00:25.730 **********
2025-05-31 21:03:26.650452 | orchestrator | skipping: [testbed-node-0]
2025-05-31 21:03:26.650463 | orchestrator | skipping: [testbed-node-1]
2025-05-31 21:03:26.650474 | orchestrator | skipping: [testbed-node-2]
2025-05-31 21:03:26.650484 | orchestrator | skipping: [testbed-node-3]
2025-05-31 21:03:26.650495 | orchestrator | skipping: [testbed-node-4]
2025-05-31 21:03:26.650506 | orchestrator | skipping: [testbed-node-5]
2025-05-31 21:03:26.650517 | orchestrator |
2025-05-31 21:03:26.650527 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2025-05-31 21:03:26.650538 | orchestrator | Saturday 31 May 2025 20:52:48 +0000 (0:00:01.155) 0:00:26.885 **********
2025-05-31 21:03:26.650549 | orchestrator | skipping: [testbed-node-0]
2025-05-31 21:03:26.650559 | orchestrator | skipping: [testbed-node-1]
2025-05-31 21:03:26.650570 | orchestrator | skipping: [testbed-node-2]
2025-05-31 21:03:26.650580 | orchestrator | skipping: [testbed-node-3]
2025-05-31 21:03:26.650591 | orchestrator | skipping: [testbed-node-4]
2025-05-31 21:03:26.650601 | orchestrator | skipping: [testbed-node-5]
2025-05-31 21:03:26.650612 | orchestrator |
2025-05-31 21:03:26.650623 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2025-05-31 21:03:26.650633 | orchestrator | Saturday 31 May 2025 20:52:48 +0000 (0:00:00.501) 0:00:27.386 **********
2025-05-31 21:03:26.650644 | orchestrator | skipping: [testbed-node-0]
2025-05-31 21:03:26.650655 | orchestrator | skipping: [testbed-node-1]
2025-05-31 21:03:26.650672 | orchestrator | skipping: [testbed-node-2]
2025-05-31 21:03:26.650683 | orchestrator | skipping: [testbed-node-3]
2025-05-31 21:03:26.650693 | orchestrator | skipping: [testbed-node-4]
2025-05-31 21:03:26.650704 | orchestrator | skipping: [testbed-node-5]
2025-05-31 21:03:26.650715 | orchestrator |
2025-05-31 21:03:26.650725 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2025-05-31 21:03:26.650736 | orchestrator | Saturday 31 May 2025 20:52:49 +0000 (0:00:00.558) 0:00:27.945 **********
2025-05-31 21:03:26.650747 | orchestrator | skipping: [testbed-node-0]
2025-05-31 21:03:26.650757 | orchestrator | skipping: [testbed-node-1]
2025-05-31 21:03:26.650768 | orchestrator | skipping: [testbed-node-2]
2025-05-31 21:03:26.650779 | orchestrator | skipping: [testbed-node-3]
2025-05-31 21:03:26.650789 | orchestrator | skipping: [testbed-node-4]
2025-05-31 21:03:26.650800 | orchestrator | skipping: [testbed-node-5]
2025-05-31 21:03:26.650811 | orchestrator |
2025-05-31 21:03:26.650821 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2025-05-31 21:03:26.650832 | orchestrator | Saturday 31 May 2025 20:52:49 +0000 (0:00:00.443) 0:00:28.388 **********
2025-05-31 21:03:26.650843 | orchestrator | skipping: [testbed-node-0]
2025-05-31 21:03:26.650854 | orchestrator | skipping: [testbed-node-1]
2025-05-31 21:03:26.650968 | orchestrator | skipping: [testbed-node-2]
2025-05-31 21:03:26.650980 | orchestrator | skipping: [testbed-node-3]
2025-05-31 21:03:26.650990 | orchestrator | skipping: [testbed-node-4]
2025-05-31 21:03:26.651001 | orchestrator | skipping: [testbed-node-5]
2025-05-31 21:03:26.651012 | orchestrator |
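When these resolve/build task pairs do run, they canonicalize user-supplied device paths: symlinks such as /dev/disk/by-id/... are followed to their real /dev nodes so the OSD layout is expressed in stable, de-duplicated paths. A sketch of that resolution (the by-id path below is a hypothetical example):

import os

def resolve_devices(devices: list[str]) -> list[str]:
    # Follow /dev/disk/by-* symlinks to their canonical /dev nodes and
    # drop duplicates that were just aliases for the same disk.
    resolved = []
    for dev in devices:
        real = os.path.realpath(dev)
        if real not in resolved:
            resolved.append(real)
    return resolved

print(resolve_devices(["/dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_example"]))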
2025-05-31 21:03:26.651022 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2025-05-31 21:03:26.651033 | orchestrator | Saturday 31 May 2025 20:52:50 +0000 (0:00:00.703) 0:00:29.092 **********
2025-05-31 21:03:26.651044 | orchestrator | skipping: [testbed-node-0]
2025-05-31 21:03:26.651055 | orchestrator | skipping: [testbed-node-1]
2025-05-31 21:03:26.651065 | orchestrator | skipping: [testbed-node-2]
2025-05-31 21:03:26.651076 | orchestrator | skipping: [testbed-node-3]
2025-05-31 21:03:26.651086 | orchestrator | skipping: [testbed-node-4]
2025-05-31 21:03:26.651097 | orchestrator | skipping: [testbed-node-5]
2025-05-31 21:03:26.651107 | orchestrator |
2025-05-31 21:03:26.651118 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2025-05-31 21:03:26.651129 | orchestrator | Saturday 31 May 2025 20:52:51 +0000 (0:00:00.511) 0:00:29.603 **********
[item-by-item device skip output trimmed: every host skipped each discovered block device (loop0 through loop7, sda with partitions sda1/sda14/sda15/sda16, and sr0), and the storage hosts testbed-node-3/4/5 likewise skipped their Ceph OSD volumes dm-0/dm-1 and disks sdb/sdc, plus sdd on testbed-node-4]
2025-05-31 21:03:26.651453 | orchestrator | skipping: [testbed-node-0]
2025-05-31 21:03:26.651615 | orchestrator | skipping: [testbed-node-1]
2025-05-31 21:03:26.651638 | orchestrator | skipping: [testbed-node-2]
2025-05-31 21:03:26.652186 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-31-19-16-02-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None,
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-31 21:03:26.652211 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5a7b16a5-b25a-49dc-b8e1-bfe6cbb00610', 'scsi-SQEMU_QEMU_HARDDISK_5a7b16a5-b25a-49dc-b8e1-bfe6cbb00610'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-31 21:03:26.652232 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fbeeb0f2-d22c-4c2b-ae21-9c150f637ac0', 'scsi-SQEMU_QEMU_HARDDISK_fbeeb0f2-d22c-4c2b-ae21-9c150f637ac0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fbeeb0f2-d22c-4c2b-ae21-9c150f637ac0-part1', 'scsi-SQEMU_QEMU_HARDDISK_fbeeb0f2-d22c-4c2b-ae21-9c150f637ac0-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fbeeb0f2-d22c-4c2b-ae21-9c150f637ac0-part14', 'scsi-SQEMU_QEMU_HARDDISK_fbeeb0f2-d22c-4c2b-ae21-9c150f637ac0-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fbeeb0f2-d22c-4c2b-ae21-9c150f637ac0-part15', 'scsi-SQEMU_QEMU_HARDDISK_fbeeb0f2-d22c-4c2b-ae21-9c150f637ac0-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fbeeb0f2-d22c-4c2b-ae21-9c150f637ac0-part16', 'scsi-SQEMU_QEMU_HARDDISK_fbeeb0f2-d22c-4c2b-ae21-9c150f637ac0-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-31 21:03:26.652245 | orchestrator | skipping: [testbed-node-4] 2025-05-31 21:03:26.652256 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-31-19-16-09-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': 
'2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-31 21:03:26.652267 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--edfa5e9a--3f1a--54c1--83f4--345bb781a14b-osd--block--edfa5e9a--3f1a--54c1--83f4--345bb781a14b'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-2ewEst-EnqW-BGCE-wqOa-iD2n-MpCt-rUsJeG', 'scsi-0QEMU_QEMU_HARDDISK_6d52f885-97ca-45c7-bd6a-7862e27ed465', 'scsi-SQEMU_QEMU_HARDDISK_6d52f885-97ca-45c7-bd6a-7862e27ed465'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-31 21:03:26.652289 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--a23536e0--7351--5f09--a3c0--98b1bc7f8fff-osd--block--a23536e0--7351--5f09--a3c0--98b1bc7f8fff'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-vFimBm-nggg-lxYO-Mu1y-5ASg-Qvo5-TxbLvb', 'scsi-0QEMU_QEMU_HARDDISK_727d26bd-0ead-422c-920c-32fac6429b39', 'scsi-SQEMU_QEMU_HARDDISK_727d26bd-0ead-422c-920c-32fac6429b39'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-31 21:03:26.652301 | orchestrator | skipping: [testbed-node-3] 2025-05-31 21:03:26.652312 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4f6392d-f8e1-4809-8c10-779f08f2c642', 'scsi-SQEMU_QEMU_HARDDISK_d4f6392d-f8e1-4809-8c10-779f08f2c642'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-31 21:03:26.652323 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-31-19-16-13-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-31 21:03:26.652340 | orchestrator | skipping: [testbed-node-5] 2025-05-31 21:03:26.652352 | orchestrator | 2025-05-31 21:03:26.652363 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-05-31 21:03:26.652374 | orchestrator | Saturday 31 May 2025 20:52:52 +0000 (0:00:01.572) 0:00:31.176 ********** 2025-05-31 21:03:26.652386 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-31 21:03:26.652398 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-31 21:03:26.652409 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-31 21:03:26.652431 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result 
was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-31 21:03:26.652443 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-31 21:03:26.652454 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-31 21:03:26.652472 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-31 21:03:26.652484 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-31 21:03:26.652495 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': 
None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-31 21:03:26.652519 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_82edc559-ec05-4620-85e0-00512a69f475', 'scsi-SQEMU_QEMU_HARDDISK_82edc559-ec05-4620-85e0-00512a69f475'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_82edc559-ec05-4620-85e0-00512a69f475-part1', 'scsi-SQEMU_QEMU_HARDDISK_82edc559-ec05-4620-85e0-00512a69f475-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_82edc559-ec05-4620-85e0-00512a69f475-part14', 'scsi-SQEMU_QEMU_HARDDISK_82edc559-ec05-4620-85e0-00512a69f475-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_82edc559-ec05-4620-85e0-00512a69f475-part15', 'scsi-SQEMU_QEMU_HARDDISK_82edc559-ec05-4620-85e0-00512a69f475-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_82edc559-ec05-4620-85e0-00512a69f475-part16', 'scsi-SQEMU_QEMU_HARDDISK_82edc559-ec05-4620-85e0-00512a69f475-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-31 21:03:26.652538 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-31 21:03:26.652550 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 
'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-31-19-16-05-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-31 21:03:26.652568 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-31 21:03:26.652584 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-31 21:03:26.652596 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-31 21:03:26.652608 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-31 21:03:26.652626 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 
0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-31 21:03:26.652638 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-31 21:03:26.652649 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:03:26.652667 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6c9cfe99-95af-4c67-bff1-48d0dfa5ccfe', 'scsi-SQEMU_QEMU_HARDDISK_6c9cfe99-95af-4c67-bff1-48d0dfa5ccfe'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6c9cfe99-95af-4c67-bff1-48d0dfa5ccfe-part1', 'scsi-SQEMU_QEMU_HARDDISK_6c9cfe99-95af-4c67-bff1-48d0dfa5ccfe-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6c9cfe99-95af-4c67-bff1-48d0dfa5ccfe-part14', 'scsi-SQEMU_QEMU_HARDDISK_6c9cfe99-95af-4c67-bff1-48d0dfa5ccfe-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6c9cfe99-95af-4c67-bff1-48d0dfa5ccfe-part15', 'scsi-SQEMU_QEMU_HARDDISK_6c9cfe99-95af-4c67-bff1-48d0dfa5ccfe-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_6c9cfe99-95af-4c67-bff1-48d0dfa5ccfe-part16', 'scsi-SQEMU_QEMU_HARDDISK_6c9cfe99-95af-4c67-bff1-48d0dfa5ccfe-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-31 21:03:26.652686 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 
'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-31-19-16-00-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-31 21:03:26.652972 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-31 21:03:26.652995 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-31 21:03:26.653014 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-31 21:03:26.653024 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-31 21:03:26.653041 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 
0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-31 21:03:26.653051 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-31 21:03:26.653123 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-31 21:03:26.653137 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-31 21:03:26.653166 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_00ca6f0b-c95a-490d-9c88-84cc0dbef80d', 'scsi-SQEMU_QEMU_HARDDISK_00ca6f0b-c95a-490d-9c88-84cc0dbef80d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_00ca6f0b-c95a-490d-9c88-84cc0dbef80d-part1', 'scsi-SQEMU_QEMU_HARDDISK_00ca6f0b-c95a-490d-9c88-84cc0dbef80d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_00ca6f0b-c95a-490d-9c88-84cc0dbef80d-part14', 'scsi-SQEMU_QEMU_HARDDISK_00ca6f0b-c95a-490d-9c88-84cc0dbef80d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_00ca6f0b-c95a-490d-9c88-84cc0dbef80d-part15', 'scsi-SQEMU_QEMU_HARDDISK_00ca6f0b-c95a-490d-9c88-84cc0dbef80d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_00ca6f0b-c95a-490d-9c88-84cc0dbef80d-part16', 'scsi-SQEMU_QEMU_HARDDISK_00ca6f0b-c95a-490d-9c88-84cc0dbef80d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-31 21:03:26.653177 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-31-19-16-07-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-31 21:03:26.653187 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:03:26.653256 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--813d0644--8ada--5e52--b3d8--7484365c4567-osd--block--813d0644--8ada--5e52--b3d8--7484365c4567', 'dm-uuid-LVM-5bKczW7C1VtLl6vPfKyu54CNx9UycXMebUF1ZziT0uwTCM1IDLBKWOEOgMMUJHXU'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 
'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-31 21:03:26.653271 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b37e5891--99ec--5ce8--8fa7--674876c21edd-osd--block--b37e5891--99ec--5ce8--8fa7--674876c21edd', 'dm-uuid-LVM-xonbQWC1M8CKH5CqnYuw0xh7m1sgK3W0tCmLhupcrZbovffqTpDunDtXxV6VUE2K'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-31 21:03:26.653289 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-31 21:03:26.653304 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-31 21:03:26.653314 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-31 21:03:26.653324 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-05-31 21:03:26.653420 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-31 21:03:26.653437 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-31 21:03:26.653455 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-31 21:03:26.653466 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-31 21:03:26.653476 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:03:26.653575 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8bbc83dc-8efd-4101-bb1b-0d6e4523fdf4', 'scsi-SQEMU_QEMU_HARDDISK_8bbc83dc-8efd-4101-bb1b-0d6e4523fdf4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8bbc83dc-8efd-4101-bb1b-0d6e4523fdf4-part1', 'scsi-SQEMU_QEMU_HARDDISK_8bbc83dc-8efd-4101-bb1b-0d6e4523fdf4-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8bbc83dc-8efd-4101-bb1b-0d6e4523fdf4-part14', 'scsi-SQEMU_QEMU_HARDDISK_8bbc83dc-8efd-4101-bb1b-0d6e4523fdf4-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8bbc83dc-8efd-4101-bb1b-0d6e4523fdf4-part15', 'scsi-SQEMU_QEMU_HARDDISK_8bbc83dc-8efd-4101-bb1b-0d6e4523fdf4-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8bbc83dc-8efd-4101-bb1b-0d6e4523fdf4-part16', 'scsi-SQEMU_QEMU_HARDDISK_8bbc83dc-8efd-4101-bb1b-0d6e4523fdf4-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-31 21:03:26.653634 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--813d0644--8ada--5e52--b3d8--7484365c4567-osd--block--813d0644--8ada--5e52--b3d8--7484365c4567'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-GFSk2q-BuGm-dqSF-bLh2-DAxl-hoCw-hQLzSv', 'scsi-0QEMU_QEMU_HARDDISK_191d8892-ecee-415a-8f71-2d93b7558573', 'scsi-SQEMU_QEMU_HARDDISK_191d8892-ecee-415a-8f71-2d93b7558573'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-31 21:03:26.653652 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--b37e5891--99ec--5ce8--8fa7--674876c21edd-osd--block--b37e5891--99ec--5ce8--8fa7--674876c21edd'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-729xGU-lsbd-9XIq-kmAp-1FWn-1Oxj-b66mfM', 'scsi-0QEMU_QEMU_HARDDISK_fb66f732-34d2-45e3-b1b8-d9ba2a3ac758', 'scsi-SQEMU_QEMU_HARDDISK_fb66f732-34d2-45e3-b1b8-d9ba2a3ac758'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-31 21:03:26.653672 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5a7b16a5-b25a-49dc-b8e1-bfe6cbb00610', 'scsi-SQEMU_QEMU_HARDDISK_5a7b16a5-b25a-49dc-b8e1-bfe6cbb00610'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-31 21:03:26.653687 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-31-19-16-09-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-31 21:03:26.653799 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7717ad38--094f--5aa6--8c39--f28029f817d5-osd--block--7717ad38--094f--5aa6--8c39--f28029f817d5', 'dm-uuid-LVM-rqk0PFqpYlxzpDIf4x9vdQLuz8Lss3aL12rFSpi6N5KHRdKqji4pQySOOFHq07NU'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-31 21:03:26.653836 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6fa9e552--f12f--547e--b45f--d034b93383af-osd--block--6fa9e552--f12f--547e--b45f--d034b93383af', 'dm-uuid-LVM-VNe4guLBo3JKak4Y0eQw8GQ34xS5HfNgeX5kCgBNXSepANCzMeTln6kCKPHEyQOa'], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-05-31 21:03:26.654276 | orchestrator | skipping: [testbed-node-3]
2025-05-31 21:03:26.654437 | orchestrator | skipping: [testbed-node-4]
2025-05-31 21:03:26.654782 | orchestrator | skipping: [testbed-node-5]
2025-05-31 21:03:26.654791 | orchestrator |
2025-05-31 21:03:26.654801 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2025-05-31 21:03:26.654812 | orchestrator | Saturday 31 May 2025 20:52:54 +0000 (0:00:01.884) 0:00:33.060 **********
2025-05-31 21:03:26.654822 | orchestrator | ok: [testbed-node-0]
2025-05-31 21:03:26.654832 | orchestrator | ok: [testbed-node-1]
2025-05-31 21:03:26.654841 | orchestrator | ok: [testbed-node-2]
2025-05-31 21:03:26.654949 | orchestrator | ok: [testbed-node-3]
2025-05-31 21:03:26.654965 | orchestrator | ok: [testbed-node-4]
2025-05-31 21:03:26.654993 | orchestrator | ok: [testbed-node-5]
2025-05-31 21:03:26.655003 | orchestrator |
2025-05-31 21:03:26.655012 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2025-05-31 21:03:26.655022 | orchestrator | Saturday 31 May 2025 20:52:55 +0000 (0:00:01.319) 0:00:34.379 **********
2025-05-31 21:03:26.655032 | orchestrator | ok: [testbed-node-0]
2025-05-31 21:03:26.655041 | orchestrator | ok: [testbed-node-1]
2025-05-31 21:03:26.655051 | orchestrator | ok: [testbed-node-2]
2025-05-31 21:03:26.655060 | orchestrator | ok: [testbed-node-3]
2025-05-31 21:03:26.655069 | orchestrator | ok: [testbed-node-4]
2025-05-31 21:03:26.655079 | orchestrator | ok: [testbed-node-5]
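The long run of per-device skip records above (loop0 through loop7, sda through sdd, sr0, dm-0 and dm-1 on testbed-node-3, testbed-node-4 and testbed-node-5) all carried the same false_condition, 'osd_auto_discovery | default(False) | bool': the testbed pins its OSD disks explicitly, so automatic discovery stays disabled and ceph-ansible merely echoes each entry of ansible_facts['devices'] as a skipped loop item. A rough sketch of what such a guarded loop looks like (an assumed reconstruction, not the verbatim ceph-ansible task; only the osd_auto_discovery variable and the loop over the device facts are taken from the log):

- name: Collect candidate OSD disks when auto-discovery is enabled  # hypothetical task name
  ansible.builtin.set_fact:
    devices: "{{ devices | default([]) + ['/dev/' ~ item.key] }}"
  loop: "{{ ansible_facts['devices'] | dict2items }}"   # yields the key/value pairs echoed in the log
  when:
    - osd_auto_discovery | default(False) | bool        # False in this run, hence one 'skipping' per device
    - item.value.partitions | length == 0               # assumed filter: leave partitioned system disks alone
    - item.value.holders | length == 0                  # assumed filter: leave disks already claimed by LVM/Ceph alone

With the default of False every item is reported as skipping, which is exactly the per-item output seen on the three storage nodes.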
2025-05-31 21:03:26.655088 | orchestrator |
2025-05-31 21:03:26.655098 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-05-31 21:03:26.655107 | orchestrator | Saturday 31 May 2025 20:52:56 +0000 (0:00:00.680) 0:00:35.059 **********
2025-05-31 21:03:26.655117 | orchestrator | skipping: [testbed-node-0]
2025-05-31 21:03:26.655126 | orchestrator | skipping: [testbed-node-1]
2025-05-31 21:03:26.655136 | orchestrator | skipping: [testbed-node-2]
2025-05-31 21:03:26.655145 | orchestrator | skipping: [testbed-node-3]
2025-05-31 21:03:26.655155 | orchestrator | skipping: [testbed-node-4]
2025-05-31 21:03:26.655164 | orchestrator | skipping: [testbed-node-5]
2025-05-31 21:03:26.655174 | orchestrator |
2025-05-31 21:03:26.655183 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-05-31 21:03:26.655193 | orchestrator | Saturday 31 May 2025 20:52:57 +0000 (0:00:00.857) 0:00:35.917 **********
2025-05-31 21:03:26.655202 | orchestrator | skipping: [testbed-node-0]
2025-05-31 21:03:26.655212 | orchestrator | skipping: [testbed-node-1]
2025-05-31 21:03:26.655221 | orchestrator | skipping: [testbed-node-2]
2025-05-31 21:03:26.655230 | orchestrator | skipping: [testbed-node-3]
2025-05-31 21:03:26.655240 | orchestrator | skipping: [testbed-node-4]
2025-05-31 21:03:26.655249 | orchestrator | skipping: [testbed-node-5]
2025-05-31 21:03:26.655259 | orchestrator |
2025-05-31 21:03:26.655269 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-05-31 21:03:26.655278 | orchestrator | Saturday 31 May 2025 20:52:58 +0000 (0:00:01.137) 0:00:37.054 **********
2025-05-31 21:03:26.655288 | orchestrator | skipping: [testbed-node-0]
2025-05-31 21:03:26.655297 | orchestrator | skipping: [testbed-node-1]
2025-05-31 21:03:26.655307 | orchestrator | skipping: [testbed-node-2]
2025-05-31 21:03:26.655316 | orchestrator | skipping: [testbed-node-3]
2025-05-31 21:03:26.655326 | orchestrator | skipping: [testbed-node-4]
2025-05-31 21:03:26.655335 | orchestrator | skipping: [testbed-node-5]
2025-05-31 21:03:26.655345 | orchestrator |
2025-05-31 21:03:26.655354 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-05-31 21:03:26.655364 | orchestrator | Saturday 31 May 2025 20:52:59 +0000 (0:00:00.993) 0:00:38.048 **********
2025-05-31 21:03:26.655373 | orchestrator | skipping: [testbed-node-0]
2025-05-31 21:03:26.655383 | orchestrator | skipping: [testbed-node-1]
2025-05-31 21:03:26.655400 | orchestrator | skipping: [testbed-node-2]
2025-05-31 21:03:26.655410 | orchestrator | skipping: [testbed-node-3]
2025-05-31 21:03:26.655419 | orchestrator | skipping: [testbed-node-4]
2025-05-31 21:03:26.655429 | orchestrator | skipping: [testbed-node-5]
2025-05-31 21:03:26.655438 | orchestrator |
2025-05-31 21:03:26.655448 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2025-05-31 21:03:26.655457 | orchestrator | Saturday 31 May 2025 20:53:00 +0000 (0:00:01.121) 0:00:39.169 **********
2025-05-31 21:03:26.655466 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-05-31 21:03:26.655476 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2025-05-31 21:03:26.655486 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2025-05-31 21:03:26.655500 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2025-05-31 21:03:26.655510 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2025-05-31 21:03:26.655519 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2025-05-31 21:03:26.655529 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2025-05-31 21:03:26.655538 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2025-05-31 21:03:26.655548 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2025-05-31 21:03:26.655557 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2025-05-31 21:03:26.655566 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2025-05-31 21:03:26.655578 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2025-05-31 21:03:26.655589 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2025-05-31 21:03:26.655600 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2025-05-31 21:03:26.655611 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2025-05-31 21:03:26.655622 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2025-05-31 21:03:26.655633 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2025-05-31 21:03:26.655644 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2025-05-31 21:03:26.655656 | orchestrator |
2025-05-31 21:03:26.655667 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2025-05-31 21:03:26.655678 | orchestrator | Saturday 31 May 2025 20:53:04 +0000 (0:00:03.803) 0:00:42.973 **********
2025-05-31 21:03:26.655690 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-05-31 21:03:26.655702 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-05-31 21:03:26.655713 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-05-31 21:03:26.655723 | orchestrator | skipping: [testbed-node-0]
2025-05-31 21:03:26.655734 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-05-31 21:03:26.655745 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-05-31 21:03:26.655756 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-05-31 21:03:26.655766 | orchestrator | skipping: [testbed-node-1]
2025-05-31 21:03:26.655777 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-05-31 21:03:26.655789 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-05-31 21:03:26.655799 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-05-31 21:03:26.655810 | orchestrator | skipping: [testbed-node-2]
2025-05-31 21:03:26.655851 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-05-31 21:03:26.655883 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-05-31 21:03:26.655894 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-05-31 21:03:26.655905 | orchestrator | skipping: [testbed-node-3]
2025-05-31 21:03:26.655916 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-05-31 21:03:26.655927 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-05-31 21:03:26.655936 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-05-31 21:03:26.655946 | orchestrator | skipping: [testbed-node-4]
2025-05-31 21:03:26.655955 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-05-31 21:03:26.655971 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-05-31 21:03:26.655981 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-05-31 21:03:26.655990 | orchestrator | skipping: [testbed-node-5]
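The _monitor_addresses fact above is built once per IP family, and only the ipv4 variant ran because this deployment is IPv4-only; every host resolves the same three monitor nodes (testbed-node-0 to testbed-node-2), which is why each host reports three loop items. A minimal sketch of the ipv4 branch, assuming a 'mons' inventory group and the default_ipv4 fact path (both assumptions; the real ceph-ansible expression is more involved):

- name: Set_fact _monitor_addresses - ipv4
  ansible.builtin.set_fact:
    _monitor_addresses: "{{ _monitor_addresses | default([]) + [{'name': item, 'addr': hostvars[item]['ansible_facts']['default_ipv4']['address']}] }}"
  loop: "{{ groups['mons'] }}"       # assumed group name: testbed-node-0..2 act as monitors here
  when: ip_version == 'ipv4'         # the ipv6 twin of this task is what was skipped above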
2025-05-31 21:03:26.656000 | orchestrator |
2025-05-31 21:03:26.656010 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2025-05-31 21:03:26.656019 | orchestrator | Saturday 31 May 2025 20:53:05 +0000 (0:00:00.807) 0:00:43.780 **********
2025-05-31 21:03:26.656029 | orchestrator | skipping: [testbed-node-0]
2025-05-31 21:03:26.656038 | orchestrator | skipping: [testbed-node-1]
2025-05-31 21:03:26.656112 | orchestrator | skipping: [testbed-node-2]
2025-05-31 21:03:26.656123 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-31 21:03:26.656133 | orchestrator |
2025-05-31 21:03:26.656143 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-05-31 21:03:26.656153 | orchestrator | Saturday 31 May 2025 20:53:06 +0000 (0:00:00.869) 0:00:44.649 **********
2025-05-31 21:03:26.656163 | orchestrator | skipping: [testbed-node-3]
2025-05-31 21:03:26.656173 | orchestrator | skipping: [testbed-node-4]
2025-05-31 21:03:26.656182 | orchestrator | skipping: [testbed-node-5]
2025-05-31 21:03:26.656192 | orchestrator |
2025-05-31 21:03:26.656202 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-05-31 21:03:26.656211 | orchestrator | Saturday 31 May 2025 20:53:06 +0000 (0:00:00.422) 0:00:45.072 **********
2025-05-31 21:03:26.656221 | orchestrator | skipping: [testbed-node-3]
2025-05-31 21:03:26.656231 | orchestrator | skipping: [testbed-node-4]
2025-05-31 21:03:26.656240 | orchestrator | skipping: [testbed-node-5]
2025-05-31 21:03:26.656250 | orchestrator |
2025-05-31 21:03:26.656260 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-05-31 21:03:26.656269 | orchestrator | Saturday 31 May 2025 20:53:07 +0000 (0:00:00.960) 0:00:46.033 **********
2025-05-31 21:03:26.656279 | orchestrator | skipping: [testbed-node-3]
2025-05-31 21:03:26.656289 | orchestrator | skipping: [testbed-node-4]
2025-05-31 21:03:26.656298 | orchestrator | skipping: [testbed-node-5]
2025-05-31 21:03:26.656308 | orchestrator |
2025-05-31 21:03:26.656318 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2025-05-31 21:03:26.656327 | orchestrator | Saturday 31 May 2025 20:53:08 +0000 (0:00:00.437) 0:00:46.470 **********
2025-05-31 21:03:26.656337 | orchestrator | ok: [testbed-node-3]
2025-05-31 21:03:26.656347 | orchestrator | ok: [testbed-node-4]
2025-05-31 21:03:26.656356 | orchestrator | ok: [testbed-node-5]
2025-05-31 21:03:26.656366 | orchestrator |
2025-05-31 21:03:26.656375 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2025-05-31 21:03:26.656385 | orchestrator | Saturday 31 May 2025 20:53:08 +0000 (0:00:00.564) 0:00:47.034 **********
2025-05-31 21:03:26.656400 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-31 21:03:26.656410 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-31 21:03:26.656420 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-31 21:03:26.656429 | orchestrator | skipping: [testbed-node-3]
2025-05-31 21:03:26.656439 | orchestrator |
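set_radosgw_address.yml resolves the RGW bind address by falling through a list of sources: a radosgw_address_block CIDR, then an explicit radosgw_address, then a radosgw_interface. In this run only the plain radosgw_address branch matched on testbed-node-3/4/5; the block and interface branches were skipped. A hedged sketch of the first two branches (the variable names come from the task names in the log; the ipaddr-based selection is an assumption and needs the ansible.utils collection plus netaddr):

- name: Set_fact _radosgw_address to radosgw_address_block ipv4
  ansible.builtin.set_fact:
    _radosgw_address: "{{ ansible_facts['all_ipv4_addresses'] | ansible.utils.ipaddr(radosgw_address_block) | first }}"
  when:
    - radosgw_address_block is defined
    - ip_version == 'ipv4'

- name: Set_fact _radosgw_address to radosgw_address
  ansible.builtin.set_fact:
    _radosgw_address: "{{ radosgw_address }}"   # the branch that produced 'ok' on the three rgw nodes
  when: radosgw_address is defined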
2025-05-31 21:03:26.656449 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-05-31 21:03:26.656458 | orchestrator | Saturday 31 May 2025 20:53:08 +0000 (0:00:00.318) 0:00:47.353 **********
2025-05-31 21:03:26.656468 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-31 21:03:26.656478 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-31 21:03:26.656487 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-31 21:03:26.656497 | orchestrator | skipping: [testbed-node-3]
2025-05-31 21:03:26.656506 | orchestrator |
2025-05-31 21:03:26.656516 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-05-31 21:03:26.656533 | orchestrator | Saturday 31 May 2025 20:53:09 +0000 (0:00:00.463) 0:00:47.816 **********
2025-05-31 21:03:26.656543 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-31 21:03:26.656552 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-31 21:03:26.656562 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-31 21:03:26.656572 | orchestrator | skipping: [testbed-node-3]
2025-05-31 21:03:26.656581 | orchestrator |
2025-05-31 21:03:26.656591 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2025-05-31 21:03:26.656600 | orchestrator | Saturday 31 May 2025 20:53:09 +0000 (0:00:00.616) 0:00:48.433 **********
2025-05-31 21:03:26.656610 | orchestrator | ok: [testbed-node-3]
2025-05-31 21:03:26.656620 | orchestrator | ok: [testbed-node-4]
2025-05-31 21:03:26.656629 | orchestrator | ok: [testbed-node-5]
2025-05-31 21:03:26.656639 | orchestrator |
2025-05-31 21:03:26.656648 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2025-05-31 21:03:26.656658 | orchestrator | Saturday 31 May 2025 20:53:10 +0000 (0:00:00.877) 0:00:49.311 **********
2025-05-31 21:03:26.656668 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-05-31 21:03:26.656677 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-05-31 21:03:26.656687 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-05-31 21:03:26.656696 | orchestrator |
2025-05-31 21:03:26.656706 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2025-05-31 21:03:26.656716 | orchestrator | Saturday 31 May 2025 20:53:11 +0000 (0:00:00.985) 0:00:50.297 **********
2025-05-31 21:03:26.656759 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-05-31 21:03:26.656771 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-05-31 21:03:26.656780 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-05-31 21:03:26.656790 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2025-05-31 21:03:26.656800 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-05-31 21:03:26.656809 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-05-31 21:03:26.656819 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-05-31 21:03:26.656828 | orchestrator |
2025-05-31 21:03:26.656838 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2025-05-31 21:03:26.656848 | orchestrator | Saturday 31 May 2025 20:53:12 +0000 (0:00:00.941) 0:00:51.238 **********
2025-05-31 21:03:26.656916 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-05-31 21:03:26.656927 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-05-31 21:03:26.656936 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-05-31 21:03:26.656946 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2025-05-31 21:03:26.656955 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-05-31 21:03:26.656965 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-05-31 21:03:26.656974 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
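The 'ok: [testbed-node-0 -> testbed-node-N(...)]' prefixes above show the run_once-plus-delegation pattern: a single host loops over every node, including testbed-manager, and sets the fact on each of them via delegate_to/delegate_facts. ceph_run_cmd and ceph_admin_command wrap the ceph CLI so that later tasks work the same on containerized and bare-metal deployments. A sketch under those assumptions (the exact command line and the registry/image variable names are not taken from this log):

- name: Set_fact ceph_run_cmd   # assumed shape for a containerized deployment
  ansible.builtin.set_fact:
    ceph_run_cmd: >-
      {{ container_binary ~ ' run --rm --net=host -v /etc/ceph:/etc/ceph:z --entrypoint=ceph '
         ~ ceph_docker_registry ~ '/' ~ ceph_docker_image ~ ':' ~ ceph_docker_image_tag
         if containerized_deployment | default(true) | bool else 'ceph' }}
  delegate_to: "{{ item }}"
  delegate_facts: true
  run_once: true
  loop: "{{ ansible_play_hosts_all + ['testbed-manager'] }}"   # matches the seven delegation targets in the log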
2025-05-31 21:03:26.656984 | orchestrator |
2025-05-31 21:03:26.656993 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-05-31 21:03:26.657002 | orchestrator | Saturday 31 May 2025 20:53:14 +0000 (0:00:02.019) 0:00:53.258 **********
2025-05-31 21:03:26.657010 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-31 21:03:26.657020 | orchestrator |
2025-05-31 21:03:26.657028 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-05-31 21:03:26.657036 | orchestrator | Saturday 31 May 2025 20:53:15 +0000 (0:00:00.829) 0:00:54.087 **********
2025-05-31 21:03:26.657051 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-31 21:03:26.657059 | orchestrator |
2025-05-31 21:03:26.657066 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-05-31 21:03:26.657074 | orchestrator | Saturday 31 May 2025 20:53:16 +0000 (0:00:01.025) 0:00:55.113 **********
2025-05-31 21:03:26.657082 | orchestrator | ok: [testbed-node-0]
2025-05-31 21:03:26.657090 | orchestrator | ok: [testbed-node-1]
2025-05-31 21:03:26.657098 | orchestrator | skipping: [testbed-node-3]
2025-05-31 21:03:26.657105 | orchestrator | skipping: [testbed-node-4]
2025-05-31 21:03:26.657113 | orchestrator | ok: [testbed-node-2]
2025-05-31 21:03:26.657125 | orchestrator | skipping: [testbed-node-5]
2025-05-31 21:03:26.657133 | orchestrator |
2025-05-31 21:03:26.657141 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-05-31 21:03:26.657149 | orchestrator | Saturday 31 May 2025 20:53:17 +0000 (0:00:01.172) 0:00:56.285 **********
2025-05-31 21:03:26.657157 | orchestrator | skipping: [testbed-node-0]
2025-05-31 21:03:26.657165 | orchestrator | skipping: [testbed-node-1]
2025-05-31 21:03:26.657173 | orchestrator | skipping: [testbed-node-2]
2025-05-31 21:03:26.657181 | orchestrator | ok: [testbed-node-3]
2025-05-31 21:03:26.657188 | orchestrator | ok: [testbed-node-4]
2025-05-31 21:03:26.657196 | orchestrator | ok: [testbed-node-5]
2025-05-31 21:03:26.657204 | orchestrator |
2025-05-31 21:03:26.657212 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-05-31 21:03:26.657220 | orchestrator | Saturday 31 May 2025 20:53:19 +0000 (0:00:01.753) 0:00:58.039 **********
2025-05-31 21:03:26.657227 | orchestrator | skipping: [testbed-node-0]
2025-05-31 21:03:26.657235 | orchestrator | skipping: [testbed-node-1]
2025-05-31 21:03:26.657243 | orchestrator | skipping: [testbed-node-2]
2025-05-31 21:03:26.657251 | orchestrator | ok: [testbed-node-4]
2025-05-31 21:03:26.657259 | orchestrator | ok: [testbed-node-3]
2025-05-31 21:03:26.657267 | orchestrator | ok: [testbed-node-5]
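check_running_containers.yml probes each host for the daemon containers its role should be running, which is why the mon check ran on testbed-node-0/1/2 and the osd/mds checks ran on testbed-node-3/4/5 while the others were skipped. A minimal sketch of one such probe (the container name pattern and the register name are assumptions):

- name: Check for a mon container
  ansible.builtin.command: "{{ container_binary }} ps -q --filter name=ceph-mon-{{ ansible_facts['hostname'] }}"
  register: ceph_mon_container_stat
  changed_when: false     # a read-only probe, so it never reports 'changed'
  failed_when: false      # an absent container is a valid result, not an error
  check_mode: false
  when: inventory_hostname in groups.get('mons', [])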
2025-05-31 21:03:26.657274 | orchestrator |
2025-05-31 21:03:26.657282 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-05-31 21:03:26.657290 | orchestrator | Saturday 31 May 2025 20:53:20 +0000 (0:00:01.115) 0:00:59.154 **********
2025-05-31 21:03:26.657298 | orchestrator | skipping: [testbed-node-0]
2025-05-31 21:03:26.657306 | orchestrator | skipping: [testbed-node-1]
2025-05-31 21:03:26.657314 | orchestrator | skipping: [testbed-node-2]
2025-05-31 21:03:26.657321 | orchestrator | ok: [testbed-node-3]
2025-05-31 21:03:26.657329 | orchestrator | ok: [testbed-node-4]
2025-05-31 21:03:26.657337 | orchestrator | ok: [testbed-node-5]
2025-05-31 21:03:26.657345 | orchestrator |
2025-05-31 21:03:26.657352 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-05-31 21:03:26.657360 | orchestrator | Saturday 31 May 2025 20:53:21 +0000 (0:00:01.090) 0:01:00.245 **********
2025-05-31 21:03:26.657368 | orchestrator | ok: [testbed-node-0]
2025-05-31 21:03:26.657376 | orchestrator | skipping: [testbed-node-3]
2025-05-31 21:03:26.657384 | orchestrator | ok: [testbed-node-1]
2025-05-31 21:03:26.657392 | orchestrator | skipping: [testbed-node-4]
2025-05-31 21:03:26.657399 | orchestrator | ok: [testbed-node-2]
2025-05-31 21:03:26.657407 | orchestrator | skipping: [testbed-node-5]
2025-05-31 21:03:26.657415 | orchestrator |
2025-05-31 21:03:26.657423 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-05-31 21:03:26.657431 | orchestrator | Saturday 31 May 2025 20:53:23 +0000 (0:00:01.353) 0:01:01.599 **********
2025-05-31 21:03:26.657465 | orchestrator | skipping: [testbed-node-0]
2025-05-31 21:03:26.657475 | orchestrator | skipping: [testbed-node-1]
2025-05-31 21:03:26.657518 | orchestrator | skipping: [testbed-node-2]
2025-05-31 21:03:26.657527 | orchestrator | skipping: [testbed-node-3]
2025-05-31 21:03:26.657535 | orchestrator | skipping: [testbed-node-4]
2025-05-31 21:03:26.657543 | orchestrator | skipping: [testbed-node-5]
2025-05-31 21:03:26.657557 | orchestrator |
2025-05-31 21:03:26.657565 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-05-31 21:03:26.657573 | orchestrator | Saturday 31 May 2025 20:53:23 +0000 (0:00:00.748) 0:01:02.347 **********
2025-05-31 21:03:26.657581 | orchestrator | skipping: [testbed-node-0]
2025-05-31 21:03:26.657589 | orchestrator | skipping: [testbed-node-1]
2025-05-31 21:03:26.657596 | orchestrator | skipping: [testbed-node-2]
2025-05-31 21:03:26.657604 | orchestrator | skipping: [testbed-node-3]
2025-05-31 21:03:26.657612 | orchestrator | skipping: [testbed-node-4]
2025-05-31 21:03:26.657620 | orchestrator | skipping: [testbed-node-5]
2025-05-31 21:03:26.657628 | orchestrator |
2025-05-31 21:03:26.657635 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-05-31 21:03:26.657643 | orchestrator | Saturday 31 May 2025 20:53:24 +0000 (0:00:01.071) 0:01:03.418 **********
2025-05-31 21:03:26.657651 | orchestrator | ok: [testbed-node-0]
2025-05-31 21:03:26.657659 | orchestrator | ok: [testbed-node-1]
2025-05-31 21:03:26.657667 | ororchestrator | ok: [testbed-node-2]
2025-05-31 21:03:26.657674 | orchestrator | ok: [testbed-node-3]
2025-05-31 21:03:26.657682 | orchestrator | ok: [testbed-node-4]
2025-05-31 21:03:26.657690 | orchestrator | ok: [testbed-node-5]
2025-05-31 21:03:26.657698 | orchestrator |
2025-05-31 21:03:26.657705 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-05-31 21:03:26.657713 | orchestrator | Saturday 31 May 2025 20:53:26 +0000 (0:00:01.088) 0:01:04.507 **********
2025-05-31 21:03:26.657721 | orchestrator | ok: [testbed-node-0]
2025-05-31 21:03:26.657729 | orchestrator | ok: [testbed-node-1]
2025-05-31 21:03:26.657737 | orchestrator | ok: [testbed-node-2]
2025-05-31 21:03:26.657744 | orchestrator | ok: [testbed-node-3]
2025-05-31 21:03:26.657752 | orchestrator | ok: [testbed-node-4]
2025-05-31 21:03:26.657760 | orchestrator | ok: [testbed-node-5]
2025-05-31 21:03:26.657768 | orchestrator |
2025-05-31 21:03:26.657775 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-05-31 21:03:26.657783 | orchestrator | Saturday 31 May 2025 20:53:27 +0000 (0:00:01.529) 0:01:06.037 **********
2025-05-31 21:03:26.657791 | orchestrator | skipping: [testbed-node-0]
2025-05-31 21:03:26.657799 | orchestrator | skipping: [testbed-node-1]
2025-05-31 21:03:26.657807 | orchestrator | skipping: [testbed-node-2]
2025-05-31 21:03:26.657815 | orchestrator | skipping: [testbed-node-3]
2025-05-31 21:03:26.657822 | orchestrator | skipping: [testbed-node-4]
2025-05-31 21:03:26.657830 | orchestrator | skipping: [testbed-node-5]
2025-05-31 21:03:26.657838 | orchestrator |
2025-05-31 21:03:26.657846 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-05-31 21:03:26.657854 | orchestrator | Saturday 31 May 2025 20:53:28 +0000 (0:00:00.573) 0:01:06.611 **********
2025-05-31 21:03:26.657878 | orchestrator | ok: [testbed-node-0]
2025-05-31 21:03:26.657886 | orchestrator | ok: [testbed-node-1]
2025-05-31 21:03:26.657893 | orchestrator | ok: [testbed-node-2]
2025-05-31 21:03:26.657901 | orchestrator | skipping: [testbed-node-3]
2025-05-31 21:03:26.657909 | orchestrator | skipping: [testbed-node-4]
2025-05-31 21:03:26.657917 | orchestrator | skipping: [testbed-node-5]
2025-05-31 21:03:26.657924 | orchestrator |
2025-05-31 21:03:26.657932 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-05-31 21:03:26.657940 | orchestrator | Saturday 31 May 2025 20:53:28 +0000 (0:00:00.820) 0:01:07.431 **********
2025-05-31 21:03:26.657948 | orchestrator | skipping: [testbed-node-0]
2025-05-31 21:03:26.657956 | orchestrator | skipping: [testbed-node-1]
2025-05-31 21:03:26.657963 | orchestrator | skipping: [testbed-node-2]
2025-05-31 21:03:26.657971 | orchestrator | ok: [testbed-node-3]
2025-05-31 21:03:26.657983 | orchestrator | ok: [testbed-node-4]
2025-05-31 21:03:26.657991 | orchestrator | ok: [testbed-node-5]
2025-05-31 21:03:26.657999 | orchestrator |
2025-05-31 21:03:26.658007 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-05-31 21:03:26.658050 | orchestrator | Saturday 31 May 2025 20:53:29 +0000 (0:00:00.724) 0:01:08.155 **********
2025-05-31 21:03:26.658061 | orchestrator | skipping: [testbed-node-0]
2025-05-31 21:03:26.658075 | orchestrator | skipping: [testbed-node-1]
2025-05-31 21:03:26.658083 | orchestrator | skipping: [testbed-node-2]
2025-05-31 21:03:26.658091 | orchestrator | ok: [testbed-node-3]
2025-05-31 21:03:26.658098 | orchestrator | ok: [testbed-node-4]
2025-05-31 21:03:26.658106 | orchestrator | ok: [testbed-node-5]
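The handler_*_status facts condense the container probes above into booleans that later handlers consult before restarting a daemon; as with the probes, the mon/mgr facts are computed on the control nodes and the osd/mds/rgw facts on the storage nodes. A plausible sketch for the OSD case (the registered variable name is an assumption carried over from the probe sketch):

- name: Set_fact handler_osd_status
  ansible.builtin.set_fact:
    handler_osd_status: "{{ (ceph_osd_container_stat.stdout_lines | default([])) | length > 0 }}"   # true only if an OSD container is running
  when: inventory_hostname in groups.get('osds', [])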
2025-05-31 21:03:26.658114 | orchestrator |
2025-05-31 21:03:26.658122 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-05-31 21:03:26.658130 | orchestrator | Saturday 31 May 2025 20:53:30 +0000 (0:00:00.845) 0:01:09.000 **********
2025-05-31 21:03:26.658138 | orchestrator | skipping: [testbed-node-0]
2025-05-31 21:03:26.658146 | orchestrator | skipping: [testbed-node-1]
2025-05-31 21:03:26.658154 | orchestrator | skipping: [testbed-node-2]
2025-05-31 21:03:26.658161 | orchestrator | ok: [testbed-node-3]
2025-05-31 21:03:26.658169 | orchestrator | ok: [testbed-node-4]
2025-05-31 21:03:26.658177 | orchestrator | ok: [testbed-node-5]
2025-05-31 21:03:26.658185 | orchestrator |
2025-05-31 21:03:26.658193 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-05-31 21:03:26.658201 | orchestrator | Saturday 31 May 2025 20:53:31 +0000 (0:00:00.638) 0:01:09.638 **********
2025-05-31 21:03:26.658209 | orchestrator | skipping: [testbed-node-0]
2025-05-31 21:03:26.658217 | orchestrator | skipping: [testbed-node-1]
2025-05-31 21:03:26.658224 | orchestrator | skipping: [testbed-node-2]
2025-05-31 21:03:26.658233 | orchestrator | skipping: [testbed-node-3]
2025-05-31 21:03:26.658240 | orchestrator | skipping: [testbed-node-4]
2025-05-31 21:03:26.658248 | orchestrator | skipping: [testbed-node-5]
2025-05-31 21:03:26.658256 | orchestrator |
2025-05-31 21:03:26.658264 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-05-31 21:03:26.658272 | orchestrator | Saturday 31 May 2025 20:53:32 +0000 (0:00:00.891) 0:01:10.529 **********
2025-05-31 21:03:26.658279 | orchestrator | skipping: [testbed-node-0]
2025-05-31 21:03:26.658288 | orchestrator | skipping: [testbed-node-1]
2025-05-31 21:03:26.658295 | orchestrator | skipping: [testbed-node-2]
2025-05-31 21:03:26.658303 | orchestrator | skipping: [testbed-node-3]
2025-05-31 21:03:26.658311 | orchestrator | skipping: [testbed-node-4]
2025-05-31 21:03:26.658319 | orchestrator | skipping: [testbed-node-5]
2025-05-31 21:03:26.658326 | orchestrator |
2025-05-31 21:03:26.658334 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-05-31 21:03:26.658379 | orchestrator | Saturday 31 May 2025 20:53:32 +0000 (0:00:00.553) 0:01:11.083 **********
2025-05-31 21:03:26.658389 | orchestrator | ok: [testbed-node-0]
2025-05-31 21:03:26.658397 | orchestrator | ok: [testbed-node-1]
2025-05-31 21:03:26.658405 | orchestrator | ok: [testbed-node-2]
2025-05-31 21:03:26.658413 | orchestrator | skipping: [testbed-node-3]
2025-05-31 21:03:26.658421 | orchestrator | skipping: [testbed-node-4]
2025-05-31 21:03:26.658429 | orchestrator | skipping: [testbed-node-5]
2025-05-31 21:03:26.658436 | orchestrator |
2025-05-31 21:03:26.658444 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-05-31 21:03:26.658452 | orchestrator | Saturday 31 May 2025 20:53:33 +0000 (0:00:00.777) 0:01:11.860 **********
2025-05-31 21:03:26.658460 | orchestrator | ok: [testbed-node-0]
2025-05-31 21:03:26.658467 | orchestrator | ok: [testbed-node-1]
2025-05-31 21:03:26.658475 | orchestrator | ok: [testbed-node-2]
2025-05-31 21:03:26.658483 | orchestrator | ok: [testbed-node-3]
2025-05-31 21:03:26.658491 | orchestrator | ok: [testbed-node-4]
2025-05-31 21:03:26.658499 | orchestrator | ok: [testbed-node-5]
2025-05-31 21:03:26.658506 | orchestrator |
2025-05-31 21:03:26.658514 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-05-31 21:03:26.658522 | orchestrator | Saturday 31 May 2025 20:53:34 +0000 (0:00:00.601) 0:01:12.462 **********
2025-05-31 21:03:26.658530 | orchestrator | ok: [testbed-node-0]
2025-05-31 21:03:26.658537 | orchestrator | ok: [testbed-node-1]
2025-05-31 21:03:26.658545 | orchestrator | ok: [testbed-node-2]
2025-05-31 21:03:26.658553 | orchestrator | ok: [testbed-node-3]
2025-05-31 21:03:26.658561 | orchestrator | ok: [testbed-node-4]
2025-05-31 21:03:26.658578 | orchestrator | ok: [testbed-node-5]
2025-05-31 21:03:26.658586 | orchestrator |
2025-05-31 21:03:26.658594 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2025-05-31 21:03:26.658601 | orchestrator | Saturday 31 May 2025 20:53:35 +0000 (0:00:01.230) 0:01:13.692 **********
2025-05-31 21:03:26.658609 | orchestrator | changed: [testbed-node-0]
2025-05-31 21:03:26.658617 | orchestrator | changed: [testbed-node-2]
2025-05-31 21:03:26.658625 | orchestrator | changed: [testbed-node-1]
2025-05-31 21:03:26.658633 | orchestrator | changed: [testbed-node-3]
2025-05-31 21:03:26.658641 | orchestrator | changed: [testbed-node-4]
2025-05-31 21:03:26.658649 | orchestrator | changed: [testbed-node-5]
2025-05-31 21:03:26.658657 | orchestrator |
2025-05-31 21:03:26.658665 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2025-05-31 21:03:26.658673 | orchestrator | Saturday 31 May 2025 20:53:36 +0000 (0:00:01.629) 0:01:15.321 **********
2025-05-31 21:03:26.658680 | orchestrator | changed: [testbed-node-2]
2025-05-31 21:03:26.658688 | orchestrator | changed: [testbed-node-1]
2025-05-31 21:03:26.658696 | orchestrator | changed: [testbed-node-3]
2025-05-31 21:03:26.658703 | orchestrator | changed: [testbed-node-4]
2025-05-31 21:03:26.658711 | orchestrator | changed: [testbed-node-0]
2025-05-31 21:03:26.658719 | orchestrator | changed: [testbed-node-5]
2025-05-31 21:03:26.658726 | orchestrator |
2025-05-31 21:03:26.658734 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2025-05-31 21:03:26.658742 | orchestrator | Saturday 31 May 2025 20:53:38 +0000 (0:00:01.844) 0:01:17.166 **********
2025-05-31 21:03:26.658750 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-31 21:03:26.658758 | orchestrator |
2025-05-31 21:03:26.658766 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2025-05-31 21:03:26.658788 | orchestrator | Saturday 31 May 2025 20:53:39 +0000 (0:00:01.171) 0:01:18.338 **********
2025-05-31 21:03:26.658796 | orchestrator | skipping: [testbed-node-0]
2025-05-31 21:03:26.658804 | orchestrator | skipping: [testbed-node-1]
2025-05-31 21:03:26.658812 | orchestrator | skipping: [testbed-node-2]
2025-05-31 21:03:26.658820 | orchestrator | skipping: [testbed-node-3]
2025-05-31 21:03:26.658832 | orchestrator | skipping: [testbed-node-4]
2025-05-31 21:03:26.658840 | orchestrator | skipping: [testbed-node-5]
2025-05-31 21:03:26.658848 | orchestrator |
2025-05-31 21:03:26.658871 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2025-05-31 21:03:26.658880 | orchestrator | Saturday 31 May 2025 20:53:40 +0000 (0:00:00.880) 0:01:19.219 **********
2025-05-31 21:03:26.658923 | orchestrator | skipping: [testbed-node-0]
2025-05-31 21:03:26.658932 | orchestrator | skipping: [testbed-node-1]
2025-05-31 21:03:26.658939 | orchestrator | skipping: [testbed-node-2]
2025-05-31 21:03:26.658947 | orchestrator | skipping: [testbed-node-3]
2025-05-31 21:03:26.658955 | orchestrator | skipping: [testbed-node-4]
2025-05-31 21:03:26.658963 | orchestrator | skipping: [testbed-node-5]
2025-05-31 21:03:26.658971 | orchestrator |
2025-05-31 21:03:26.658979 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2025-05-31 21:03:26.658987 | orchestrator | Saturday 31 May 2025 20:53:41 +0000 (0:00:00.713) 0:01:19.933 **********
2025-05-31 21:03:26.658995 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-05-31 21:03:26.659003 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-05-31 21:03:26.659011 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-05-31 21:03:26.659019 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-05-31 21:03:26.659026 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-05-31 21:03:26.659034 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-05-31 21:03:26.659042 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-05-31 21:03:26.659056 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-05-31 21:03:26.659064 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-05-31 21:03:26.659072 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-05-31 21:03:26.659080 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-05-31 21:03:26.659088 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-05-31 21:03:26.659096 | orchestrator |
2025-05-31 21:03:26.659132 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2025-05-31 21:03:26.659142 | orchestrator | Saturday 31 May 2025 20:53:43 +0000 (0:00:01.618) 0:01:21.552 **********
2025-05-31 21:03:26.659150 | orchestrator | changed: [testbed-node-0]
2025-05-31 21:03:26.659158 | orchestrator | changed: [testbed-node-1]
2025-05-31 21:03:26.659165 | orchestrator | changed: [testbed-node-2]
2025-05-31 21:03:26.659173 | orchestrator | changed: [testbed-node-3]
2025-05-31 21:03:26.659181 | orchestrator | changed: [testbed-node-4]
2025-05-31 21:03:26.659189 | orchestrator | changed: [testbed-node-5]
2025-05-31 21:03:26.659196 | orchestrator |
2025-05-31 21:03:26.659204 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2025-05-31 21:03:26.659212 | orchestrator | Saturday 31 May 2025 20:53:44 +0000 (0:00:00.915) 0:01:22.467 **********
2025-05-31 21:03:26.659224 | orchestrator | skipping: [testbed-node-0]
2025-05-31 21:03:26.659237 | orchestrator | skipping: [testbed-node-1]
2025-05-31 21:03:26.659246 | orchestrator | skipping: [testbed-node-2]
2025-05-31 21:03:26.659253 | orchestrator | skipping: [testbed-node-3]
2025-05-31 21:03:26.659261 | orchestrator | skipping: [testbed-node-4]
2025-05-31 21:03:26.659274 | orchestrator | skipping: [testbed-node-5]
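ceph-container-common lays the systemd groundwork on all six nodes here: the generated ceph.target (reported changed above) gives every ceph service a common anchor to start and stop against, and the tmpfiles.d entry (also changed) recreates the runtime directory at boot. A sketch of plausible shapes for these tasks; the actual unit text and tmpfiles line shipped by the role may differ:

- name: Generate systemd ceph target file
  ansible.builtin.copy:
    dest: /etc/systemd/system/ceph.target
    content: |
      [Unit]
      Description=ceph target allowing to start/stop all ceph services at once

      [Install]
      WantedBy=multi-user.target
    mode: "0644"

- name: Enable ceph.target
  ansible.builtin.systemd:
    name: ceph.target
    enabled: true
    state: started
    daemon_reload: true

- name: Ensure tmpfiles.d is present
  ansible.builtin.lineinfile:
    path: /etc/tmpfiles.d/ceph-common.conf   # assumed file name
    line: "d /run/ceph 0770 root root -"     # assumed tmpfiles entry
    create: true
    mode: "0644"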
2025-05-31 21:03:26.659286 | orchestrator |
2025-05-31 21:03:26.659294 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2025-05-31 21:03:26.659302 | orchestrator | Saturday 31 May 2025 20:53:44 +0000 (0:00:00.799) 0:01:23.267 **********
2025-05-31 21:03:26.659312 | orchestrator | skipping: [testbed-node-0]
2025-05-31 21:03:26.659326 | orchestrator | skipping: [testbed-node-1]
2025-05-31 21:03:26.659335 | orchestrator | skipping: [testbed-node-2]
2025-05-31 21:03:26.659345 | orchestrator | skipping: [testbed-node-3]
2025-05-31 21:03:26.659358 | orchestrator | skipping: [testbed-node-4]
2025-05-31 21:03:26.659367 | orchestrator | skipping: [testbed-node-5]
2025-05-31 21:03:26.659374 | orchestrator |
2025-05-31 21:03:26.659382 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2025-05-31 21:03:26.659390 | orchestrator | Saturday 31 May 2025 20:53:45 +0000 (0:00:00.583) 0:01:23.851 **********
2025-05-31 21:03:26.659398 | orchestrator | skipping: [testbed-node-0]
2025-05-31 21:03:26.659405 | orchestrator | skipping: [testbed-node-1]
2025-05-31 21:03:26.659413 | orchestrator | skipping: [testbed-node-2]
2025-05-31 21:03:26.659421 | orchestrator | skipping: [testbed-node-3]
2025-05-31 21:03:26.659428 | orchestrator | skipping: [testbed-node-4]
2025-05-31 21:03:26.659436 | orchestrator | skipping: [testbed-node-5]
2025-05-31 21:03:26.659444 | orchestrator |
2025-05-31 21:03:26.659451 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2025-05-31 21:03:26.659459 | orchestrator | Saturday 31 May 2025 20:53:46 +0000 (0:00:00.836) 0:01:24.687 **********
2025-05-31 21:03:26.659467 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-31 21:03:26.659475 | orchestrator |
2025-05-31 21:03:26.659483 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2025-05-31 21:03:26.659491 | orchestrator | Saturday 31 May 2025 20:53:47 +0000 (0:00:01.185) 0:01:25.872 **********
2025-05-31 21:03:26.659499 | orchestrator | ok: [testbed-node-4]
2025-05-31 21:03:26.659507 | orchestrator | ok: [testbed-node-5]
2025-05-31 21:03:26.659520 | orchestrator | ok: [testbed-node-3]
2025-05-31 21:03:26.659528 | orchestrator | ok: [testbed-node-2]
2025-05-31 21:03:26.659535 | orchestrator | ok: [testbed-node-1]
2025-05-31 21:03:26.659543 | orchestrator | ok: [testbed-node-0]
2025-05-31 21:03:26.659551 | orchestrator |
2025-05-31 21:03:26.659559 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2025-05-31 21:03:26.659571 | orchestrator | Saturday 31 May 2025 20:54:48 +0000 (0:01:01.491) 0:02:27.364 **********
2025-05-31 21:03:26.659579 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-05-31 21:03:26.659587 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2025-05-31 21:03:26.659594 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
Saturday 31 May 2025 20:54:48 +0000 (0:01:01.491) 0:02:27.364 **********
skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
skipping: [testbed-node-2]
skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
skipping: [testbed-node-5]

TASK [ceph-container-common : Pulling node-exporter container image] ***********
Saturday 31 May 2025 20:54:49 +0000 (0:00:00.872) 0:02:28.236 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-container-common : Export local ceph dev image] *********************
Saturday 31 May 2025 20:54:50 +0000 (0:00:00.601) 0:02:28.838 **********
skipping: [testbed-node-0]

TASK [ceph-container-common : Copy ceph dev image file] ************************
Saturday 31 May 2025 20:54:50 +0000 (0:00:00.254) 0:02:29.092 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-container-common : Load ceph dev image] *****************************
Saturday 31 May 2025 20:54:51 +0000 (0:00:01.183) 0:02:30.276 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
Saturday 31 May 2025 20:54:52 +0000 (0:00:00.776) 0:02:31.052 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-container-common : Get ceph version] ********************************
Saturday 31 May 2025 20:54:53 +0000 (0:00:01.246) 0:02:32.298 **********
ok: [testbed-node-0]
ok: [testbed-node-2]
ok: [testbed-node-1]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
Saturday 31 May 2025 20:54:55 +0000 (0:00:02.119) 0:02:34.417 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
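"Get ceph version" runs ceph --version via the freshly pulled image, and the follow-up set_fact (its task name gives the parsing away) keeps only the numeric version field. A sketch under those assumptions:

    - name: Get ceph version (sketch)
      ansible.builtin.command: >-
        {{ container_binary }} run --rm --entrypoint /usr/bin/ceph
        {{ ceph_docker_registry }}/{{ ceph_docker_image }}:{{ ceph_docker_image_tag }}
        --version
      register: ceph_version
      changed_when: false

    # Output looks like "ceph version 18.2.x (...) reef (stable)";
    # the third whitespace-separated field is the numeric version.
    - name: Set_fact ceph_version ceph_version.stdout.split (sketch)
      ansible.builtin.set_fact:
        ceph_version: "{{ ceph_version.stdout.split(' ')[2] }}"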
TASK [ceph-container-common : Include release.yml] *****************************
Saturday 31 May 2025 20:54:56 +0000 (0:00:00.716) 0:02:35.134 **********
included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
Saturday 31 May 2025 20:54:58 +0000 (0:00:01.306) 0:02:36.441 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
Saturday 31 May 2025 20:54:58 +0000 (0:00:00.583) 0:02:37.024 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
Saturday 31 May 2025 20:54:59 +0000 (0:00:00.686) 0:02:37.711 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
Saturday 31 May 2025 20:54:59 +0000 (0:00:00.652) 0:02:38.364 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
Saturday 31 May 2025 20:55:00 +0000 (0:00:00.740) 0:02:39.104 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
Saturday 31 May 2025 20:55:01 +0000 (0:00:00.619) 0:02:39.724 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
Saturday 31 May 2025 20:55:02 +0000 (0:00:00.834) 0:02:40.558 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
Saturday 31 May 2025 20:55:02 +0000 (0:00:00.811) 0:02:41.370 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-container-common : Set_fact ceph_release reef] **********************
Saturday 31 May 2025 20:55:03 +0000 (0:00:00.782) 0:02:42.152 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
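release.yml is a ladder of per-release set_fact tasks, each gated on the major version just parsed; only the reef task (Ceph 18.x) matches here, which is why every earlier release is skipped. The pattern, condensed to one representative task (a sketch, not the upstream file):

    - name: Set_fact ceph_release reef (sketch of the per-release pattern)
      ansible.builtin.set_fact:
        ceph_release: reef
      when: ceph_version.split('.')[0] is version('18', '==')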
TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
Saturday 31 May 2025 20:55:04 +0000 (0:00:01.149) 0:02:43.304 **********
included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-config : Create ceph initial directories] ***************************
Saturday 31 May 2025 20:55:05 +0000 (0:00:00.996) 0:02:44.300 **********
changed: [testbed-node-0] => (item=/etc/ceph)
changed: [testbed-node-1] => (item=/etc/ceph)
changed: [testbed-node-2] => (item=/etc/ceph)
changed: [testbed-node-3] => (item=/etc/ceph)
changed: [testbed-node-0] => (item=/var/lib/ceph/)
changed: [testbed-node-4] => (item=/etc/ceph)
changed: [testbed-node-2] => (item=/var/lib/ceph/)
changed: [testbed-node-1] => (item=/var/lib/ceph/)
changed: [testbed-node-5] => (item=/etc/ceph)
changed: [testbed-node-3] => (item=/var/lib/ceph/)
changed: [testbed-node-0] => (item=/var/lib/ceph/mon)
changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
changed: [testbed-node-4] => (item=/var/lib/ceph/)
changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
changed: [testbed-node-5] => (item=/var/lib/ceph/)
changed: [testbed-node-3] => (item=/var/lib/ceph/mon)
changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
changed: [testbed-node-2] => (item=/var/lib/ceph/osd)
changed: [testbed-node-4] => (item=/var/lib/ceph/mon)
changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
changed: [testbed-node-5] => (item=/var/lib/ceph/mon)
changed: [testbed-node-3] => (item=/var/lib/ceph/osd)
changed: [testbed-node-2] => (item=/var/lib/ceph/mds)
changed: [testbed-node-4] => (item=/var/lib/ceph/osd)
changed: [testbed-node-1] => (item=/var/lib/ceph/mds)
changed: [testbed-node-5] => (item=/var/lib/ceph/osd)
changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
changed: [testbed-node-0] => (item=/var/lib/ceph/mds)
changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
changed: [testbed-node-2] => (item=/var/lib/ceph/tmp)
changed: [testbed-node-1] => (item=/var/lib/ceph/tmp)
changed: [testbed-node-5] => (item=/var/lib/ceph/mds)
changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
changed: [testbed-node-0] => (item=/var/lib/ceph/tmp)
changed: [testbed-node-4] => (item=/var/lib/ceph/tmp)
changed: [testbed-node-2] => (item=/var/lib/ceph/crash)
changed: [testbed-node-5] => (item=/var/lib/ceph/tmp)
changed: [testbed-node-1] => (item=/var/lib/ceph/crash)
changed: [testbed-node-3] => (item=/var/lib/ceph/crash)
changed: [testbed-node-0] => (item=/var/lib/ceph/crash)
changed: [testbed-node-4] => (item=/var/lib/ceph/crash)
changed: [testbed-node-5] => (item=/var/lib/ceph/crash)
changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
changed: [testbed-node-3] => (item=/var/run/ceph)
changed: [testbed-node-1] => (item=/var/run/ceph)
changed: [testbed-node-2] => (item=/var/run/ceph)
changed: [testbed-node-4] => (item=/var/run/ceph)
changed: [testbed-node-5] => (item=/var/run/ceph)
changed: [testbed-node-0] => (item=/var/run/ceph)
changed: [testbed-node-3] => (item=/var/log/ceph)
changed: [testbed-node-1] => (item=/var/log/ceph)
changed: [testbed-node-2] => (item=/var/log/ceph)
changed: [testbed-node-4] => (item=/var/log/ceph)
changed: [testbed-node-5] => (item=/var/log/ceph)
changed: [testbed-node-0] => (item=/var/log/ceph)
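All of that fan-out is a single file task looping over the directory skeleton that containerized Ceph expects on every host. A sketch using exactly the paths from the log (the ownership values are an assumption; the ceph container user is commonly uid/gid 167):

    - name: Create ceph initial directories (sketch)
      ansible.builtin.file:
        path: "{{ item }}"
        state: directory
        owner: "167"   # assumed ceph container uid
        group: "167"   # assumed ceph container gid
        mode: "0755"
      loop:
        - /etc/ceph
        - /var/lib/ceph/
        - /var/lib/ceph/mon
        - /var/lib/ceph/osd
        - /var/lib/ceph/mds
        - /var/lib/ceph/tmp
        - /var/lib/ceph/crash
        - /var/lib/ceph/radosgw
        - /var/lib/ceph/bootstrap-rgw
        - /var/lib/ceph/bootstrap-mgr
        - /var/lib/ceph/bootstrap-mds
        - /var/lib/ceph/bootstrap-osd
        - /var/lib/ceph/bootstrap-rbd
        - /var/lib/ceph/bootstrap-rbd-mirror
        - /var/run/ceph
        - /var/log/ceph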

TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
Saturday 31 May 2025 20:55:11 +0000 (0:00:06.100) 0:02:50.400 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-config : Create rados gateway instance directories] *****************
Saturday 31 May 2025 20:55:12 +0000 (0:00:01.027) 0:02:51.428 **********
changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})

TASK [ceph-config : Generate environment file] *********************************
Saturday 31 May 2025 20:55:13 +0000 (0:00:00.677) 0:02:52.105 **********
changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
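rgw_systemd_environment_file.yml only runs on the rgw hosts (testbed-node-3..5). Per instance it creates a data directory and a small systemd EnvironmentFile; a sketch assuming the rgw_instances entries visible in the log and a cluster name variable (both assumptions, not quoted from upstream):

    - name: Create rados gateway instance directories (sketch)
      ansible.builtin.file:
        path: "/var/lib/ceph/radosgw/{{ cluster }}-rgw.{{ ansible_facts['hostname'] }}.{{ item.instance_name }}"
        state: directory
      loop: "{{ rgw_instances }}"

    - name: Generate environment file (sketch)
      ansible.builtin.copy:
        dest: "/var/lib/ceph/radosgw/{{ cluster }}-rgw.{{ ansible_facts['hostname'] }}.{{ item.instance_name }}/EnvironmentFile"
        content: |
          INST_NAME={{ item.instance_name }}
      loop: "{{ rgw_instances }}"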

TASK [ceph-config : Reset num_osds] ********************************************
Saturday 31 May 2025 20:55:15 +0000 (0:00:01.563) 0:02:53.669 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-config : Count number of osds for lvm scenario] *********************
Saturday 31 May 2025 20:55:15 +0000 (0:00:00.665) 0:02:54.338 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
Saturday 31 May 2025 20:55:16 +0000 (0:00:00.927) 0:02:55.265 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-config : Set_fact rejected_devices] *********************************
Saturday 31 May 2025 20:55:17 +0000 (0:00:00.561) 0:02:55.827 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-config : Set_fact _devices] *****************************************
Saturday 31 May 2025 20:55:18 +0000 (0:00:00.775) 0:02:56.602 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
Saturday 31 May 2025 20:55:18 +0000 (0:00:00.572) 0:02:57.175 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
Saturday 31 May 2025 20:55:19 +0000 (0:00:00.681) 0:02:57.857 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
Saturday 31 May 2025 20:55:20 +0000 (0:00:00.598) 0:02:58.455 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
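On a fresh deployment with an explicit device list, num_osds would come out of the batch report's JSON; here both report variants are skipped because the testbed's OSDs already exist, so the count is taken from ceph-volume lvm list in the next task instead. A sketch of the report path (variable names assumed):

    - name: Run 'ceph-volume lvm batch --report' (sketch)
      ansible.builtin.command: >-
        ceph-volume --cluster {{ cluster }} lvm batch --report --format=json
        {{ _devices | join(' ') }}
      register: lvm_batch_report
      changed_when: false

    - name: Set_fact num_osds from the report (sketch)
      ansible.builtin.set_fact:
        num_osds: "{{ lvm_batch_report.stdout | from_json | length }}"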

TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
Saturday 31 May 2025 20:55:20 +0000 (0:00:00.818) 0:02:59.274 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-3]

TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
Saturday 31 May 2025 20:55:23 +0000 (0:00:02.987) 0:03:02.261 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-config : Set_fact _osd_memory_target] *******************************
Saturday 31 May 2025 20:55:24 +0000 (0:00:00.735) 0:03:02.996 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
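_osd_memory_target divides a safety-factored share of host RAM across the OSDs just counted, so the daemons do not overcommit memory. The upstream expression differs in detail; as a hedged illustration of the arithmetic only:

    - name: Set_fact _osd_memory_target (illustrative arithmetic only)
      ansible.builtin.set_fact:
        _osd_memory_target: >-
          {{ (ansible_facts['memtotal_mb'] * 1048576 * 0.7
              / (num_osds | int)) | int }}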

TASK [ceph-config : Set osd_memory_target to cluster host config] **************
Saturday 31 May 2025 20:55:25 +0000 (0:00:00.613) 0:03:03.609 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-config : Render rgw configs] ****************************************
Saturday 31 May 2025 20:55:25 +0000 (0:00:00.689) 0:03:04.299 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})

TASK [ceph-config : Set config to cluster] *************************************
Saturday 31 May 2025 20:55:26 +0000 (0:00:00.595) 0:03:04.894 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
skipping: [testbed-node-5]

TASK [ceph-config : Set rgw configs to file] ***********************************
Saturday 31 May 2025 20:55:27 +0000 (0:00:00.667) 0:03:05.562 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
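Had "Set config to cluster" not been skipped, each rendered key/value pair in the items above would land either in the cluster's config database or, via the file-based task that follows, in ceph.conf. An illustration of what the file variant amounts to for node-3 (module choice and layout are assumptions; the values are taken from the log):

    - name: Set rgw configs to file (sketch)
      community.general.ini_file:
        path: /etc/ceph/ceph.conf
        section: client.rgw.default.testbed-node-3.rgw0
        option: rgw_frontends
        value: beast endpoint=192.168.16.13:8081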

TASK [ceph-config : Create ceph conf directory] ********************************
Saturday 31 May 2025 20:55:27 +0000 (0:00:00.534) 0:03:06.096 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
Saturday 31 May 2025 20:55:28 +0000 (0:00:00.648) 0:03:06.745 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
Saturday 31 May 2025 20:55:28 +0000 (0:00:00.539) 0:03:07.284 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
Saturday 31 May 2025 20:55:29 +0000 (0:00:00.746) 0:03:08.031 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
Saturday 31 May 2025 20:55:30 +0000 (0:00:00.630) 0:03:08.661 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
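ceph-facts resolves the rgw bind address by trying, in order, a CIDR block (ipv4, then ipv6), an explicit radosgw_address, and finally an interface name; in this run the explicit address wins on node-3..5. The CIDR variant is the one worth illustrating, since it has to pick the host's own IP inside the block (a sketch using the ansible.utils.ipaddr filter):

    - name: Set_fact _radosgw_address to radosgw_address_block ipv4 (sketch)
      ansible.builtin.set_fact:
        _radosgw_address: >-
          {{ ansible_facts['all_ipv4_addresses']
             | ansible.utils.ipaddr(radosgw_address_block)
             | first }}
      when: radosgw_address_block is defined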

TASK [ceph-facts : Set_fact _interface] ****************************************
Saturday 31 May 2025 20:55:31 +0000 (0:00:01.255) 0:03:09.917 **********
skipping: [testbed-node-0] => (item=testbed-node-3)
skipping: [testbed-node-0] => (item=testbed-node-4)
skipping: [testbed-node-0] => (item=testbed-node-5)
skipping: [testbed-node-0]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
Saturday 31 May 2025 20:55:31 +0000 (0:00:00.434) 0:03:10.351 **********
skipping: [testbed-node-0] => (item=testbed-node-3)
skipping: [testbed-node-0] => (item=testbed-node-4)
skipping: [testbed-node-0] => (item=testbed-node-5)
skipping: [testbed-node-0]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
Saturday 31 May 2025 20:55:32 +0000 (0:00:00.451) 0:03:10.803 **********
skipping: [testbed-node-0] => (item=testbed-node-3)
skipping: [testbed-node-0] => (item=testbed-node-4)
skipping: [testbed-node-0] => (item=testbed-node-5)
skipping: [testbed-node-0]

TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
Saturday 31 May 2025 20:55:32 +0000 (0:00:00.434) 0:03:11.238 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-facts : Set_fact rgw_instances] *************************************
Saturday 31 May 2025 20:55:33 +0000 (0:00:00.719) 0:03:11.958 **********
skipping: [testbed-node-0] => (item=0)
skipping: [testbed-node-1] => (item=0)
skipping: [testbed-node-2] => (item=0)
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
ok: [testbed-node-3] => (item=0)
ok: [testbed-node-4] => (item=0)
ok: [testbed-node-5] => (item=0)
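The fact set here is the list consumed earlier by ceph-config: one entry per instance (item=0 becomes rgw0), combining the resolved address with the base frontend port. Roughly how such a list is assembled (a sketch; variable names assumed):

    - name: Set_fact rgw_instances (sketch)
      ansible.builtin.set_fact:
        rgw_instances: >-
          {{ rgw_instances | default([])
             + [{'instance_name': 'rgw' ~ item,
                 'radosgw_address': _radosgw_address,
                 'radosgw_frontend_port': radosgw_frontend_port | int + item | int}] }}
      loop: "{{ range(0, radosgw_num_instances | default(1) | int) | list }}"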

TASK [ceph-config : Generate Ceph file] ****************************************
Saturday 31 May 2025 20:55:35 +0000 (0:00:02.030) 0:03:13.989 **********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
Saturday 31 May 2025 20:55:37 +0000 (0:00:02.400) 0:03:16.389 **********
changed: [testbed-node-1]
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Mons handler] **********************************
Saturday 31 May 2025 20:55:39 +0000 (0:00:01.155) 0:03:17.544 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2

RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
Saturday 31 May 2025 20:55:40 +0000 (0:00:01.175) 0:03:18.720 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
Saturday 31 May 2025 20:55:40 +0000 (0:00:00.388) 0:03:19.108 **********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]
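The handler blocks that follow all share one guard pattern: a _<daemon>_handler_called fact is raised before the restart, the restart script runs once per node serially from a single delegator, and the fact is lowered afterwards so a re-notified handler cannot restart the same daemons twice. Condensed to its shape (the script name and status variable are assumptions):

    - name: Set _mon_handler_called before restart (sketch)
      ansible.builtin.set_fact:
        _mon_handler_called: true

    - name: Restart ceph mon daemon(s) (sketch)
      ansible.builtin.command: >-
        /usr/bin/env bash {{ hostvars[item]['tmpdirpath']['path'] }}/restart_mon_daemon.sh
      with_items: "{{ groups[mon_group_name] }}"
      delegate_to: "{{ item }}"
      run_once: true
      when: hostvars[item]['_mon_handler_called'] | default(false) | bool

    - name: Set _mon_handler_called after restart (sketch)
      ansible.builtin.set_fact:
        _mon_handler_called: false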

RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
Saturday 31 May 2025 20:55:42 +0000 (0:00:01.876) 0:03:20.985 **********
skipping: [testbed-node-0] => (item=testbed-node-0)
skipping: [testbed-node-0] => (item=testbed-node-1)
skipping: [testbed-node-0] => (item=testbed-node-2)
skipping: [testbed-node-0]

RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
Saturday 31 May 2025 20:55:43 +0000 (0:00:00.689) 0:03:21.675 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Osds handler] **********************************
Saturday 31 May 2025 20:55:43 +0000 (0:00:00.355) 0:03:22.030 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5

RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
Saturday 31 May 2025 20:55:44 +0000 (0:00:01.008) 0:03:23.039 **********
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
Saturday 31 May 2025 20:55:45 +0000 (0:00:00.403) 0:03:23.442 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
Saturday 31 May 2025 20:55:45 +0000 (0:00:00.379) 0:03:23.822 **********
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
Saturday 31 May 2025 20:55:45 +0000 (0:00:00.307) 0:03:24.129 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Get pool list] *********************************
Saturday 31 May 2025 20:55:46 +0000 (0:00:00.367) 0:03:24.497 **********
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
Saturday 31 May 2025 20:55:46 +0000 (0:00:00.244) 0:03:24.741 **********
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
Saturday 31 May 2025 20:55:46 +0000 (0:00:00.258) 0:03:24.999 **********
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
Saturday 31 May 2025 20:55:46 +0000 (0:00:00.360) 0:03:25.360 **********
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
Saturday 31 May 2025 20:55:47 +0000 (0:00:00.267) 0:03:25.628 **********
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
Saturday 31 May 2025 20:55:47 +0000 (0:00:00.226) 0:03:25.854 **********
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
Saturday 31 May 2025 20:55:47 +0000 (0:00:00.426) 0:03:26.280 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
Saturday 31 May 2025 20:55:48 +0000 (0:00:00.327) 0:03:26.608 **********
skipping: [testbed-node-3]
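When an OSD restart is actually triggered (here everything is skipped because no OSD configuration changed), the handler brackets it with cluster-wide safety steps: turn off the balancer and pg autoscaling, restart the daemons, then restore both. The bracketing, sketched:

    - name: Disable balancer (sketch)
      ansible.builtin.command: ceph --cluster {{ cluster }} balancer off
      delegate_to: "{{ groups[mon_group_name][0] }}"
      run_once: true
      changed_when: false

    - name: Disable pg autoscale on pools (sketch)
      ansible.builtin.command: >-
        ceph --cluster {{ cluster }} osd pool set {{ item }} pg_autoscale_mode off
      loop: "{{ pool_list | default([]) }}"
      delegate_to: "{{ groups[mon_group_name][0] }}"
      run_once: true

    # ... restart the OSDs ...; afterwards the same two commands run with
    # "on" to re-enable pg autoscaling and the balancer.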
orchestrator | skipping: [testbed-node-3] 2025-05-31 21:03:26.664320 | orchestrator | 2025-05-31 21:03:26.664326 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2025-05-31 21:03:26.664333 | orchestrator | Saturday 31 May 2025 20:55:48 +0000 (0:00:00.232) 0:03:26.841 ********** 2025-05-31 21:03:26.664340 | orchestrator | skipping: [testbed-node-3] 2025-05-31 21:03:26.664346 | orchestrator | 2025-05-31 21:03:26.664353 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2025-05-31 21:03:26.664359 | orchestrator | Saturday 31 May 2025 20:55:48 +0000 (0:00:00.216) 0:03:27.058 ********** 2025-05-31 21:03:26.664366 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:03:26.664372 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:03:26.664379 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:03:26.664386 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-31 21:03:26.664392 | orchestrator | 2025-05-31 21:03:26.664399 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2025-05-31 21:03:26.664406 | orchestrator | Saturday 31 May 2025 20:55:49 +0000 (0:00:01.067) 0:03:28.125 ********** 2025-05-31 21:03:26.664412 | orchestrator | ok: [testbed-node-3] 2025-05-31 21:03:26.664419 | orchestrator | ok: [testbed-node-4] 2025-05-31 21:03:26.664426 | orchestrator | ok: [testbed-node-5] 2025-05-31 21:03:26.664432 | orchestrator | 2025-05-31 21:03:26.664439 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2025-05-31 21:03:26.664446 | orchestrator | Saturday 31 May 2025 20:55:50 +0000 (0:00:00.336) 0:03:28.462 ********** 2025-05-31 21:03:26.664452 | orchestrator | changed: [testbed-node-3] 2025-05-31 21:03:26.664459 | orchestrator | changed: [testbed-node-4] 2025-05-31 21:03:26.664466 | orchestrator | changed: [testbed-node-5] 2025-05-31 21:03:26.664472 | orchestrator | 2025-05-31 21:03:26.664479 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2025-05-31 21:03:26.664485 | orchestrator | Saturday 31 May 2025 20:55:51 +0000 (0:00:01.245) 0:03:29.707 ********** 2025-05-31 21:03:26.664492 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-31 21:03:26.664498 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-31 21:03:26.664505 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-31 21:03:26.664511 | orchestrator | skipping: [testbed-node-3] 2025-05-31 21:03:26.664518 | orchestrator | 2025-05-31 21:03:26.664524 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2025-05-31 21:03:26.664531 | orchestrator | Saturday 31 May 2025 20:55:52 +0000 (0:00:01.112) 0:03:30.819 ********** 2025-05-31 21:03:26.664538 | orchestrator | ok: [testbed-node-3] 2025-05-31 21:03:26.664544 | orchestrator | ok: [testbed-node-4] 2025-05-31 21:03:26.664551 | orchestrator | ok: [testbed-node-5] 2025-05-31 21:03:26.664557 | orchestrator | 2025-05-31 21:03:26.664564 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2025-05-31 21:03:26.664570 | orchestrator | Saturday 31 May 2025 20:55:52 +0000 (0:00:00.378) 0:03:31.198 ********** 2025-05-31 21:03:26.664577 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:03:26.664583 | 
orchestrator | skipping: [testbed-node-1] 2025-05-31 21:03:26.664590 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:03:26.664597 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-31 21:03:26.664603 | orchestrator | 2025-05-31 21:03:26.664613 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-05-31 21:03:26.664620 | orchestrator | Saturday 31 May 2025 20:55:53 +0000 (0:00:01.093) 0:03:32.292 ********** 2025-05-31 21:03:26.664627 | orchestrator | ok: [testbed-node-3] 2025-05-31 21:03:26.664633 | orchestrator | ok: [testbed-node-4] 2025-05-31 21:03:26.664640 | orchestrator | ok: [testbed-node-5] 2025-05-31 21:03:26.664650 | orchestrator | 2025-05-31 21:03:26.664657 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-05-31 21:03:26.664664 | orchestrator | Saturday 31 May 2025 20:55:54 +0000 (0:00:00.497) 0:03:32.790 ********** 2025-05-31 21:03:26.664670 | orchestrator | changed: [testbed-node-3] 2025-05-31 21:03:26.664677 | orchestrator | changed: [testbed-node-4] 2025-05-31 21:03:26.664683 | orchestrator | changed: [testbed-node-5] 2025-05-31 21:03:26.664690 | orchestrator | 2025-05-31 21:03:26.664697 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-05-31 21:03:26.664703 | orchestrator | Saturday 31 May 2025 20:55:55 +0000 (0:00:01.394) 0:03:34.184 ********** 2025-05-31 21:03:26.664710 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-31 21:03:26.664717 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-31 21:03:26.664723 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-31 21:03:26.664730 | orchestrator | skipping: [testbed-node-3] 2025-05-31 21:03:26.664736 | orchestrator | 2025-05-31 21:03:26.664743 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-05-31 21:03:26.664750 | orchestrator | Saturday 31 May 2025 20:55:56 +0000 (0:00:00.719) 0:03:34.904 ********** 2025-05-31 21:03:26.664756 | orchestrator | ok: [testbed-node-3] 2025-05-31 21:03:26.664763 | orchestrator | ok: [testbed-node-4] 2025-05-31 21:03:26.664769 | orchestrator | ok: [testbed-node-5] 2025-05-31 21:03:26.664776 | orchestrator | 2025-05-31 21:03:26.664782 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] **************************** 2025-05-31 21:03:26.664789 | orchestrator | Saturday 31 May 2025 20:55:56 +0000 (0:00:00.257) 0:03:35.161 ********** 2025-05-31 21:03:26.664796 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:03:26.664802 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:03:26.664809 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:03:26.664815 | orchestrator | skipping: [testbed-node-3] 2025-05-31 21:03:26.664822 | orchestrator | skipping: [testbed-node-4] 2025-05-31 21:03:26.664828 | orchestrator | skipping: [testbed-node-5] 2025-05-31 21:03:26.664835 | orchestrator | 2025-05-31 21:03:26.664841 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2025-05-31 21:03:26.664848 | orchestrator | Saturday 31 May 2025 20:55:57 +0000 (0:00:00.588) 0:03:35.750 ********** 2025-05-31 21:03:26.664913 | orchestrator | skipping: [testbed-node-3] 2025-05-31 21:03:26.664923 | orchestrator | skipping: [testbed-node-4] 2025-05-31 21:03:26.664930 | 
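
The mds and rgw handlers above copy a restart script to each node but skip executing it, since no configuration change notified a restart. The real script restarts one daemon at a time and waits for it to rejoin; a heavily simplified, hypothetical sketch (the unit name and polling logic are assumptions, not ceph-ansible's actual script):

#!/usr/bin/env bash
set -euo pipefail
unit="ceph-mds@$(hostname -s)"            # assumed unit name
systemctl restart "$unit"
for _ in $(seq 1 30); do                  # crude liveness poll; the real script
    systemctl is-active --quiet "$unit" && exit 0   # also checks cluster state
    sleep 2
done
echo "mds did not come back up" >&2
exit 1
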
orchestrator | skipping: [testbed-node-5] 2025-05-31 21:03:26.664936 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 21:03:26.664943 | orchestrator | 2025-05-31 21:03:26.664950 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2025-05-31 21:03:26.664956 | orchestrator | Saturday 31 May 2025 20:55:58 +0000 (0:00:00.843) 0:03:36.594 ********** 2025-05-31 21:03:26.664963 | orchestrator | ok: [testbed-node-0] 2025-05-31 21:03:26.664970 | orchestrator | ok: [testbed-node-1] 2025-05-31 21:03:26.664976 | orchestrator | ok: [testbed-node-2] 2025-05-31 21:03:26.664983 | orchestrator | 2025-05-31 21:03:26.664989 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2025-05-31 21:03:26.664996 | orchestrator | Saturday 31 May 2025 20:55:58 +0000 (0:00:00.284) 0:03:36.878 ********** 2025-05-31 21:03:26.665003 | orchestrator | changed: [testbed-node-0] 2025-05-31 21:03:26.665009 | orchestrator | changed: [testbed-node-1] 2025-05-31 21:03:26.665016 | orchestrator | changed: [testbed-node-2] 2025-05-31 21:03:26.665022 | orchestrator | 2025-05-31 21:03:26.665029 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2025-05-31 21:03:26.665036 | orchestrator | Saturday 31 May 2025 20:55:59 +0000 (0:00:01.173) 0:03:38.051 ********** 2025-05-31 21:03:26.665042 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-31 21:03:26.665049 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-31 21:03:26.665061 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-31 21:03:26.665067 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:03:26.665074 | orchestrator | 2025-05-31 21:03:26.665080 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2025-05-31 21:03:26.665087 | orchestrator | Saturday 31 May 2025 20:56:00 +0000 (0:00:00.800) 0:03:38.852 ********** 2025-05-31 21:03:26.665094 | orchestrator | ok: [testbed-node-0] 2025-05-31 21:03:26.665101 | orchestrator | ok: [testbed-node-1] 2025-05-31 21:03:26.665107 | orchestrator | ok: [testbed-node-2] 2025-05-31 21:03:26.665114 | orchestrator | 2025-05-31 21:03:26.665120 | orchestrator | PLAY [Apply role ceph-mon] ***************************************************** 2025-05-31 21:03:26.665127 | orchestrator | 2025-05-31 21:03:26.665134 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-05-31 21:03:26.665140 | orchestrator | Saturday 31 May 2025 20:56:01 +0000 (0:00:00.705) 0:03:39.558 ********** 2025-05-31 21:03:26.665147 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 21:03:26.665153 | orchestrator | 2025-05-31 21:03:26.665160 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-05-31 21:03:26.665167 | orchestrator | Saturday 31 May 2025 20:56:01 +0000 (0:00:00.522) 0:03:40.080 ********** 2025-05-31 21:03:26.665174 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 21:03:26.665180 | orchestrator | 2025-05-31 21:03:26.665187 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 
2025-05-31 21:03:26.665193 | orchestrator | Saturday 31 May 2025 20:56:02 +0000 (0:00:00.776) 0:03:40.857 ********** 2025-05-31 21:03:26.665200 | orchestrator | ok: [testbed-node-0] 2025-05-31 21:03:26.665207 | orchestrator | ok: [testbed-node-1] 2025-05-31 21:03:26.665213 | orchestrator | ok: [testbed-node-2] 2025-05-31 21:03:26.665220 | orchestrator | 2025-05-31 21:03:26.665230 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-05-31 21:03:26.665237 | orchestrator | Saturday 31 May 2025 20:56:03 +0000 (0:00:00.717) 0:03:41.575 ********** 2025-05-31 21:03:26.665244 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:03:26.665250 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:03:26.665257 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:03:26.665263 | orchestrator | 2025-05-31 21:03:26.665270 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-05-31 21:03:26.665277 | orchestrator | Saturday 31 May 2025 20:56:03 +0000 (0:00:00.326) 0:03:41.901 ********** 2025-05-31 21:03:26.665284 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:03:26.665290 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:03:26.665297 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:03:26.665303 | orchestrator | 2025-05-31 21:03:26.665310 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-05-31 21:03:26.665317 | orchestrator | Saturday 31 May 2025 20:56:03 +0000 (0:00:00.310) 0:03:42.211 ********** 2025-05-31 21:03:26.665323 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:03:26.665330 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:03:26.665337 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:03:26.665343 | orchestrator | 2025-05-31 21:03:26.665350 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-05-31 21:03:26.665356 | orchestrator | Saturday 31 May 2025 20:56:04 +0000 (0:00:00.689) 0:03:42.901 ********** 2025-05-31 21:03:26.665363 | orchestrator | ok: [testbed-node-0] 2025-05-31 21:03:26.665370 | orchestrator | ok: [testbed-node-1] 2025-05-31 21:03:26.665376 | orchestrator | ok: [testbed-node-2] 2025-05-31 21:03:26.665382 | orchestrator | 2025-05-31 21:03:26.665388 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-05-31 21:03:26.665394 | orchestrator | Saturday 31 May 2025 20:56:05 +0000 (0:00:00.722) 0:03:43.623 ********** 2025-05-31 21:03:26.665400 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:03:26.665413 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:03:26.665419 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:03:26.665425 | orchestrator | 2025-05-31 21:03:26.665432 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-05-31 21:03:26.665438 | orchestrator | Saturday 31 May 2025 20:56:05 +0000 (0:00:00.336) 0:03:43.960 ********** 2025-05-31 21:03:26.665444 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:03:26.665450 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:03:26.665456 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:03:26.665462 | orchestrator | 2025-05-31 21:03:26.665468 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-05-31 21:03:26.665494 | orchestrator | Saturday 31 May 2025 20:56:05 +0000 (0:00:00.314) 
0:03:44.274 ********** 2025-05-31 21:03:26.665502 | orchestrator | ok: [testbed-node-0] 2025-05-31 21:03:26.665508 | orchestrator | ok: [testbed-node-1] 2025-05-31 21:03:26.665515 | orchestrator | ok: [testbed-node-2] 2025-05-31 21:03:26.665531 | orchestrator | 2025-05-31 21:03:26.665537 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-05-31 21:03:26.665543 | orchestrator | Saturday 31 May 2025 20:56:06 +0000 (0:00:01.133) 0:03:45.408 ********** 2025-05-31 21:03:26.665550 | orchestrator | ok: [testbed-node-0] 2025-05-31 21:03:26.665556 | orchestrator | ok: [testbed-node-1] 2025-05-31 21:03:26.665562 | orchestrator | ok: [testbed-node-2] 2025-05-31 21:03:26.665568 | orchestrator | 2025-05-31 21:03:26.665575 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-05-31 21:03:26.665581 | orchestrator | Saturday 31 May 2025 20:56:07 +0000 (0:00:00.770) 0:03:46.178 ********** 2025-05-31 21:03:26.665587 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:03:26.665593 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:03:26.665600 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:03:26.665606 | orchestrator | 2025-05-31 21:03:26.665612 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-05-31 21:03:26.665619 | orchestrator | Saturday 31 May 2025 20:56:08 +0000 (0:00:00.342) 0:03:46.520 ********** 2025-05-31 21:03:26.665625 | orchestrator | ok: [testbed-node-0] 2025-05-31 21:03:26.665631 | orchestrator | ok: [testbed-node-1] 2025-05-31 21:03:26.665637 | orchestrator | ok: [testbed-node-2] 2025-05-31 21:03:26.665643 | orchestrator | 2025-05-31 21:03:26.665650 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-05-31 21:03:26.665656 | orchestrator | Saturday 31 May 2025 20:56:08 +0000 (0:00:00.392) 0:03:46.912 ********** 2025-05-31 21:03:26.665662 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:03:26.665669 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:03:26.665675 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:03:26.665681 | orchestrator | 2025-05-31 21:03:26.665687 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-05-31 21:03:26.665694 | orchestrator | Saturday 31 May 2025 20:56:09 +0000 (0:00:00.617) 0:03:47.530 ********** 2025-05-31 21:03:26.665700 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:03:26.665706 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:03:26.665712 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:03:26.665718 | orchestrator | 2025-05-31 21:03:26.665725 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-05-31 21:03:26.665731 | orchestrator | Saturday 31 May 2025 20:56:09 +0000 (0:00:00.311) 0:03:47.841 ********** 2025-05-31 21:03:26.665737 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:03:26.665743 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:03:26.665749 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:03:26.665756 | orchestrator | 2025-05-31 21:03:26.665762 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-05-31 21:03:26.665768 | orchestrator | Saturday 31 May 2025 20:56:09 +0000 (0:00:00.314) 0:03:48.156 ********** 2025-05-31 21:03:26.665774 | orchestrator | skipping: [testbed-node-0] 2025-05-31 
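
Each "Check for a ... container" task above probes the container runtime on the node, and the following Set_fact tasks turn the results into handler_*_status facts that gate the restart handlers. Conceptually (the exact runtime and filter are assumptions; podman or docker depending on the deployment):

# Non-empty output => a mon container is running on this host
docker ps -q --filter "name=ceph-mon-$(hostname -s)"
# which feeds something like:
[ -n "$(docker ps -q --filter name=ceph-mon-$(hostname -s))" ] && echo "handler_mon_status=true"
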
21:03:26.665780 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:03:26.665787 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:03:26.665797 | orchestrator | 2025-05-31 21:03:26.665803 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-05-31 21:03:26.665809 | orchestrator | Saturday 31 May 2025 20:56:10 +0000 (0:00:00.286) 0:03:48.443 ********** 2025-05-31 21:03:26.665816 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:03:26.665822 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:03:26.665828 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:03:26.665834 | orchestrator | 2025-05-31 21:03:26.665844 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-05-31 21:03:26.665850 | orchestrator | Saturday 31 May 2025 20:56:10 +0000 (0:00:00.555) 0:03:48.998 ********** 2025-05-31 21:03:26.665872 | orchestrator | ok: [testbed-node-0] 2025-05-31 21:03:26.665879 | orchestrator | ok: [testbed-node-1] 2025-05-31 21:03:26.665885 | orchestrator | ok: [testbed-node-2] 2025-05-31 21:03:26.665891 | orchestrator | 2025-05-31 21:03:26.665897 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-05-31 21:03:26.665903 | orchestrator | Saturday 31 May 2025 20:56:10 +0000 (0:00:00.399) 0:03:49.397 ********** 2025-05-31 21:03:26.665910 | orchestrator | ok: [testbed-node-0] 2025-05-31 21:03:26.665916 | orchestrator | ok: [testbed-node-1] 2025-05-31 21:03:26.665922 | orchestrator | ok: [testbed-node-2] 2025-05-31 21:03:26.665928 | orchestrator | 2025-05-31 21:03:26.665934 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-05-31 21:03:26.665941 | orchestrator | Saturday 31 May 2025 20:56:11 +0000 (0:00:00.384) 0:03:49.782 ********** 2025-05-31 21:03:26.665947 | orchestrator | ok: [testbed-node-0] 2025-05-31 21:03:26.665953 | orchestrator | ok: [testbed-node-1] 2025-05-31 21:03:26.665959 | orchestrator | ok: [testbed-node-2] 2025-05-31 21:03:26.665965 | orchestrator | 2025-05-31 21:03:26.665971 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2025-05-31 21:03:26.665977 | orchestrator | Saturday 31 May 2025 20:56:12 +0000 (0:00:00.910) 0:03:50.692 ********** 2025-05-31 21:03:26.665983 | orchestrator | ok: [testbed-node-0] 2025-05-31 21:03:26.665990 | orchestrator | ok: [testbed-node-1] 2025-05-31 21:03:26.665996 | orchestrator | ok: [testbed-node-2] 2025-05-31 21:03:26.666002 | orchestrator | 2025-05-31 21:03:26.666008 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2025-05-31 21:03:26.666032 | orchestrator | Saturday 31 May 2025 20:56:12 +0000 (0:00:00.422) 0:03:51.114 ********** 2025-05-31 21:03:26.666039 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 21:03:26.666046 | orchestrator | 2025-05-31 21:03:26.666052 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2025-05-31 21:03:26.666058 | orchestrator | Saturday 31 May 2025 20:56:13 +0000 (0:00:00.700) 0:03:51.815 ********** 2025-05-31 21:03:26.666064 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:03:26.666071 | orchestrator | 2025-05-31 21:03:26.666077 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2025-05-31 21:03:26.666083 | 
orchestrator | Saturday 31 May 2025 20:56:13 +0000 (0:00:00.205) 0:03:52.020 ********** 2025-05-31 21:03:26.666089 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-05-31 21:03:26.666095 | orchestrator | 2025-05-31 21:03:26.666124 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2025-05-31 21:03:26.666132 | orchestrator | Saturday 31 May 2025 20:56:14 +0000 (0:00:01.405) 0:03:53.426 ********** 2025-05-31 21:03:26.666138 | orchestrator | ok: [testbed-node-0] 2025-05-31 21:03:26.666144 | orchestrator | ok: [testbed-node-1] 2025-05-31 21:03:26.666150 | orchestrator | ok: [testbed-node-2] 2025-05-31 21:03:26.666156 | orchestrator | 2025-05-31 21:03:26.666162 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2025-05-31 21:03:26.666169 | orchestrator | Saturday 31 May 2025 20:56:15 +0000 (0:00:00.311) 0:03:53.737 ********** 2025-05-31 21:03:26.666175 | orchestrator | ok: [testbed-node-0] 2025-05-31 21:03:26.666181 | orchestrator | ok: [testbed-node-1] 2025-05-31 21:03:26.666187 | orchestrator | ok: [testbed-node-2] 2025-05-31 21:03:26.666199 | orchestrator | 2025-05-31 21:03:26.666205 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2025-05-31 21:03:26.666212 | orchestrator | Saturday 31 May 2025 20:56:15 +0000 (0:00:00.315) 0:03:54.053 ********** 2025-05-31 21:03:26.666218 | orchestrator | changed: [testbed-node-1] 2025-05-31 21:03:26.666224 | orchestrator | changed: [testbed-node-2] 2025-05-31 21:03:26.666230 | orchestrator | changed: [testbed-node-0] 2025-05-31 21:03:26.666236 | orchestrator | 2025-05-31 21:03:26.666242 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2025-05-31 21:03:26.666257 | orchestrator | Saturday 31 May 2025 20:56:16 +0000 (0:00:01.178) 0:03:55.231 ********** 2025-05-31 21:03:26.666264 | orchestrator | changed: [testbed-node-0] 2025-05-31 21:03:26.666270 | orchestrator | changed: [testbed-node-1] 2025-05-31 21:03:26.666276 | orchestrator | changed: [testbed-node-2] 2025-05-31 21:03:26.666282 | orchestrator | 2025-05-31 21:03:26.666289 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2025-05-31 21:03:26.666295 | orchestrator | Saturday 31 May 2025 20:56:17 +0000 (0:00:00.963) 0:03:56.194 ********** 2025-05-31 21:03:26.666301 | orchestrator | changed: [testbed-node-0] 2025-05-31 21:03:26.666307 | orchestrator | changed: [testbed-node-1] 2025-05-31 21:03:26.666313 | orchestrator | changed: [testbed-node-2] 2025-05-31 21:03:26.666319 | orchestrator | 2025-05-31 21:03:26.666325 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2025-05-31 21:03:26.666332 | orchestrator | Saturday 31 May 2025 20:56:18 +0000 (0:00:00.699) 0:03:56.894 ********** 2025-05-31 21:03:26.666338 | orchestrator | ok: [testbed-node-0] 2025-05-31 21:03:26.666344 | orchestrator | ok: [testbed-node-1] 2025-05-31 21:03:26.666350 | orchestrator | ok: [testbed-node-2] 2025-05-31 21:03:26.666356 | orchestrator | 2025-05-31 21:03:26.666362 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2025-05-31 21:03:26.666368 | orchestrator | Saturday 31 May 2025 20:56:19 +0000 (0:00:00.695) 0:03:57.589 ********** 2025-05-31 21:03:26.666374 | orchestrator | changed: [testbed-node-0] 2025-05-31 21:03:26.666380 | orchestrator | 2025-05-31 21:03:26.666387 | orchestrator | TASK 
[ceph-mon : Slurp admin keyring] ****************************************** 2025-05-31 21:03:26.666393 | orchestrator | Saturday 31 May 2025 20:56:20 +0000 (0:00:01.207) 0:03:58.796 ********** 2025-05-31 21:03:26.666399 | orchestrator | ok: [testbed-node-0] 2025-05-31 21:03:26.666405 | orchestrator | 2025-05-31 21:03:26.666411 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2025-05-31 21:03:26.666418 | orchestrator | Saturday 31 May 2025 20:56:20 +0000 (0:00:00.630) 0:03:59.427 ********** 2025-05-31 21:03:26.666424 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-31 21:03:26.666430 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-31 21:03:26.666436 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-31 21:03:26.666446 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-05-31 21:03:26.666452 | orchestrator | ok: [testbed-node-1] => (item=None) 2025-05-31 21:03:26.666459 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-05-31 21:03:26.666465 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-05-31 21:03:26.666471 | orchestrator | changed: [testbed-node-0 -> {{ item }}] 2025-05-31 21:03:26.666477 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-05-31 21:03:26.666483 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2025-05-31 21:03:26.666489 | orchestrator | ok: [testbed-node-2] => (item=None) 2025-05-31 21:03:26.666496 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2025-05-31 21:03:26.666502 | orchestrator | 2025-05-31 21:03:26.666508 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2025-05-31 21:03:26.666514 | orchestrator | Saturday 31 May 2025 20:56:24 +0000 (0:00:03.291) 0:04:02.719 ********** 2025-05-31 21:03:26.666521 | orchestrator | changed: [testbed-node-0] 2025-05-31 21:03:26.666531 | orchestrator | changed: [testbed-node-1] 2025-05-31 21:03:26.666537 | orchestrator | changed: [testbed-node-2] 2025-05-31 21:03:26.666543 | orchestrator | 2025-05-31 21:03:26.666550 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2025-05-31 21:03:26.666556 | orchestrator | Saturday 31 May 2025 20:56:25 +0000 (0:00:01.483) 0:04:04.203 ********** 2025-05-31 21:03:26.666562 | orchestrator | ok: [testbed-node-0] 2025-05-31 21:03:26.666568 | orchestrator | ok: [testbed-node-1] 2025-05-31 21:03:26.666574 | orchestrator | ok: [testbed-node-2] 2025-05-31 21:03:26.666580 | orchestrator | 2025-05-31 21:03:26.666586 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2025-05-31 21:03:26.666593 | orchestrator | Saturday 31 May 2025 20:56:26 +0000 (0:00:00.267) 0:04:04.470 ********** 2025-05-31 21:03:26.666599 | orchestrator | ok: [testbed-node-0] 2025-05-31 21:03:26.666605 | orchestrator | ok: [testbed-node-1] 2025-05-31 21:03:26.666611 | orchestrator | ok: [testbed-node-2] 2025-05-31 21:03:26.666617 | orchestrator | 2025-05-31 21:03:26.666623 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2025-05-31 21:03:26.666629 | orchestrator | Saturday 31 May 2025 20:56:26 +0000 (0:00:00.283) 0:04:04.754 ********** 2025-05-31 21:03:26.666636 | orchestrator | changed: [testbed-node-0] 2025-05-31 
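
The keyring steps above map onto stock Ceph tooling: generate a monitor initial keyring, create the client.admin keyring, then import the admin keyring into the mon keyring so the first monitor starts knowing both. Equivalent manual commands (paths illustrative):

ceph-authtool --create-keyring /tmp/ceph.mon.keyring \
    --gen-key -n mon. --cap mon 'allow *'
ceph-authtool --create-keyring /etc/ceph/ceph.client.admin.keyring \
    --gen-key -n client.admin \
    --cap mon 'allow *' --cap osd 'allow *' --cap mds 'allow *' --cap mgr 'allow *'
# "Import admin keyring into mon keyring":
ceph-authtool /tmp/ceph.mon.keyring \
    --import-keyring /etc/ceph/ceph.client.admin.keyring
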
21:03:26.666642 | orchestrator | changed: [testbed-node-1] 2025-05-31 21:03:26.666648 | orchestrator | changed: [testbed-node-2] 2025-05-31 21:03:26.666654 | orchestrator | 2025-05-31 21:03:26.666660 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2025-05-31 21:03:26.666687 | orchestrator | Saturday 31 May 2025 20:56:27 +0000 (0:00:01.665) 0:04:06.420 ********** 2025-05-31 21:03:26.666694 | orchestrator | changed: [testbed-node-0] 2025-05-31 21:03:26.666700 | orchestrator | changed: [testbed-node-1] 2025-05-31 21:03:26.666706 | orchestrator | changed: [testbed-node-2] 2025-05-31 21:03:26.666712 | orchestrator | 2025-05-31 21:03:26.666718 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2025-05-31 21:03:26.666724 | orchestrator | Saturday 31 May 2025 20:56:29 +0000 (0:00:01.588) 0:04:08.008 ********** 2025-05-31 21:03:26.666730 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:03:26.666736 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:03:26.666742 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:03:26.666749 | orchestrator | 2025-05-31 21:03:26.666755 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2025-05-31 21:03:26.666761 | orchestrator | Saturday 31 May 2025 20:56:29 +0000 (0:00:00.403) 0:04:08.412 ********** 2025-05-31 21:03:26.666767 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 21:03:26.666773 | orchestrator | 2025-05-31 21:03:26.666779 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2025-05-31 21:03:26.666785 | orchestrator | Saturday 31 May 2025 20:56:30 +0000 (0:00:00.620) 0:04:09.032 ********** 2025-05-31 21:03:26.666791 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:03:26.666797 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:03:26.666803 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:03:26.666809 | orchestrator | 2025-05-31 21:03:26.666816 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2025-05-31 21:03:26.666822 | orchestrator | Saturday 31 May 2025 20:56:31 +0000 (0:00:00.458) 0:04:09.491 ********** 2025-05-31 21:03:26.666828 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:03:26.666834 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:03:26.666840 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:03:26.666846 | orchestrator | 2025-05-31 21:03:26.666852 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2025-05-31 21:03:26.666898 | orchestrator | Saturday 31 May 2025 20:56:31 +0000 (0:00:00.254) 0:04:09.746 ********** 2025-05-31 21:03:26.666905 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 21:03:26.666911 | orchestrator | 2025-05-31 21:03:26.666922 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2025-05-31 21:03:26.666929 | orchestrator | Saturday 31 May 2025 20:56:31 +0000 (0:00:00.453) 0:04:10.199 ********** 2025-05-31 21:03:26.666935 | orchestrator | changed: [testbed-node-0] 2025-05-31 21:03:26.666941 | orchestrator | changed: [testbed-node-1] 2025-05-31 21:03:26.666947 | orchestrator | changed: [testbed-node-2] 2025-05-31 21:03:26.666953 | orchestrator | 2025-05-31 
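
"Generate initial monmap" and "Ceph monitor mkfs with keyring" correspond to the standard monitor bootstrap; the play runs them through the container image via the monmaptool/ceph-mon commands set as facts above. Outside the container this would look like (FSID illustrative; the mon IPs match the 192.168.16.10-12 addresses visible in this log):

monmaptool --create --fsid "$FSID" \
    --add testbed-node-0 192.168.16.10 \
    --add testbed-node-1 192.168.16.11 \
    --add testbed-node-2 192.168.16.12 /tmp/monmap
ceph-mon --cluster ceph --mkfs -i "$(hostname -s)" \
    --monmap /tmp/monmap --keyring /tmp/ceph.mon.keyring
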
21:03:26.666960 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2025-05-31 21:03:26.666966 | orchestrator | Saturday 31 May 2025 20:56:33 +0000 (0:00:01.994) 0:04:12.194 ********** 2025-05-31 21:03:26.666972 | orchestrator | changed: [testbed-node-0] 2025-05-31 21:03:26.666978 | orchestrator | changed: [testbed-node-1] 2025-05-31 21:03:26.666984 | orchestrator | changed: [testbed-node-2] 2025-05-31 21:03:26.666990 | orchestrator | 2025-05-31 21:03:26.666997 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2025-05-31 21:03:26.667003 | orchestrator | Saturday 31 May 2025 20:56:34 +0000 (0:00:01.199) 0:04:13.394 ********** 2025-05-31 21:03:26.667010 | orchestrator | changed: [testbed-node-0] 2025-05-31 21:03:26.667016 | orchestrator | changed: [testbed-node-1] 2025-05-31 21:03:26.667022 | orchestrator | changed: [testbed-node-2] 2025-05-31 21:03:26.667028 | orchestrator | 2025-05-31 21:03:26.667038 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2025-05-31 21:03:26.667044 | orchestrator | Saturday 31 May 2025 20:56:36 +0000 (0:00:01.752) 0:04:15.147 ********** 2025-05-31 21:03:26.667050 | orchestrator | changed: [testbed-node-0] 2025-05-31 21:03:26.667056 | orchestrator | changed: [testbed-node-1] 2025-05-31 21:03:26.667063 | orchestrator | changed: [testbed-node-2] 2025-05-31 21:03:26.667069 | orchestrator | 2025-05-31 21:03:26.667075 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2025-05-31 21:03:26.667081 | orchestrator | Saturday 31 May 2025 20:56:38 +0000 (0:00:01.952) 0:04:17.099 ********** 2025-05-31 21:03:26.667087 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 21:03:26.667093 | orchestrator | 2025-05-31 21:03:26.667100 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2025-05-31 21:03:26.667106 | orchestrator | Saturday 31 May 2025 20:56:39 +0000 (0:00:00.735) 0:04:17.834 ********** 2025-05-31 21:03:26.667112 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left). 
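
The quorum wait above retried once (10 of the configured retries left) before the three mons agreed, which is normal right after first start. The check amounts to polling quorum membership until every expected mon appears; a sketch (jq and the retry/delay values are assumptions):

for _ in $(seq 1 10); do
    in_quorum=$(ceph quorum_status --format json | jq '.quorum_names | length')
    [ "$in_quorum" -eq 3 ] && break    # all three testbed mons present
    sleep 10
done
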
2025-05-31 21:03:26.667118 | orchestrator | ok: [testbed-node-0] 2025-05-31 21:03:26.667124 | orchestrator | 2025-05-31 21:03:26.667130 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2025-05-31 21:03:26.667136 | orchestrator | Saturday 31 May 2025 20:57:01 +0000 (0:00:21.937) 0:04:39.771 ********** 2025-05-31 21:03:26.667142 | orchestrator | ok: [testbed-node-0] 2025-05-31 21:03:26.667149 | orchestrator | ok: [testbed-node-2] 2025-05-31 21:03:26.667155 | orchestrator | ok: [testbed-node-1] 2025-05-31 21:03:26.667161 | orchestrator | 2025-05-31 21:03:26.667167 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2025-05-31 21:03:26.667173 | orchestrator | Saturday 31 May 2025 20:57:11 +0000 (0:00:09.975) 0:04:49.747 ********** 2025-05-31 21:03:26.667179 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:03:26.667185 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:03:26.667191 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:03:26.667197 | orchestrator | 2025-05-31 21:03:26.667203 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2025-05-31 21:03:26.667210 | orchestrator | Saturday 31 May 2025 20:57:11 +0000 (0:00:00.557) 0:04:50.304 ********** 2025-05-31 21:03:26.667239 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__ceb043b576c20365eb09a2bd68e1d7d0d6ebeee6'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2025-05-31 21:03:26.667253 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__ceb043b576c20365eb09a2bd68e1d7d0d6ebeee6'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2025-05-31 21:03:26.667260 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__ceb043b576c20365eb09a2bd68e1d7d0d6ebeee6'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2025-05-31 21:03:26.667268 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__ceb043b576c20365eb09a2bd68e1d7d0d6ebeee6'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2025-05-31 21:03:26.667275 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__ceb043b576c20365eb09a2bd68e1d7d0d6ebeee6'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2025-05-31 21:03:26.667282 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 
'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__ceb043b576c20365eb09a2bd68e1d7d0d6ebeee6'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__ceb043b576c20365eb09a2bd68e1d7d0d6ebeee6'}])  2025-05-31 21:03:26.667290 | orchestrator | 2025-05-31 21:03:26.667296 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-05-31 21:03:26.667302 | orchestrator | Saturday 31 May 2025 20:57:26 +0000 (0:00:14.777) 0:05:05.081 ********** 2025-05-31 21:03:26.667309 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:03:26.667318 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:03:26.667324 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:03:26.667330 | orchestrator | 2025-05-31 21:03:26.667336 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2025-05-31 21:03:26.667343 | orchestrator | Saturday 31 May 2025 20:57:27 +0000 (0:00:00.357) 0:05:05.439 ********** 2025-05-31 21:03:26.667349 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 21:03:26.667355 | orchestrator | 2025-05-31 21:03:26.667361 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2025-05-31 21:03:26.667367 | orchestrator | Saturday 31 May 2025 20:57:27 +0000 (0:00:00.834) 0:05:06.274 ********** 2025-05-31 21:03:26.667373 | orchestrator | ok: [testbed-node-0] 2025-05-31 21:03:26.667380 | orchestrator | ok: [testbed-node-1] 2025-05-31 21:03:26.667386 | orchestrator | ok: [testbed-node-2] 2025-05-31 21:03:26.667392 | orchestrator | 2025-05-31 21:03:26.667398 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2025-05-31 21:03:26.667404 | orchestrator | Saturday 31 May 2025 20:57:28 +0000 (0:00:00.343) 0:05:06.618 ********** 2025-05-31 21:03:26.667410 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:03:26.667416 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:03:26.667422 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:03:26.667428 | orchestrator | 2025-05-31 21:03:26.667433 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2025-05-31 21:03:26.667438 | orchestrator | Saturday 31 May 2025 20:57:28 +0000 (0:00:00.373) 0:05:06.991 ********** 2025-05-31 21:03:26.667448 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-31 21:03:26.667454 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-31 21:03:26.667459 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-31 21:03:26.667464 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:03:26.667470 | orchestrator | 2025-05-31 21:03:26.667475 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2025-05-31 21:03:26.667481 | orchestrator | Saturday 31 May 2025 20:57:29 +0000 (0:00:00.882) 0:05:07.874 ********** 2025-05-31 21:03:26.667486 | orchestrator | ok: [testbed-node-0] 2025-05-31 21:03:26.667491 | orchestrator | ok: [testbed-node-1] 2025-05-31 21:03:26.667497 | orchestrator | ok: [testbed-node-2] 2025-05-31 21:03:26.667502 | orchestrator | 2025-05-31 21:03:26.667507 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 
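
The "Set cluster configs" task a few records up pushed each item into the cluster's configuration database; only osd_crush_chooseleaf_type was skipped because its value resolved to Ansible's omit placeholder. In ceph CLI terms the applied items are:

ceph config set global public_network 192.168.16.0/20
ceph config set global cluster_network 192.168.16.0/20
ceph config set global osd_pool_default_crush_rule -1
ceph config set global ms_bind_ipv6 false
ceph config set global ms_bind_ipv4 true
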
2025-05-31 21:03:26.667513 | orchestrator | 2025-05-31 21:03:26.667518 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-05-31 21:03:26.667539 | orchestrator | Saturday 31 May 2025 20:57:30 +0000 (0:00:00.999) 0:05:08.874 ********** 2025-05-31 21:03:26.667546 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 21:03:26.667551 | orchestrator | 2025-05-31 21:03:26.667557 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-05-31 21:03:26.667562 | orchestrator | Saturday 31 May 2025 20:57:31 +0000 (0:00:00.587) 0:05:09.461 ********** 2025-05-31 21:03:26.667567 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 21:03:26.667573 | orchestrator | 2025-05-31 21:03:26.667578 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-05-31 21:03:26.667584 | orchestrator | Saturday 31 May 2025 20:57:31 +0000 (0:00:00.623) 0:05:10.085 ********** 2025-05-31 21:03:26.667589 | orchestrator | ok: [testbed-node-0] 2025-05-31 21:03:26.667595 | orchestrator | ok: [testbed-node-1] 2025-05-31 21:03:26.667600 | orchestrator | ok: [testbed-node-2] 2025-05-31 21:03:26.667605 | orchestrator | 2025-05-31 21:03:26.667611 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-05-31 21:03:26.667616 | orchestrator | Saturday 31 May 2025 20:57:32 +0000 (0:00:00.635) 0:05:10.720 ********** 2025-05-31 21:03:26.667621 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:03:26.667627 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:03:26.667632 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:03:26.667637 | orchestrator | 2025-05-31 21:03:26.667643 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-05-31 21:03:26.667648 | orchestrator | Saturday 31 May 2025 20:57:32 +0000 (0:00:00.303) 0:05:11.024 ********** 2025-05-31 21:03:26.667654 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:03:26.667659 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:03:26.667664 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:03:26.667670 | orchestrator | 2025-05-31 21:03:26.667675 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-05-31 21:03:26.667680 | orchestrator | Saturday 31 May 2025 20:57:33 +0000 (0:00:00.443) 0:05:11.467 ********** 2025-05-31 21:03:26.667686 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:03:26.667691 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:03:26.667696 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:03:26.667702 | orchestrator | 2025-05-31 21:03:26.667707 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-05-31 21:03:26.667712 | orchestrator | Saturday 31 May 2025 20:57:33 +0000 (0:00:00.288) 0:05:11.756 ********** 2025-05-31 21:03:26.667718 | orchestrator | ok: [testbed-node-0] 2025-05-31 21:03:26.667723 | orchestrator | ok: [testbed-node-1] 2025-05-31 21:03:26.667728 | orchestrator | ok: [testbed-node-2] 2025-05-31 21:03:26.667734 | orchestrator | 2025-05-31 21:03:26.667739 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-05-31 
21:03:26.667749 | orchestrator | Saturday 31 May 2025 20:57:33 +0000 (0:00:00.663) 0:05:12.420 ********** 2025-05-31 21:03:26.667754 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:03:26.667759 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:03:26.667765 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:03:26.667770 | orchestrator | 2025-05-31 21:03:26.667775 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-05-31 21:03:26.667781 | orchestrator | Saturday 31 May 2025 20:57:34 +0000 (0:00:00.290) 0:05:12.710 ********** 2025-05-31 21:03:26.667786 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:03:26.667791 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:03:26.667800 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:03:26.667805 | orchestrator | 2025-05-31 21:03:26.667810 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-05-31 21:03:26.667816 | orchestrator | Saturday 31 May 2025 20:57:34 +0000 (0:00:00.421) 0:05:13.131 ********** 2025-05-31 21:03:26.667821 | orchestrator | ok: [testbed-node-0] 2025-05-31 21:03:26.667827 | orchestrator | ok: [testbed-node-1] 2025-05-31 21:03:26.667832 | orchestrator | ok: [testbed-node-2] 2025-05-31 21:03:26.667837 | orchestrator | 2025-05-31 21:03:26.667843 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-05-31 21:03:26.667848 | orchestrator | Saturday 31 May 2025 20:57:35 +0000 (0:00:00.688) 0:05:13.820 ********** 2025-05-31 21:03:26.667853 | orchestrator | ok: [testbed-node-0] 2025-05-31 21:03:26.667872 | orchestrator | ok: [testbed-node-1] 2025-05-31 21:03:26.667877 | orchestrator | ok: [testbed-node-2] 2025-05-31 21:03:26.667883 | orchestrator | 2025-05-31 21:03:26.667889 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-05-31 21:03:26.667894 | orchestrator | Saturday 31 May 2025 20:57:36 +0000 (0:00:00.730) 0:05:14.550 ********** 2025-05-31 21:03:26.667900 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:03:26.667905 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:03:26.667911 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:03:26.667916 | orchestrator | 2025-05-31 21:03:26.667922 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-05-31 21:03:26.667927 | orchestrator | Saturday 31 May 2025 20:57:36 +0000 (0:00:00.271) 0:05:14.821 ********** 2025-05-31 21:03:26.667933 | orchestrator | ok: [testbed-node-0] 2025-05-31 21:03:26.667938 | orchestrator | ok: [testbed-node-1] 2025-05-31 21:03:26.667943 | orchestrator | ok: [testbed-node-2] 2025-05-31 21:03:26.667949 | orchestrator | 2025-05-31 21:03:26.667954 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-05-31 21:03:26.667959 | orchestrator | Saturday 31 May 2025 20:57:36 +0000 (0:00:00.478) 0:05:15.300 ********** 2025-05-31 21:03:26.667964 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:03:26.667970 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:03:26.667975 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:03:26.667980 | orchestrator | 2025-05-31 21:03:26.667986 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-05-31 21:03:26.667991 | orchestrator | Saturday 31 May 2025 20:57:37 +0000 (0:00:00.278) 0:05:15.578 ********** 
2025-05-31 21:03:26.667996 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:03:26.668002 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:03:26.668007 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:03:26.668012 | orchestrator | 2025-05-31 21:03:26.668018 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-05-31 21:03:26.668040 | orchestrator | Saturday 31 May 2025 20:57:37 +0000 (0:00:00.275) 0:05:15.854 ********** 2025-05-31 21:03:26.668046 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:03:26.668051 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:03:26.668057 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:03:26.668062 | orchestrator | 2025-05-31 21:03:26.668067 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-05-31 21:03:26.668077 | orchestrator | Saturday 31 May 2025 20:57:37 +0000 (0:00:00.275) 0:05:16.130 ********** 2025-05-31 21:03:26.668082 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:03:26.668088 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:03:26.668093 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:03:26.668098 | orchestrator | 2025-05-31 21:03:26.668104 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-05-31 21:03:26.668109 | orchestrator | Saturday 31 May 2025 20:57:38 +0000 (0:00:00.548) 0:05:16.679 ********** 2025-05-31 21:03:26.668115 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:03:26.668120 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:03:26.668125 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:03:26.668131 | orchestrator | 2025-05-31 21:03:26.668136 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-05-31 21:03:26.668142 | orchestrator | Saturday 31 May 2025 20:57:38 +0000 (0:00:00.328) 0:05:17.008 ********** 2025-05-31 21:03:26.668147 | orchestrator | ok: [testbed-node-0] 2025-05-31 21:03:26.668153 | orchestrator | ok: [testbed-node-1] 2025-05-31 21:03:26.668158 | orchestrator | ok: [testbed-node-2] 2025-05-31 21:03:26.668163 | orchestrator | 2025-05-31 21:03:26.668169 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-05-31 21:03:26.668174 | orchestrator | Saturday 31 May 2025 20:57:38 +0000 (0:00:00.368) 0:05:17.376 ********** 2025-05-31 21:03:26.668180 | orchestrator | ok: [testbed-node-0] 2025-05-31 21:03:26.668185 | orchestrator | ok: [testbed-node-1] 2025-05-31 21:03:26.668190 | orchestrator | ok: [testbed-node-2] 2025-05-31 21:03:26.668195 | orchestrator | 2025-05-31 21:03:26.668201 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-05-31 21:03:26.668206 | orchestrator | Saturday 31 May 2025 20:57:39 +0000 (0:00:00.413) 0:05:17.789 ********** 2025-05-31 21:03:26.668212 | orchestrator | ok: [testbed-node-0] 2025-05-31 21:03:26.668217 | orchestrator | ok: [testbed-node-1] 2025-05-31 21:03:26.668222 | orchestrator | ok: [testbed-node-2] 2025-05-31 21:03:26.668228 | orchestrator | 2025-05-31 21:03:26.668234 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2025-05-31 21:03:26.668239 | orchestrator | Saturday 31 May 2025 20:57:40 +0000 (0:00:00.889) 0:05:18.679 ********** 2025-05-31 21:03:26.668244 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-31 
21:03:26.668250 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-31 21:03:26.668255 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-31 21:03:26.668260 | orchestrator | 2025-05-31 21:03:26.668266 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2025-05-31 21:03:26.668271 | orchestrator | Saturday 31 May 2025 20:57:40 +0000 (0:00:00.734) 0:05:19.413 ********** 2025-05-31 21:03:26.668277 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 21:03:26.668282 | orchestrator | 2025-05-31 21:03:26.668288 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2025-05-31 21:03:26.668293 | orchestrator | Saturday 31 May 2025 20:57:41 +0000 (0:00:00.529) 0:05:19.943 ********** 2025-05-31 21:03:26.668301 | orchestrator | changed: [testbed-node-0] 2025-05-31 21:03:26.668307 | orchestrator | changed: [testbed-node-1] 2025-05-31 21:03:26.668312 | orchestrator | changed: [testbed-node-2] 2025-05-31 21:03:26.668318 | orchestrator | 2025-05-31 21:03:26.668323 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2025-05-31 21:03:26.668328 | orchestrator | Saturday 31 May 2025 20:57:42 +0000 (0:00:00.950) 0:05:20.894 ********** 2025-05-31 21:03:26.668334 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:03:26.668339 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:03:26.668344 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:03:26.668350 | orchestrator | 2025-05-31 21:03:26.668355 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2025-05-31 21:03:26.668360 | orchestrator | Saturday 31 May 2025 20:57:42 +0000 (0:00:00.315) 0:05:21.210 ********** 2025-05-31 21:03:26.668371 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-31 21:03:26.668377 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-31 21:03:26.668382 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-31 21:03:26.668388 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2025-05-31 21:03:26.668393 | orchestrator | 2025-05-31 21:03:26.668398 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2025-05-31 21:03:26.668404 | orchestrator | Saturday 31 May 2025 20:57:53 +0000 (0:00:10.384) 0:05:31.594 ********** 2025-05-31 21:03:26.668409 | orchestrator | ok: [testbed-node-0] 2025-05-31 21:03:26.668415 | orchestrator | ok: [testbed-node-1] 2025-05-31 21:03:26.668420 | orchestrator | ok: [testbed-node-2] 2025-05-31 21:03:26.668425 | orchestrator | 2025-05-31 21:03:26.668431 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2025-05-31 21:03:26.668436 | orchestrator | Saturday 31 May 2025 20:57:53 +0000 (0:00:00.371) 0:05:31.966 ********** 2025-05-31 21:03:26.668441 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-05-31 21:03:26.668447 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-05-31 21:03:26.668452 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-05-31 21:03:26.668457 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-05-31 21:03:26.668463 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-31 
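
The ten-second "Create ceph mgr keyring(s) on a mon node" task above provisions one key per mgr host with the standard mgr capability profile; "Get keys from monitors" then reads them back for distribution to the other nodes. The equivalent single-host command (name and output path illustrative):

ceph auth get-or-create mgr.testbed-node-0 \
    mon 'allow profile mgr' osd 'allow *' mds 'allow *' \
    -o /var/lib/ceph/mgr/ceph-testbed-node-0/keyring
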
21:03:26.668468 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-31 21:03:26.668474 | orchestrator | 2025-05-31 21:03:26.668479 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2025-05-31 21:03:26.668484 | orchestrator | Saturday 31 May 2025 20:57:56 +0000 (0:00:02.883) 0:05:34.849 ********** 2025-05-31 21:03:26.668506 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-05-31 21:03:26.668512 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-05-31 21:03:26.668517 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-05-31 21:03:26.668523 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-31 21:03:26.668528 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-05-31 21:03:26.668533 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-05-31 21:03:26.668538 | orchestrator | 2025-05-31 21:03:26.668544 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2025-05-31 21:03:26.668549 | orchestrator | Saturday 31 May 2025 20:57:57 +0000 (0:00:01.196) 0:05:36.046 ********** 2025-05-31 21:03:26.668554 | orchestrator | ok: [testbed-node-0] 2025-05-31 21:03:26.668560 | orchestrator | ok: [testbed-node-1] 2025-05-31 21:03:26.668565 | orchestrator | ok: [testbed-node-2] 2025-05-31 21:03:26.668570 | orchestrator | 2025-05-31 21:03:26.668576 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2025-05-31 21:03:26.668581 | orchestrator | Saturday 31 May 2025 20:57:58 +0000 (0:00:00.675) 0:05:36.722 ********** 2025-05-31 21:03:26.668586 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:03:26.668592 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:03:26.668597 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:03:26.668602 | orchestrator | 2025-05-31 21:03:26.668608 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2025-05-31 21:03:26.668613 | orchestrator | Saturday 31 May 2025 20:57:58 +0000 (0:00:00.295) 0:05:37.017 ********** 2025-05-31 21:03:26.668619 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:03:26.668624 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:03:26.668629 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:03:26.668634 | orchestrator | 2025-05-31 21:03:26.668640 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2025-05-31 21:03:26.668645 | orchestrator | Saturday 31 May 2025 20:57:58 +0000 (0:00:00.311) 0:05:37.329 ********** 2025-05-31 21:03:26.668650 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 21:03:26.668659 | orchestrator | 2025-05-31 21:03:26.668665 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2025-05-31 21:03:26.668670 | orchestrator | Saturday 31 May 2025 20:57:59 +0000 (0:00:00.849) 0:05:38.178 ********** 2025-05-31 21:03:26.668676 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:03:26.668681 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:03:26.668686 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:03:26.668692 | orchestrator | 2025-05-31 21:03:26.668697 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2025-05-31 21:03:26.668702 | orchestrator | Saturday 31 May 2025 
20:58:00 +0000 (0:00:00.342) 0:05:38.521 ********** 2025-05-31 21:03:26.668708 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:03:26.668713 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:03:26.668719 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:03:26.668724 | orchestrator | 2025-05-31 21:03:26.668729 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2025-05-31 21:03:26.668735 | orchestrator | Saturday 31 May 2025 20:58:00 +0000 (0:00:00.349) 0:05:38.870 ********** 2025-05-31 21:03:26.668740 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 21:03:26.668746 | orchestrator | 2025-05-31 21:03:26.668751 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2025-05-31 21:03:26.668759 | orchestrator | Saturday 31 May 2025 20:58:01 +0000 (0:00:00.954) 0:05:39.825 ********** 2025-05-31 21:03:26.668765 | orchestrator | changed: [testbed-node-0] 2025-05-31 21:03:26.668770 | orchestrator | changed: [testbed-node-1] 2025-05-31 21:03:26.668776 | orchestrator | changed: [testbed-node-2] 2025-05-31 21:03:26.668781 | orchestrator | 2025-05-31 21:03:26.668786 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2025-05-31 21:03:26.668792 | orchestrator | Saturday 31 May 2025 20:58:02 +0000 (0:00:01.339) 0:05:41.164 ********** 2025-05-31 21:03:26.668797 | orchestrator | changed: [testbed-node-0] 2025-05-31 21:03:26.668802 | orchestrator | changed: [testbed-node-1] 2025-05-31 21:03:26.668808 | orchestrator | changed: [testbed-node-2] 2025-05-31 21:03:26.668813 | orchestrator | 2025-05-31 21:03:26.668819 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2025-05-31 21:03:26.668824 | orchestrator | Saturday 31 May 2025 20:58:03 +0000 (0:00:01.161) 0:05:42.325 ********** 2025-05-31 21:03:26.668829 | orchestrator | changed: [testbed-node-0] 2025-05-31 21:03:26.668835 | orchestrator | changed: [testbed-node-1] 2025-05-31 21:03:26.668841 | orchestrator | changed: [testbed-node-2] 2025-05-31 21:03:26.668846 | orchestrator | 2025-05-31 21:03:26.668851 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2025-05-31 21:03:26.668869 | orchestrator | Saturday 31 May 2025 20:58:06 +0000 (0:00:02.166) 0:05:44.492 ********** 2025-05-31 21:03:26.668874 | orchestrator | changed: [testbed-node-1] 2025-05-31 21:03:26.668880 | orchestrator | changed: [testbed-node-0] 2025-05-31 21:03:26.668885 | orchestrator | changed: [testbed-node-2] 2025-05-31 21:03:26.668891 | orchestrator | 2025-05-31 21:03:26.668897 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2025-05-31 21:03:26.668902 | orchestrator | Saturday 31 May 2025 20:58:08 +0000 (0:00:02.007) 0:05:46.499 ********** 2025-05-31 21:03:26.668908 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:03:26.668913 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:03:26.668918 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2025-05-31 21:03:26.668924 | orchestrator | 2025-05-31 21:03:26.668929 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2025-05-31 21:03:26.668934 | orchestrator | Saturday 31 May 2025 20:58:08 +0000 (0:00:00.439) 0:05:46.939 ********** 2025-05-31 21:03:26.668940 | 
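
As with the mons, the mgr is started through a generated systemd unit that wraps the container, plus a ceph-mgr.target that groups the instances. A heavily simplified, hypothetical shape of that unit (image name and run flags are assumptions; the real template sets many more options):

cat > /etc/systemd/system/ceph-mgr@.service <<'EOF'
[Unit]
Description=Ceph mgr %i (container)
After=network-online.target

[Service]
ExecStart=/usr/bin/docker run --rm --name ceph-mgr-%i --net=host \
    -v /var/lib/ceph:/var/lib/ceph -v /etc/ceph:/etc/ceph \
    quay.io/ceph/daemon:latest mgr
ExecStop=/usr/bin/docker stop ceph-mgr-%i
Restart=always

[Install]
WantedBy=ceph-mgr.target
EOF
systemctl daemon-reload && systemctl enable ceph-mgr.target
systemctl start "ceph-mgr@$(hostname -s)"
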
TASK [ceph-mgr : Wait for all mgr to be up] ************************************
Saturday 31 May 2025 20:58:08 +0000 (0:00:00.439) 0:05:46.939 **********
FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left).
FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left).
FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left).
FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left).
FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left).
ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]

TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
Saturday 31 May 2025 20:58:38 +0000 (0:00:29.856) 0:06:16.795 **********
ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]

TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
Saturday 31 May 2025 20:58:39 +0000 (0:00:01.584) 0:06:18.380 **********
ok: [testbed-node-2]

TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
Saturday 31 May 2025 20:58:40 +0000 (0:00:01.000) 0:06:19.381 **********
ok: [testbed-node-2]

TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
Saturday 31 May 2025 20:58:41 +0000 (0:00:00.182) 0:06:19.563 **********
changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)

TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
Saturday 31 May 2025 20:58:47 +0000 (0:00:06.311) 0:06:25.874 **********
skipping: [testbed-node-2] => (item=balancer)
changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
skipping: [testbed-node-2] => (item=status)
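Module management here is plain ceph CLI driven from the first mon: the play polls until the mgr map reports an active daemon, dumps the enabled modules, then disables iostat/nfs/restful and enables dashboard and prometheus. A rough sketch of that wait-and-toggle pattern (the JSON field checked and the group name 'mons' are assumptions; the role's exact expressions may differ):

- name: Wait for all mgr to be up
  ansible.builtin.command: ceph mgr dump --format json
  register: mgr_dump
  delegate_to: "{{ groups['mons'][0] }}"
  run_once: true
  retries: 30
  delay: 5
  until: (mgr_dump.stdout | from_json).available | bool   # assumed readiness field
  changed_when: false

- name: Disable ceph mgr enabled modules
  ansible.builtin.command: "ceph mgr module disable {{ item }}"
  loop: [iostat, nfs, restful]
  delegate_to: "{{ groups['mons'][0] }}"
  run_once: true

- name: Add modules to ceph-mgr
  ansible.builtin.command: "ceph mgr module enable {{ item }}"
  loop: [dashboard, prometheus]
  delegate_to: "{{ groups['mons'][0] }}"
  run_once: true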
RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
Saturday 31 May 2025 20:58:52 +0000 (0:00:04.735) 0:06:30.609 **********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
Saturday 31 May 2025 20:58:53 +0000 (0:00:01.090) 0:06:31.700 **********
included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2

RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
Saturday 31 May 2025 20:58:53 +0000 (0:00:00.540) 0:06:32.240 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
Saturday 31 May 2025 20:58:54 +0000 (0:00:00.324) 0:06:32.565 **********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
Saturday 31 May 2025 20:58:55 +0000 (0:00:01.641) 0:06:34.206 **********
skipping: [testbed-node-0] => (item=testbed-node-0)
skipping: [testbed-node-0] => (item=testbed-node-1)
skipping: [testbed-node-0] => (item=testbed-node-2)
skipping: [testbed-node-0]

RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
Saturday 31 May 2025 20:58:56 +0000 (0:00:00.655) 0:06:34.861 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
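The handler block above is ceph-ansible's restart guard: a fact is flipped before the restart, the restart runs through a staged shell script, and the fact is reset afterwards so each daemon is restarted at most once per play (here the restart itself was skipped because the mgrs were freshly started). Schematically, with an assumed script path and group name:

- name: Set _mgr_handler_called before restart
  ansible.builtin.set_fact:
    _mgr_handler_called: true

- name: Restart ceph mgr daemon(s)
  ansible.builtin.command: "{{ tmpdirpath.path }}/restart_mgr_daemon.sh"   # script copied by the previous handler
  when: hostvars[item]['_mgr_handler_called'] | default(false) | bool
  loop: "{{ groups['mgrs'] }}"

- name: Set _mgr_handler_called after restart
  ansible.builtin.set_fact:
    _mgr_handler_called: false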
PLAY [Apply role ceph-osd] *****************************************************

TASK [ceph-handler : Include check_running_cluster.yml] ************************
Saturday 31 May 2025 20:58:57 +0000 (0:00:00.611) 0:06:35.473 **********
included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Include check_running_containers.yml] *********************
Saturday 31 May 2025 20:58:57 +0000 (0:00:00.884) 0:06:36.358 **********
included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Check for a mon container] ********************************
Saturday 31 May 2025 20:58:58 +0000 (0:00:00.531) 0:06:36.890 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for an osd container] *******************************
Saturday 31 May 2025 20:58:58 +0000 (0:00:00.310) 0:06:37.201 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mds container] ********************************
Saturday 31 May 2025 20:58:59 +0000 (0:00:01.048) 0:06:38.250 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a rgw container] ********************************
Saturday 31 May 2025 20:59:00 +0000 (0:00:00.711) 0:06:38.961 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mgr container] ********************************
Saturday 31 May 2025 20:59:01 +0000 (0:00:00.654) 0:06:39.616 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a rbd mirror container] *************************
Saturday 31 May 2025 20:59:01 +0000 (0:00:00.317) 0:06:39.934 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a nfs container] ********************************
Saturday 31 May 2025 20:59:02 +0000 (0:00:00.628) 0:06:40.562 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-crash container] *************************
Saturday 31 May 2025 20:59:02 +0000 (0:00:00.344) 0:06:40.907 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-exporter container] **********************
Saturday 31 May 2025 20:59:03 +0000 (0:00:00.741) 0:06:41.648 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Include check_socket_non_container.yml] *******************
Saturday 31 May 2025 20:59:03 +0000 (0:00:00.679) 0:06:42.327 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mon_status] ******************************
Saturday 31 May 2025 20:59:04 +0000 (0:00:00.654) 0:06:42.982 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_osd_status] ******************************
Saturday 31 May 2025 20:59:04 +0000 (0:00:00.321) 0:06:43.303 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mds_status] ******************************
Saturday 31 May 2025 20:59:05 +0000 (0:00:00.324) 0:06:43.628 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
Saturday 31 May 2025 20:59:05 +0000 (0:00:00.332) 0:06:43.961 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
Saturday 31 May 2025 20:59:06 +0000 (0:00:00.620) 0:06:44.581 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
Saturday 31 May 2025 20:59:06 +0000 (0:00:00.304) 0:06:44.886 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
Saturday 31 May 2025 20:59:06 +0000 (0:00:00.304) 0:06:45.190 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_crash_status] ****************************
Saturday 31 May 2025 20:59:07 +0000 (0:00:00.307) 0:06:45.497 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_exporter_status] *************************
Saturday 31 May 2025 20:59:07 +0000 (0:00:00.620) 0:06:46.118 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-osd : Set_fact add_osd] *********************************************
Saturday 31 May 2025 20:59:08 +0000 (0:00:00.563) 0:06:46.682 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
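The container checks and Set_fact tasks above feed one boolean per daemon type (handler_osd_status, handler_mds_status, ...) that later handlers use to decide whether a restart is even possible on a node. One probe/flag pair, sketched under the assumption of a Docker runtime and the ceph-osd container name filter:

- name: Check for an osd container
  ansible.builtin.command: docker ps -q --filter name=ceph-osd
  register: ceph_osd_container_stat
  changed_when: false
  failed_when: false

- name: Set_fact handler_osd_status
  ansible.builtin.set_fact:
    handler_osd_status: "{{ ceph_osd_container_stat.stdout_lines | length > 0 }}"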
TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
Saturday 31 May 2025 20:59:08 +0000 (0:00:00.318) 0:06:47.001 **********
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)

TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
Saturday 31 May 2025 20:59:09 +0000 (0:00:00.910) 0:06:47.911 **********
included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-osd : Create tmpfiles.d directory] **********************************
Saturday 31 May 2025 20:59:10 +0000 (0:00:00.824) 0:06:48.736 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-osd : Disable transparent hugepage] *********************************
Saturday 31 May 2025 20:59:10 +0000 (0:00:00.381) 0:06:49.118 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
Saturday 31 May 2025 20:59:10 +0000 (0:00:00.311) 0:06:49.429 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
Saturday 31 May 2025 20:59:11 +0000 (0:00:00.923) 0:06:50.353 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-osd : Apply operating system tuning] ********************************
Saturday 31 May 2025 20:59:12 +0000 (0:00:00.385) 0:06:50.738 **********
changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859})
changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859})
changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859})
changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10})
changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10})
changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10})
changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})

TASK [ceph-osd : Install dependencies] *****************************************
Saturday 31 May 2025 20:59:14 +0000 (0:00:01.928) 0:06:52.666 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-osd : Include_tasks common.yml] *************************************
Saturday 31 May 2025 20:59:14 +0000 (0:00:00.318) 0:06:52.985 **********
included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
Saturday 31 May 2025 20:59:15 +0000 (0:00:00.859) 0:06:53.845 **********
ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/)
ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
ok: [testbed-node-4] => (item=/var/lib/ceph/osd/)
ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)

TASK [ceph-osd : Get keys from monitors] ***************************************
Saturday 31 May 2025 20:59:16 +0000 (0:00:00.960) 0:06:54.805 **********
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
skipping: [testbed-node-3] => (item=None)
ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
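'Apply operating system tuning' above loops kernel settings onto the OSD nodes; vm.min_free_kbytes was first derived from each node's current default (67584 here). Per item the work is equivalent to a sysctl task along these lines (a sketch, not the role's literal task):

- name: Apply operating system tuning
  ansible.posix.sysctl:
    name: "{{ item.name }}"
    value: "{{ item.value }}"
    state: present
    sysctl_set: true
  loop:
    - { name: fs.aio-max-nr, value: "1048576" }
    - { name: fs.file-max, value: "26234859" }
    - { name: vm.zone_reclaim_mode, value: "0" }
    - { name: vm.swappiness, value: "10" }
    - { name: vm.min_free_kbytes, value: "{{ vm_min_free_kbytes }}" }   # fact computed above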
TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
Saturday 31 May 2025 20:59:18 +0000 (0:00:02.068) 0:06:56.874 **********
changed: [testbed-node-3] => (item=None)
skipping: [testbed-node-3] => (item=None)
changed: [testbed-node-3]
changed: [testbed-node-4] => (item=None)
skipping: [testbed-node-4] => (item=None)
changed: [testbed-node-4]
changed: [testbed-node-5] => (item=None)
skipping: [testbed-node-5] => (item=None)
changed: [testbed-node-5]

TASK [ceph-osd : Set noup flag] ************************************************
Saturday 31 May 2025 20:59:19 +0000 (0:00:01.330) 0:06:58.204 **********
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]

TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
Saturday 31 May 2025 20:59:21 +0000 (0:00:01.975) 0:07:00.180 **********
included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-osd : Use ceph-volume to create osds] *******************************
Saturday 31 May 2025 20:59:22 +0000 (0:00:00.523) 0:07:00.703 **********
changed: [testbed-node-4] => (item={'data': 'osd-block-7717ad38-094f-5aa6-8c39-f28029f817d5', 'data_vg': 'ceph-7717ad38-094f-5aa6-8c39-f28029f817d5'})
changed: [testbed-node-5] => (item={'data': 'osd-block-edfa5e9a-3f1a-54c1-83f4-345bb781a14b', 'data_vg': 'ceph-edfa5e9a-3f1a-54c1-83f4-345bb781a14b'})
changed: [testbed-node-3] => (item={'data': 'osd-block-813d0644-8ada-5e52-b3d8-7484365c4567', 'data_vg': 'ceph-813d0644-8ada-5e52-b3d8-7484365c4567'})
changed: [testbed-node-4] => (item={'data': 'osd-block-6fa9e552-f12f-547e-b45f-d034b93383af', 'data_vg': 'ceph-6fa9e552-f12f-547e-b45f-d034b93383af'})
changed: [testbed-node-5] => (item={'data': 'osd-block-a23536e0-7351-5f09-a3c0-98b1bc7f8fff', 'data_vg': 'ceph-a23536e0-7351-5f09-a3c0-98b1bc7f8fff'})
changed: [testbed-node-3] => (item={'data': 'osd-block-b37e5891-99ec-5ce8-8fa7-674876c21edd', 'data_vg': 'ceph-b37e5891-99ec-5ce8-8fa7-674876c21edd'})
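Each loop item above turns one pre-provisioned LVM volume group/logical volume pair into a bluestore OSD; the six OSDs (two per node) took roughly 45 seconds in total. Reduced to the CLI, every item amounts to one ceph-volume call (ceph-ansible actually drives this through its own module inside the daemon container, so this is an illustration, not the literal task):

- name: Use ceph-volume to create osds
  ansible.builtin.command: >-
    ceph-volume lvm create --bluestore
    --data {{ item.data_vg }}/{{ item.data }}
  loop: "{{ lvm_volumes }}"
  become: true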
TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
Saturday 31 May 2025 21:00:06 +0000 (0:00:44.551) 0:07:45.255 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-osd : Include_tasks start_osds.yml] *********************************
Saturday 31 May 2025 21:00:07 +0000 (0:00:00.556) 0:07:45.811 **********
included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-osd : Get osd ids] **************************************************
Saturday 31 May 2025 21:00:07 +0000 (0:00:00.521) 0:07:46.333 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-osd : Collect osd ids] **********************************************
Saturday 31 May 2025 21:00:08 +0000 (0:00:02.692) 0:07:46.959 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-osd : Include_tasks systemd.yml] ************************************
Saturday 31 May 2025 21:00:11 +0000 (0:00:02.692) 0:07:49.652 **********
included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-osd : Generate systemd unit file] ***********************************
Saturday 31 May 2025 21:00:11 +0000 (0:00:00.528) 0:07:50.180 **********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-osd : Generate systemd ceph-osd target file] ************************
Saturday 31 May 2025 21:00:12 +0000 (0:00:01.100) 0:07:51.281 **********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-osd : Enable ceph-osd.target] ***************************************
Saturday 31 May 2025 21:00:14 +0000 (0:00:01.399) 0:07:52.681 **********
changed: [testbed-node-4]
changed: [testbed-node-3]
changed: [testbed-node-5]

TASK [ceph-osd : Ensure systemd service override directory exists] *************
Saturday 31 May 2025 21:00:15 +0000 (0:00:01.683) 0:07:54.365 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-osd : Add ceph-osd systemd service overrides] ***********************
Saturday 31 May 2025 21:00:16 +0000 (0:00:00.311) 0:07:54.677 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] *********
Saturday 31 May 2025 21:00:16 +0000 (0:00:00.325) 0:07:55.002 **********
ok: [testbed-node-3] => (item=5)
ok: [testbed-node-4] => (item=3)
ok: [testbed-node-5] => (item=2)
ok: [testbed-node-3] => (item=0)
ok: [testbed-node-4] => (item=1)
ok: [testbed-node-5] => (item=4)

TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] *****************
Saturday 31 May 2025 21:00:17 +0000 (0:00:01.371) 0:07:56.374 **********
changed: [testbed-node-3] => (item=5)
changed: [testbed-node-4] => (item=3)
changed: [testbed-node-5] => (item=2)
changed: [testbed-node-3] => (item=0)
changed: [testbed-node-4] => (item=1)
changed: [testbed-node-5] => (item=4)

TASK [ceph-osd : Systemd start osd] ********************************************
Saturday 31 May 2025 21:00:19 +0000 (0:00:02.060) 0:07:58.435 **********
changed: [testbed-node-3] => (item=5)
changed: [testbed-node-4] => (item=3)
changed: [testbed-node-5] => (item=2)
changed: [testbed-node-4] => (item=1)
changed: [testbed-node-3] => (item=0)
changed: [testbed-node-5] => (item=4)
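Note the bracket around OSD activation: 'Set noup flag' ran before the OSDs were created, and 'Unset noup flag' follows, so the new OSDs cannot be marked up (and trigger peering and rebalancing) until all of them exist and are started. The two delegated tasks boil down to (group name assumed):

- name: Set noup flag
  ansible.builtin.command: ceph osd set noup
  delegate_to: "{{ groups['mons'][0] }}"
  run_once: true

- name: Unset noup flag
  ansible.builtin.command: ceph osd unset noup
  delegate_to: "{{ groups['mons'][0] }}"
  run_once: true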
TASK [ceph-osd : Unset noup flag] **********************************************
Saturday 31 May 2025 21:00:23 +0000 (0:00:03.444) 0:08:01.880 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)]

TASK [ceph-osd : Wait for all osd to be up] ************************************
Saturday 31 May 2025 21:00:26 +0000 (0:00:02.833) 0:08:04.713 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left).
ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]

TASK [ceph-osd : Include crush_rules.yml] **************************************
Saturday 31 May 2025 21:00:39 +0000 (0:00:12.858) 0:08:17.571 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
Saturday 31 May 2025 21:00:39 +0000 (0:00:00.862) 0:08:18.434 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Osds handler] **********************************
Saturday 31 May 2025 21:00:40 +0000 (0:00:00.552) 0:08:18.986 **********
included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5

RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
Saturday 31 May 2025 21:00:41 +0000 (0:00:00.511) 0:08:19.498 **********
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
Saturday 31 May 2025 21:00:41 +0000 (0:00:00.393) 0:08:19.892 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
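'Wait for all osd to be up' needed a single retry: the first poll ran before all six OSDs had registered as up. A sketch of such a poll, assuming it compares num_up_osds against num_osds from ceph osd stat (field names as in current Ceph JSON output; the role's exact expression may differ):

- name: Wait for all osd to be up
  ansible.builtin.command: ceph osd stat --format json
  register: osd_stat
  delegate_to: "{{ groups['mons'][0] }}"
  run_once: true
  retries: 60
  delay: 10
  until: >-
    (osd_stat.stdout | from_json).num_osds > 0 and
    (osd_stat.stdout | from_json).num_osds == (osd_stat.stdout | from_json).num_up_osds
  changed_when: false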
RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
Saturday 31 May 2025 21:00:41 +0000 (0:00:00.328) 0:08:20.220 **********
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
Saturday 31 May 2025 21:00:42 +0000 (0:00:00.227) 0:08:20.448 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Get pool list] *********************************
Saturday 31 May 2025 21:00:42 +0000 (0:00:00.535) 0:08:20.983 **********
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
Saturday 31 May 2025 21:00:42 +0000 (0:00:00.210) 0:08:21.194 **********
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
Saturday 31 May 2025 21:00:43 +0000 (0:00:00.249) 0:08:21.443 **********
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
Saturday 31 May 2025 21:00:43 +0000 (0:00:00.127) 0:08:21.571 **********
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
Saturday 31 May 2025 21:00:43 +0000 (0:00:00.242) 0:08:21.814 **********
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
Saturday 31 May 2025 21:00:43 +0000 (0:00:00.229) 0:08:22.043 **********
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
Saturday 31 May 2025 21:00:43 +0000 (0:00:00.395) 0:08:22.438 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
Saturday 31 May 2025 21:00:44 +0000 (0:00:00.375) 0:08:22.814 **********
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
Saturday 31 May 2025 21:00:45 +0000 (0:00:00.851) 0:08:23.665 **********
skipping: [testbed-node-3]
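All of the restart machinery above was skipped because no OSD needed restarting, but the intended sequence is visible in the handler names: capture the pool list and balancer state, switch the balancer and per-pool pg autoscaling off, restart the OSDs node by node, then switch both back on. In ceph CLI terms, roughly (the pool_list variable name is assumed):

- name: Disable balancer
  ansible.builtin.command: ceph balancer off
  run_once: true

- name: Disable pg autoscale on pools
  ansible.builtin.command: "ceph osd pool set {{ item }} pg_autoscale_mode off"
  loop: "{{ pool_list }}"
  run_once: true

# ... restart OSD daemons here ...

- name: Re-enable pg autoscale on pools
  ansible.builtin.command: "ceph osd pool set {{ item }} pg_autoscale_mode on"
  loop: "{{ pool_list }}"
  run_once: true

- name: Re-enable balancer
  ansible.builtin.command: ceph balancer on
  run_once: true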
PLAY [Apply role ceph-crash] ***************************************************

TASK [ceph-handler : Include check_running_cluster.yml] ************************
Saturday 31 May 2025 21:00:45 +0000 (0:00:00.686) 0:08:24.352 **********
included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Include check_running_containers.yml] *********************
Saturday 31 May 2025 21:00:46 +0000 (0:00:01.033) 0:08:25.385 **********
included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Check for a mon container] ********************************
Saturday 31 May 2025 21:00:47 +0000 (0:00:00.933) 0:08:26.318 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for an osd container] *******************************
Saturday 31 May 2025 21:00:48 +0000 (0:00:00.685) 0:08:27.004 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mds container] ********************************
Saturday 31 May 2025 21:00:49 +0000 (0:00:00.881) 0:08:27.886 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a rgw container] ********************************
Saturday 31 May 2025 21:00:50 +0000 (0:00:01.061) 0:08:28.947 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mgr container] ********************************
Saturday 31 May 2025 21:00:51 +0000 (0:00:00.966) 0:08:29.914 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a rbd mirror container] *************************
Saturday 31 May 2025 21:00:52 +0000 (0:00:00.809) 0:08:30.724 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a nfs container] ********************************
Saturday 31 May 2025 21:00:52 +0000 (0:00:00.668) 0:08:31.392 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-crash container] *************************
Saturday 31 May 2025 21:00:53 +0000 (0:00:00.859) 0:08:32.252 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-exporter container] **********************
Saturday 31 May 2025 21:00:54 +0000 (0:00:01.035) 0:08:33.287 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Include check_socket_non_container.yml] *******************
Saturday 31 May 2025 21:00:56 +0000 (0:00:01.319) 0:08:34.606 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mon_status] ******************************
Saturday 31 May 2025 21:00:56 +0000 (0:00:00.549) 0:08:35.156 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_osd_status] ******************************
Saturday 31 May 2025 21:00:57 +0000 (0:00:00.752) 0:08:35.908 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mds_status] ******************************
Saturday 31 May 2025 21:00:58 +0000 (0:00:00.595) 0:08:36.503 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
Saturday 31 May 2025 21:00:58 +0000 (0:00:00.778) 0:08:37.282 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
Saturday 31 May 2025 21:00:59 +0000 (0:00:00.600) 0:08:37.882 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
Saturday 31 May 2025 21:01:00 +0000 (0:00:00.779) 0:08:38.662 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
Saturday 31 May 2025 21:01:00 +0000 (0:00:00.625) 0:08:39.287 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_crash_status] ****************************
Saturday 31 May 2025 21:01:01 +0000 (0:00:00.787) 0:08:40.075 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_exporter_status] *************************
Saturday 31 May 2025 21:01:02 +0000 (0:00:00.620) 0:08:40.695 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-crash : Create client.crash keyring] ********************************
Saturday 31 May 2025 21:01:03 +0000 (0:00:01.202) 0:08:41.897 **********
changed: [testbed-node-0]

TASK [ceph-crash : Get keys from monitors] *************************************
Saturday 31 May 2025 21:01:07 +0000 (0:00:03.919) 0:08:45.816 **********
ok: [testbed-node-0]

TASK [ceph-crash : Copy ceph key(s) if needed] *********************************
Saturday 31 May 2025 21:01:09 +0000 (0:00:01.978) 0:08:47.794 **********
ok: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-crash : Create /var/lib/ceph/crash/posted] **************************
Saturday 31 May 2025 21:01:11 +0000 (0:00:01.703) 0:08:49.498 **********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
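ceph-crash authenticates with a dedicated client.crash key that is created once on the first mon and then distributed to every node by the 'Copy ceph key(s) if needed' task above. Upstream Ceph documents the matching auth profile; as a one-task sketch (delegation target and output path assumed):

- name: Create client.crash keyring
  ansible.builtin.command: >-
    ceph auth get-or-create client.crash
    mon 'profile crash' mgr 'profile crash'
    -o /etc/ceph/ceph.client.crash.keyring
  delegate_to: "{{ groups['mons'][0] }}"
  run_once: true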
/ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-31 21:03:26.672782 | orchestrator | 2025-05-31 21:03:26.672786 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2025-05-31 21:03:26.672791 | orchestrator | Saturday 31 May 2025 21:01:13 +0000 (0:00:01.165) 0:08:51.601 ********** 2025-05-31 21:03:26.672796 | orchestrator | changed: [testbed-node-0] 2025-05-31 21:03:26.672803 | orchestrator | changed: [testbed-node-1] 2025-05-31 21:03:26.672808 | orchestrator | changed: [testbed-node-2] 2025-05-31 21:03:26.672812 | orchestrator | changed: [testbed-node-3] 2025-05-31 21:03:26.672817 | orchestrator | changed: [testbed-node-4] 2025-05-31 21:03:26.672822 | orchestrator | changed: [testbed-node-5] 2025-05-31 21:03:26.672827 | orchestrator | 2025-05-31 21:03:26.672831 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2025-05-31 21:03:26.672836 | orchestrator | Saturday 31 May 2025 21:01:14 +0000 (0:00:01.709) 0:08:53.311 ********** 2025-05-31 21:03:26.672841 | orchestrator | changed: [testbed-node-0] 2025-05-31 21:03:26.672845 | orchestrator | changed: [testbed-node-1] 2025-05-31 21:03:26.672850 | orchestrator | changed: [testbed-node-3] 2025-05-31 21:03:26.672889 | orchestrator | changed: [testbed-node-4] 2025-05-31 21:03:26.672894 | orchestrator | changed: [testbed-node-2] 2025-05-31 21:03:26.672899 | orchestrator | changed: [testbed-node-5] 2025-05-31 21:03:26.672904 | orchestrator | 2025-05-31 21:03:26.672909 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2025-05-31 21:03:26.672913 | orchestrator | Saturday 31 May 2025 21:01:18 +0000 (0:00:03.134) 0:08:56.445 ********** 2025-05-31 21:03:26.672918 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-31 21:03:26.672923 | orchestrator | 2025-05-31 21:03:26.672928 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2025-05-31 21:03:26.672933 | orchestrator | Saturday 31 May 2025 21:01:19 +0000 (0:00:01.255) 0:08:57.701 ********** 2025-05-31 21:03:26.672937 | orchestrator | ok: [testbed-node-0] 2025-05-31 21:03:26.672942 | orchestrator | ok: [testbed-node-1] 2025-05-31 21:03:26.672947 | orchestrator | ok: [testbed-node-2] 2025-05-31 21:03:26.672952 | orchestrator | ok: [testbed-node-3] 2025-05-31 21:03:26.672956 | orchestrator | ok: [testbed-node-4] 2025-05-31 21:03:26.672961 | orchestrator | ok: [testbed-node-5] 2025-05-31 21:03:26.672966 | orchestrator | 2025-05-31 21:03:26.672976 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2025-05-31 21:03:26.672980 | orchestrator | Saturday 31 May 2025 21:01:20 +0000 (0:00:00.786) 0:08:58.487 ********** 2025-05-31 21:03:26.672985 | orchestrator | changed: [testbed-node-3] 2025-05-31 21:03:26.672990 | orchestrator | changed: [testbed-node-1] 2025-05-31 21:03:26.672995 | orchestrator | changed: [testbed-node-4] 2025-05-31 21:03:26.672999 | orchestrator | changed: [testbed-node-2] 2025-05-31 21:03:26.673004 | orchestrator | changed: [testbed-node-5] 2025-05-31 21:03:26.673009 | orchestrator | changed: [testbed-node-0] 2025-05-31 21:03:26.673014 | orchestrator | 2025-05-31 21:03:26.673018 | orchestrator | RUNNING HANDLER [ceph-handler : Set 
_crash_handler_called after restart] ******* 2025-05-31 21:03:26.673032 | orchestrator | Saturday 31 May 2025 21:01:22 +0000 (0:00:02.594) 0:09:01.081 ********** 2025-05-31 21:03:26.673037 | orchestrator | ok: [testbed-node-0] 2025-05-31 21:03:26.673042 | orchestrator | ok: [testbed-node-1] 2025-05-31 21:03:26.673047 | orchestrator | ok: [testbed-node-2] 2025-05-31 21:03:26.673052 | orchestrator | ok: [testbed-node-3] 2025-05-31 21:03:26.673056 | orchestrator | ok: [testbed-node-4] 2025-05-31 21:03:26.673061 | orchestrator | ok: [testbed-node-5] 2025-05-31 21:03:26.673066 | orchestrator | 2025-05-31 21:03:26.673070 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2025-05-31 21:03:26.673075 | orchestrator | 2025-05-31 21:03:26.673080 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-05-31 21:03:26.673085 | orchestrator | Saturday 31 May 2025 21:01:23 +0000 (0:00:01.076) 0:09:02.158 ********** 2025-05-31 21:03:26.673090 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-31 21:03:26.673095 | orchestrator | 2025-05-31 21:03:26.673100 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-05-31 21:03:26.673104 | orchestrator | Saturday 31 May 2025 21:01:24 +0000 (0:00:00.490) 0:09:02.649 ********** 2025-05-31 21:03:26.673109 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-31 21:03:26.673114 | orchestrator | 2025-05-31 21:03:26.673119 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-05-31 21:03:26.673124 | orchestrator | Saturday 31 May 2025 21:01:24 +0000 (0:00:00.748) 0:09:03.397 ********** 2025-05-31 21:03:26.673129 | orchestrator | skipping: [testbed-node-3] 2025-05-31 21:03:26.673134 | orchestrator | skipping: [testbed-node-4] 2025-05-31 21:03:26.673139 | orchestrator | skipping: [testbed-node-5] 2025-05-31 21:03:26.673143 | orchestrator | 2025-05-31 21:03:26.673148 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-05-31 21:03:26.673153 | orchestrator | Saturday 31 May 2025 21:01:25 +0000 (0:00:00.369) 0:09:03.767 ********** 2025-05-31 21:03:26.673158 | orchestrator | ok: [testbed-node-3] 2025-05-31 21:03:26.673163 | orchestrator | ok: [testbed-node-4] 2025-05-31 21:03:26.673167 | orchestrator | ok: [testbed-node-5] 2025-05-31 21:03:26.673172 | orchestrator | 2025-05-31 21:03:26.673177 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-05-31 21:03:26.673182 | orchestrator | Saturday 31 May 2025 21:01:26 +0000 (0:00:00.737) 0:09:04.504 ********** 2025-05-31 21:03:26.673187 | orchestrator | ok: [testbed-node-3] 2025-05-31 21:03:26.673191 | orchestrator | ok: [testbed-node-4] 2025-05-31 21:03:26.673196 | orchestrator | ok: [testbed-node-5] 2025-05-31 21:03:26.673200 | orchestrator | 2025-05-31 21:03:26.673205 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-05-31 21:03:26.673209 | orchestrator | Saturday 31 May 2025 21:01:27 +0000 (0:00:00.941) 0:09:05.446 ********** 2025-05-31 21:03:26.673214 | orchestrator | ok: [testbed-node-3] 2025-05-31 21:03:26.673219 | orchestrator | ok: [testbed-node-4] 2025-05-31 21:03:26.673223 | orchestrator | ok: 
[testbed-node-5] 2025-05-31 21:03:26.673228 | orchestrator | 2025-05-31 21:03:26.673233 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-05-31 21:03:26.673237 | orchestrator | Saturday 31 May 2025 21:01:27 +0000 (0:00:00.783) 0:09:06.230 ********** 2025-05-31 21:03:26.673247 | orchestrator | skipping: [testbed-node-3] 2025-05-31 21:03:26.673251 | orchestrator | skipping: [testbed-node-4] 2025-05-31 21:03:26.673256 | orchestrator | skipping: [testbed-node-5] 2025-05-31 21:03:26.673260 | orchestrator | 2025-05-31 21:03:26.673265 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-05-31 21:03:26.673272 | orchestrator | Saturday 31 May 2025 21:01:28 +0000 (0:00:00.295) 0:09:06.526 ********** 2025-05-31 21:03:26.673277 | orchestrator | skipping: [testbed-node-3] 2025-05-31 21:03:26.673281 | orchestrator | skipping: [testbed-node-4] 2025-05-31 21:03:26.673286 | orchestrator | skipping: [testbed-node-5] 2025-05-31 21:03:26.673290 | orchestrator | 2025-05-31 21:03:26.673295 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-05-31 21:03:26.673299 | orchestrator | Saturday 31 May 2025 21:01:28 +0000 (0:00:00.284) 0:09:06.811 ********** 2025-05-31 21:03:26.673304 | orchestrator | skipping: [testbed-node-3] 2025-05-31 21:03:26.673308 | orchestrator | skipping: [testbed-node-4] 2025-05-31 21:03:26.673313 | orchestrator | skipping: [testbed-node-5] 2025-05-31 21:03:26.673317 | orchestrator | 2025-05-31 21:03:26.673322 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-05-31 21:03:26.673326 | orchestrator | Saturday 31 May 2025 21:01:28 +0000 (0:00:00.597) 0:09:07.408 ********** 2025-05-31 21:03:26.673331 | orchestrator | ok: [testbed-node-3] 2025-05-31 21:03:26.673335 | orchestrator | ok: [testbed-node-4] 2025-05-31 21:03:26.673340 | orchestrator | ok: [testbed-node-5] 2025-05-31 21:03:26.673344 | orchestrator | 2025-05-31 21:03:26.673349 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-05-31 21:03:26.673354 | orchestrator | Saturday 31 May 2025 21:01:29 +0000 (0:00:00.719) 0:09:08.127 ********** 2025-05-31 21:03:26.673358 | orchestrator | ok: [testbed-node-3] 2025-05-31 21:03:26.673363 | orchestrator | ok: [testbed-node-4] 2025-05-31 21:03:26.673367 | orchestrator | ok: [testbed-node-5] 2025-05-31 21:03:26.673372 | orchestrator | 2025-05-31 21:03:26.673376 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-05-31 21:03:26.673381 | orchestrator | Saturday 31 May 2025 21:01:30 +0000 (0:00:00.838) 0:09:08.966 ********** 2025-05-31 21:03:26.673385 | orchestrator | skipping: [testbed-node-3] 2025-05-31 21:03:26.673390 | orchestrator | skipping: [testbed-node-4] 2025-05-31 21:03:26.673394 | orchestrator | skipping: [testbed-node-5] 2025-05-31 21:03:26.673399 | orchestrator | 2025-05-31 21:03:26.673403 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-05-31 21:03:26.673414 | orchestrator | Saturday 31 May 2025 21:01:30 +0000 (0:00:00.328) 0:09:09.295 ********** 2025-05-31 21:03:26.673419 | orchestrator | skipping: [testbed-node-3] 2025-05-31 21:03:26.673424 | orchestrator | skipping: [testbed-node-4] 2025-05-31 21:03:26.673428 | orchestrator | skipping: [testbed-node-5] 2025-05-31 21:03:26.673433 | orchestrator | 2025-05-31 21:03:26.673437 | 
orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-05-31 21:03:26.673442 | orchestrator | Saturday 31 May 2025 21:01:31 +0000 (0:00:00.772) 0:09:10.067 ********** 2025-05-31 21:03:26.673450 | orchestrator | ok: [testbed-node-3] 2025-05-31 21:03:26.673454 | orchestrator | ok: [testbed-node-4] 2025-05-31 21:03:26.673459 | orchestrator | ok: [testbed-node-5] 2025-05-31 21:03:26.673463 | orchestrator | 2025-05-31 21:03:26.673468 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-05-31 21:03:26.673472 | orchestrator | Saturday 31 May 2025 21:01:32 +0000 (0:00:00.441) 0:09:10.508 ********** 2025-05-31 21:03:26.673477 | orchestrator | ok: [testbed-node-3] 2025-05-31 21:03:26.673482 | orchestrator | ok: [testbed-node-4] 2025-05-31 21:03:26.673487 | orchestrator | ok: [testbed-node-5] 2025-05-31 21:03:26.673491 | orchestrator | 2025-05-31 21:03:26.673495 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-05-31 21:03:26.673500 | orchestrator | Saturday 31 May 2025 21:01:32 +0000 (0:00:00.314) 0:09:10.822 ********** 2025-05-31 21:03:26.673505 | orchestrator | ok: [testbed-node-3] 2025-05-31 21:03:26.673515 | orchestrator | ok: [testbed-node-4] 2025-05-31 21:03:26.673520 | orchestrator | ok: [testbed-node-5] 2025-05-31 21:03:26.673524 | orchestrator | 2025-05-31 21:03:26.673529 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-05-31 21:03:26.673533 | orchestrator | Saturday 31 May 2025 21:01:32 +0000 (0:00:00.355) 0:09:11.178 ********** 2025-05-31 21:03:26.673538 | orchestrator | skipping: [testbed-node-3] 2025-05-31 21:03:26.673542 | orchestrator | skipping: [testbed-node-4] 2025-05-31 21:03:26.673547 | orchestrator | skipping: [testbed-node-5] 2025-05-31 21:03:26.673551 | orchestrator | 2025-05-31 21:03:26.673556 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-05-31 21:03:26.673561 | orchestrator | Saturday 31 May 2025 21:01:33 +0000 (0:00:00.600) 0:09:11.779 ********** 2025-05-31 21:03:26.673565 | orchestrator | skipping: [testbed-node-3] 2025-05-31 21:03:26.673570 | orchestrator | skipping: [testbed-node-4] 2025-05-31 21:03:26.673574 | orchestrator | skipping: [testbed-node-5] 2025-05-31 21:03:26.673579 | orchestrator | 2025-05-31 21:03:26.673583 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-05-31 21:03:26.673588 | orchestrator | Saturday 31 May 2025 21:01:33 +0000 (0:00:00.318) 0:09:12.097 ********** 2025-05-31 21:03:26.673592 | orchestrator | skipping: [testbed-node-3] 2025-05-31 21:03:26.673597 | orchestrator | skipping: [testbed-node-4] 2025-05-31 21:03:26.673601 | orchestrator | skipping: [testbed-node-5] 2025-05-31 21:03:26.673606 | orchestrator | 2025-05-31 21:03:26.673610 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-05-31 21:03:26.673615 | orchestrator | Saturday 31 May 2025 21:01:33 +0000 (0:00:00.323) 0:09:12.421 ********** 2025-05-31 21:03:26.673620 | orchestrator | ok: [testbed-node-3] 2025-05-31 21:03:26.673624 | orchestrator | ok: [testbed-node-4] 2025-05-31 21:03:26.673629 | orchestrator | ok: [testbed-node-5] 2025-05-31 21:03:26.673633 | orchestrator | 2025-05-31 21:03:26.673638 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-05-31 21:03:26.673642 | 
orchestrator | Saturday 31 May 2025 21:01:34 +0000 (0:00:00.361) 0:09:12.782 ********** 2025-05-31 21:03:26.673647 | orchestrator | ok: [testbed-node-3] 2025-05-31 21:03:26.673651 | orchestrator | ok: [testbed-node-4] 2025-05-31 21:03:26.673656 | orchestrator | ok: [testbed-node-5] 2025-05-31 21:03:26.673660 | orchestrator | 2025-05-31 21:03:26.673665 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2025-05-31 21:03:26.673669 | orchestrator | Saturday 31 May 2025 21:01:35 +0000 (0:00:00.825) 0:09:13.608 ********** 2025-05-31 21:03:26.673674 | orchestrator | skipping: [testbed-node-4] 2025-05-31 21:03:26.673679 | orchestrator | skipping: [testbed-node-5] 2025-05-31 21:03:26.673683 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2025-05-31 21:03:26.673688 | orchestrator | 2025-05-31 21:03:26.673692 | orchestrator | TASK [ceph-facts : Get current default crush rule details] ********************* 2025-05-31 21:03:26.673697 | orchestrator | Saturday 31 May 2025 21:01:35 +0000 (0:00:00.460) 0:09:14.069 ********** 2025-05-31 21:03:26.673702 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-05-31 21:03:26.673706 | orchestrator | 2025-05-31 21:03:26.673711 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************ 2025-05-31 21:03:26.673715 | orchestrator | Saturday 31 May 2025 21:01:37 +0000 (0:00:02.046) 0:09:16.115 ********** 2025-05-31 21:03:26.673721 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2025-05-31 21:03:26.673726 | orchestrator | skipping: [testbed-node-3] 2025-05-31 21:03:26.673731 | orchestrator | 2025-05-31 21:03:26.673735 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2025-05-31 21:03:26.673740 | orchestrator | Saturday 31 May 2025 21:01:37 +0000 (0:00:00.215) 0:09:16.331 ********** 2025-05-31 21:03:26.673746 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-31 21:03:26.673779 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-31 21:03:26.673785 | orchestrator | 2025-05-31 21:03:26.673789 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2025-05-31 21:03:26.673794 | orchestrator | Saturday 31 May 2025 21:01:46 +0000 (0:00:08.516) 0:09:24.847 ********** 2025-05-31 21:03:26.673799 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-05-31 21:03:26.673803 | orchestrator | 2025-05-31 21:03:26.673808 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2025-05-31 21:03:26.673812 | orchestrator | Saturday 31 May 2025 21:01:50 +0000 (0:00:03.822) 0:09:28.670 ********** 2025-05-31 21:03:26.673820 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for 
testbed-node-3, testbed-node-4, testbed-node-5 2025-05-31 21:03:26.673824 | orchestrator | 2025-05-31 21:03:26.673829 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2025-05-31 21:03:26.673834 | orchestrator | Saturday 31 May 2025 21:01:50 +0000 (0:00:00.550) 0:09:29.220 ********** 2025-05-31 21:03:26.673838 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2025-05-31 21:03:26.673843 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2025-05-31 21:03:26.673847 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2025-05-31 21:03:26.673852 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2025-05-31 21:03:26.673872 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2025-05-31 21:03:26.673877 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2025-05-31 21:03:26.673881 | orchestrator | 2025-05-31 21:03:26.673886 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2025-05-31 21:03:26.673890 | orchestrator | Saturday 31 May 2025 21:01:51 +0000 (0:00:01.182) 0:09:30.403 ********** 2025-05-31 21:03:26.673895 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-31 21:03:26.673899 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-31 21:03:26.673904 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-05-31 21:03:26.673908 | orchestrator | 2025-05-31 21:03:26.673913 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2025-05-31 21:03:26.673917 | orchestrator | Saturday 31 May 2025 21:01:54 +0000 (0:00:02.295) 0:09:32.699 ********** 2025-05-31 21:03:26.673922 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-05-31 21:03:26.673927 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-31 21:03:26.673931 | orchestrator | changed: [testbed-node-3] 2025-05-31 21:03:26.673936 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-31 21:03:26.673940 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-31 21:03:26.673945 | orchestrator | changed: [testbed-node-5] 2025-05-31 21:03:26.673949 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-31 21:03:26.673954 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-31 21:03:26.673958 | orchestrator | changed: [testbed-node-4] 2025-05-31 21:03:26.673963 | orchestrator | 2025-05-31 21:03:26.673967 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2025-05-31 21:03:26.673972 | orchestrator | Saturday 31 May 2025 21:01:55 +0000 (0:00:01.595) 0:09:34.294 ********** 2025-05-31 21:03:26.673976 | orchestrator | changed: [testbed-node-4] 2025-05-31 21:03:26.673981 | orchestrator | changed: [testbed-node-3] 2025-05-31 21:03:26.673989 | orchestrator | changed: [testbed-node-5] 2025-05-31 21:03:26.673994 | orchestrator | 2025-05-31 21:03:26.673998 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2025-05-31 21:03:26.674003 | orchestrator | Saturday 31 May 2025 21:01:58 +0000 (0:00:02.861) 0:09:37.156 ********** 2025-05-31 21:03:26.674008 | orchestrator | skipping: [testbed-node-3] 2025-05-31 21:03:26.674012 | orchestrator | skipping: [testbed-node-4] 2025-05-31 21:03:26.674043 | 
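
Annotation: the ceph-mds steps logged above ("Create filesystem pools" and "Create ceph filesystem") reduce to two ceph CLI operations run on a monitor. A minimal Ansible sketch of that pair, not the verbatim ceph-ansible tasks: the inventory group name is an assumption, while the pool names and pg_num values come from the logged items.

    - name: Create filesystem pools (sketch)
      ansible.builtin.command: "ceph osd pool create {{ item.name }} {{ item.pg_num }}"
      loop:
        - { name: cephfs_data, pg_num: 16 }
        - { name: cephfs_metadata, pg_num: 16 }
      delegate_to: "{{ groups['mons'][0] }}"  # assumed group name; the log shows delegation to testbed-node-0
      changed_when: true

    - name: Create ceph filesystem (sketch)
      ansible.builtin.command: "ceph fs new cephfs cephfs_metadata cephfs_data"  # metadata pool first, then data pool
      delegate_to: "{{ groups['mons'][0] }}"
      changed_when: true
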
orchestrator | skipping: [testbed-node-5] 2025-05-31 21:03:26.674048 | orchestrator | 2025-05-31 21:03:26.674053 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2025-05-31 21:03:26.674057 | orchestrator | Saturday 31 May 2025 21:01:59 +0000 (0:00:00.317) 0:09:37.473 ********** 2025-05-31 21:03:26.674065 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-31 21:03:26.674070 | orchestrator | 2025-05-31 21:03:26.674075 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2025-05-31 21:03:26.674079 | orchestrator | Saturday 31 May 2025 21:01:59 +0000 (0:00:00.900) 0:09:38.373 ********** 2025-05-31 21:03:26.674084 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-31 21:03:26.674088 | orchestrator | 2025-05-31 21:03:26.674093 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2025-05-31 21:03:26.674097 | orchestrator | Saturday 31 May 2025 21:02:00 +0000 (0:00:00.545) 0:09:38.918 ********** 2025-05-31 21:03:26.674101 | orchestrator | changed: [testbed-node-3] 2025-05-31 21:03:26.674106 | orchestrator | changed: [testbed-node-4] 2025-05-31 21:03:26.674111 | orchestrator | changed: [testbed-node-5] 2025-05-31 21:03:26.674115 | orchestrator | 2025-05-31 21:03:26.674120 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2025-05-31 21:03:26.674124 | orchestrator | Saturday 31 May 2025 21:02:01 +0000 (0:00:01.195) 0:09:40.114 ********** 2025-05-31 21:03:26.674129 | orchestrator | changed: [testbed-node-3] 2025-05-31 21:03:26.674133 | orchestrator | changed: [testbed-node-4] 2025-05-31 21:03:26.674138 | orchestrator | changed: [testbed-node-5] 2025-05-31 21:03:26.674142 | orchestrator | 2025-05-31 21:03:26.674147 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2025-05-31 21:03:26.674151 | orchestrator | Saturday 31 May 2025 21:02:03 +0000 (0:00:01.449) 0:09:41.564 ********** 2025-05-31 21:03:26.674156 | orchestrator | changed: [testbed-node-3] 2025-05-31 21:03:26.674160 | orchestrator | changed: [testbed-node-4] 2025-05-31 21:03:26.674165 | orchestrator | changed: [testbed-node-5] 2025-05-31 21:03:26.674169 | orchestrator | 2025-05-31 21:03:26.674174 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2025-05-31 21:03:26.674178 | orchestrator | Saturday 31 May 2025 21:02:04 +0000 (0:00:01.766) 0:09:43.331 ********** 2025-05-31 21:03:26.674182 | orchestrator | changed: [testbed-node-3] 2025-05-31 21:03:26.674187 | orchestrator | changed: [testbed-node-4] 2025-05-31 21:03:26.674191 | orchestrator | changed: [testbed-node-5] 2025-05-31 21:03:26.674196 | orchestrator | 2025-05-31 21:03:26.674200 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2025-05-31 21:03:26.674205 | orchestrator | Saturday 31 May 2025 21:02:06 +0000 (0:00:01.845) 0:09:45.177 ********** 2025-05-31 21:03:26.674209 | orchestrator | ok: [testbed-node-3] 2025-05-31 21:03:26.674217 | orchestrator | ok: [testbed-node-4] 2025-05-31 21:03:26.674222 | orchestrator | ok: [testbed-node-5] 2025-05-31 21:03:26.674226 | orchestrator | 2025-05-31 21:03:26.674231 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 
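
Annotation: the mds deployment above follows the same containerized pattern as ceph-crash earlier in the log: render a systemd unit from a template, enable the target, start the unit. A minimal sketch of that pattern, assuming a template name and unit naming scheme modelled on the logged task titles (not the verbatim ceph-ansible tasks):

    - name: Generate systemd unit file (sketch)
      ansible.builtin.template:
        src: ceph-mds.service.j2              # assumed template name
        dest: /etc/systemd/system/ceph-mds@.service
        owner: root
        group: root
        mode: "0644"

    - name: Systemd start mds container (sketch)
      ansible.builtin.systemd:
        name: "ceph-mds@{{ ansible_facts['hostname'] }}"
        state: started
        enabled: true
        daemon_reload: true
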
2025-05-31 21:03:26.674235 | orchestrator | Saturday 31 May 2025 21:02:08 +0000 (0:00:01.405) 0:09:46.583 ********** 2025-05-31 21:03:26.674240 | orchestrator | changed: [testbed-node-3] 2025-05-31 21:03:26.674244 | orchestrator | changed: [testbed-node-4] 2025-05-31 21:03:26.674249 | orchestrator | changed: [testbed-node-5] 2025-05-31 21:03:26.674253 | orchestrator | 2025-05-31 21:03:26.674258 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2025-05-31 21:03:26.674266 | orchestrator | Saturday 31 May 2025 21:02:08 +0000 (0:00:00.683) 0:09:47.267 ********** 2025-05-31 21:03:26.674271 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-31 21:03:26.674276 | orchestrator | 2025-05-31 21:03:26.674280 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2025-05-31 21:03:26.674285 | orchestrator | Saturday 31 May 2025 21:02:09 +0000 (0:00:00.751) 0:09:48.018 ********** 2025-05-31 21:03:26.674289 | orchestrator | ok: [testbed-node-3] 2025-05-31 21:03:26.674294 | orchestrator | ok: [testbed-node-4] 2025-05-31 21:03:26.674298 | orchestrator | ok: [testbed-node-5] 2025-05-31 21:03:26.674303 | orchestrator | 2025-05-31 21:03:26.674307 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2025-05-31 21:03:26.674312 | orchestrator | Saturday 31 May 2025 21:02:09 +0000 (0:00:00.319) 0:09:48.338 ********** 2025-05-31 21:03:26.674316 | orchestrator | changed: [testbed-node-3] 2025-05-31 21:03:26.674321 | orchestrator | changed: [testbed-node-4] 2025-05-31 21:03:26.674325 | orchestrator | changed: [testbed-node-5] 2025-05-31 21:03:26.674330 | orchestrator | 2025-05-31 21:03:26.674334 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2025-05-31 21:03:26.674339 | orchestrator | Saturday 31 May 2025 21:02:11 +0000 (0:00:01.158) 0:09:49.496 ********** 2025-05-31 21:03:26.674343 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-31 21:03:26.674348 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-31 21:03:26.674352 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-31 21:03:26.674357 | orchestrator | skipping: [testbed-node-3] 2025-05-31 21:03:26.674361 | orchestrator | 2025-05-31 21:03:26.674366 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2025-05-31 21:03:26.674370 | orchestrator | Saturday 31 May 2025 21:02:11 +0000 (0:00:00.863) 0:09:50.360 ********** 2025-05-31 21:03:26.674375 | orchestrator | ok: [testbed-node-3] 2025-05-31 21:03:26.674379 | orchestrator | ok: [testbed-node-4] 2025-05-31 21:03:26.674384 | orchestrator | ok: [testbed-node-5] 2025-05-31 21:03:26.674388 | orchestrator | 2025-05-31 21:03:26.674393 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-05-31 21:03:26.674397 | orchestrator | 2025-05-31 21:03:26.674402 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-05-31 21:03:26.674407 | orchestrator | Saturday 31 May 2025 21:02:12 +0000 (0:00:00.793) 0:09:51.153 ********** 2025-05-31 21:03:26.674411 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-31 21:03:26.674416 | orchestrator | 2025-05-31 
21:03:26.674420 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-05-31 21:03:26.674425 | orchestrator | Saturday 31 May 2025 21:02:13 +0000 (0:00:00.487) 0:09:51.640 ********** 2025-05-31 21:03:26.674432 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-31 21:03:26.674437 | orchestrator | 2025-05-31 21:03:26.674441 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-05-31 21:03:26.674446 | orchestrator | Saturday 31 May 2025 21:02:13 +0000 (0:00:00.712) 0:09:52.352 ********** 2025-05-31 21:03:26.674450 | orchestrator | skipping: [testbed-node-3] 2025-05-31 21:03:26.674455 | orchestrator | skipping: [testbed-node-4] 2025-05-31 21:03:26.674459 | orchestrator | skipping: [testbed-node-5] 2025-05-31 21:03:26.674464 | orchestrator | 2025-05-31 21:03:26.674468 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-05-31 21:03:26.674473 | orchestrator | Saturday 31 May 2025 21:02:14 +0000 (0:00:00.301) 0:09:52.654 ********** 2025-05-31 21:03:26.674477 | orchestrator | ok: [testbed-node-3] 2025-05-31 21:03:26.674482 | orchestrator | ok: [testbed-node-4] 2025-05-31 21:03:26.674486 | orchestrator | ok: [testbed-node-5] 2025-05-31 21:03:26.674490 | orchestrator | 2025-05-31 21:03:26.674495 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-05-31 21:03:26.674503 | orchestrator | Saturday 31 May 2025 21:02:14 +0000 (0:00:00.704) 0:09:53.358 ********** 2025-05-31 21:03:26.674507 | orchestrator | ok: [testbed-node-3] 2025-05-31 21:03:26.674512 | orchestrator | ok: [testbed-node-4] 2025-05-31 21:03:26.674516 | orchestrator | ok: [testbed-node-5] 2025-05-31 21:03:26.674521 | orchestrator | 2025-05-31 21:03:26.674525 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-05-31 21:03:26.674530 | orchestrator | Saturday 31 May 2025 21:02:15 +0000 (0:00:00.669) 0:09:54.028 ********** 2025-05-31 21:03:26.674534 | orchestrator | ok: [testbed-node-3] 2025-05-31 21:03:26.674539 | orchestrator | ok: [testbed-node-4] 2025-05-31 21:03:26.674543 | orchestrator | ok: [testbed-node-5] 2025-05-31 21:03:26.674548 | orchestrator | 2025-05-31 21:03:26.674552 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-05-31 21:03:26.674557 | orchestrator | Saturday 31 May 2025 21:02:16 +0000 (0:00:01.019) 0:09:55.047 ********** 2025-05-31 21:03:26.674561 | orchestrator | skipping: [testbed-node-3] 2025-05-31 21:03:26.674566 | orchestrator | skipping: [testbed-node-4] 2025-05-31 21:03:26.674570 | orchestrator | skipping: [testbed-node-5] 2025-05-31 21:03:26.674575 | orchestrator | 2025-05-31 21:03:26.674579 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-05-31 21:03:26.674584 | orchestrator | Saturday 31 May 2025 21:02:16 +0000 (0:00:00.318) 0:09:55.366 ********** 2025-05-31 21:03:26.674588 | orchestrator | skipping: [testbed-node-3] 2025-05-31 21:03:26.674593 | orchestrator | skipping: [testbed-node-4] 2025-05-31 21:03:26.674598 | orchestrator | skipping: [testbed-node-5] 2025-05-31 21:03:26.674602 | orchestrator | 2025-05-31 21:03:26.674609 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-05-31 21:03:26.674614 | orchestrator | 
Saturday 31 May 2025 21:02:17 +0000 (0:00:00.298) 0:09:55.664 ********** 2025-05-31 21:03:26.674619 | orchestrator | skipping: [testbed-node-3] 2025-05-31 21:03:26.674623 | orchestrator | skipping: [testbed-node-4] 2025-05-31 21:03:26.674628 | orchestrator | skipping: [testbed-node-5] 2025-05-31 21:03:26.674632 | orchestrator | 2025-05-31 21:03:26.674637 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-05-31 21:03:26.674641 | orchestrator | Saturday 31 May 2025 21:02:17 +0000 (0:00:00.306) 0:09:55.971 ********** 2025-05-31 21:03:26.674646 | orchestrator | ok: [testbed-node-3] 2025-05-31 21:03:26.674650 | orchestrator | ok: [testbed-node-4] 2025-05-31 21:03:26.674655 | orchestrator | ok: [testbed-node-5] 2025-05-31 21:03:26.674659 | orchestrator | 2025-05-31 21:03:26.674664 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-05-31 21:03:26.674668 | orchestrator | Saturday 31 May 2025 21:02:18 +0000 (0:00:00.980) 0:09:56.952 ********** 2025-05-31 21:03:26.674673 | orchestrator | ok: [testbed-node-3] 2025-05-31 21:03:26.674677 | orchestrator | ok: [testbed-node-4] 2025-05-31 21:03:26.674682 | orchestrator | ok: [testbed-node-5] 2025-05-31 21:03:26.674686 | orchestrator | 2025-05-31 21:03:26.674691 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-05-31 21:03:26.674695 | orchestrator | Saturday 31 May 2025 21:02:19 +0000 (0:00:00.714) 0:09:57.667 ********** 2025-05-31 21:03:26.674700 | orchestrator | skipping: [testbed-node-3] 2025-05-31 21:03:26.674704 | orchestrator | skipping: [testbed-node-4] 2025-05-31 21:03:26.674709 | orchestrator | skipping: [testbed-node-5] 2025-05-31 21:03:26.674713 | orchestrator | 2025-05-31 21:03:26.674718 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-05-31 21:03:26.674722 | orchestrator | Saturday 31 May 2025 21:02:19 +0000 (0:00:00.304) 0:09:57.971 ********** 2025-05-31 21:03:26.674727 | orchestrator | skipping: [testbed-node-3] 2025-05-31 21:03:26.674731 | orchestrator | skipping: [testbed-node-4] 2025-05-31 21:03:26.674736 | orchestrator | skipping: [testbed-node-5] 2025-05-31 21:03:26.674740 | orchestrator | 2025-05-31 21:03:26.674745 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-05-31 21:03:26.674749 | orchestrator | Saturday 31 May 2025 21:02:19 +0000 (0:00:00.294) 0:09:58.266 ********** 2025-05-31 21:03:26.674759 | orchestrator | ok: [testbed-node-3] 2025-05-31 21:03:26.674763 | orchestrator | ok: [testbed-node-4] 2025-05-31 21:03:26.674768 | orchestrator | ok: [testbed-node-5] 2025-05-31 21:03:26.674772 | orchestrator | 2025-05-31 21:03:26.674777 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-05-31 21:03:26.674781 | orchestrator | Saturday 31 May 2025 21:02:20 +0000 (0:00:00.590) 0:09:58.856 ********** 2025-05-31 21:03:26.674786 | orchestrator | ok: [testbed-node-3] 2025-05-31 21:03:26.674790 | orchestrator | ok: [testbed-node-4] 2025-05-31 21:03:26.674795 | orchestrator | ok: [testbed-node-5] 2025-05-31 21:03:26.674799 | orchestrator | 2025-05-31 21:03:26.674804 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-05-31 21:03:26.674808 | orchestrator | Saturday 31 May 2025 21:02:20 +0000 (0:00:00.336) 0:09:59.192 ********** 2025-05-31 21:03:26.674813 | orchestrator | ok: 
[testbed-node-3] 2025-05-31 21:03:26.674817 | orchestrator | ok: [testbed-node-4] 2025-05-31 21:03:26.674822 | orchestrator | ok: [testbed-node-5] 2025-05-31 21:03:26.674826 | orchestrator | 2025-05-31 21:03:26.674830 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-05-31 21:03:26.674835 | orchestrator | Saturday 31 May 2025 21:02:21 +0000 (0:00:00.316) 0:09:59.509 ********** 2025-05-31 21:03:26.674839 | orchestrator | skipping: [testbed-node-3] 2025-05-31 21:03:26.674844 | orchestrator | skipping: [testbed-node-4] 2025-05-31 21:03:26.674849 | orchestrator | skipping: [testbed-node-5] 2025-05-31 21:03:26.674853 | orchestrator | 2025-05-31 21:03:26.674870 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-05-31 21:03:26.674875 | orchestrator | Saturday 31 May 2025 21:02:21 +0000 (0:00:00.315) 0:09:59.825 ********** 2025-05-31 21:03:26.674879 | orchestrator | skipping: [testbed-node-3] 2025-05-31 21:03:26.674884 | orchestrator | skipping: [testbed-node-4] 2025-05-31 21:03:26.674889 | orchestrator | skipping: [testbed-node-5] 2025-05-31 21:03:26.674893 | orchestrator | 2025-05-31 21:03:26.674898 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-05-31 21:03:26.674902 | orchestrator | Saturday 31 May 2025 21:02:21 +0000 (0:00:00.575) 0:10:00.400 ********** 2025-05-31 21:03:26.674907 | orchestrator | skipping: [testbed-node-3] 2025-05-31 21:03:26.674911 | orchestrator | skipping: [testbed-node-4] 2025-05-31 21:03:26.674916 | orchestrator | skipping: [testbed-node-5] 2025-05-31 21:03:26.674920 | orchestrator | 2025-05-31 21:03:26.674925 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-05-31 21:03:26.674929 | orchestrator | Saturday 31 May 2025 21:02:22 +0000 (0:00:00.295) 0:10:00.696 ********** 2025-05-31 21:03:26.674934 | orchestrator | ok: [testbed-node-3] 2025-05-31 21:03:26.674939 | orchestrator | ok: [testbed-node-4] 2025-05-31 21:03:26.674943 | orchestrator | ok: [testbed-node-5] 2025-05-31 21:03:26.674948 | orchestrator | 2025-05-31 21:03:26.674952 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-05-31 21:03:26.674957 | orchestrator | Saturday 31 May 2025 21:02:22 +0000 (0:00:00.320) 0:10:01.016 ********** 2025-05-31 21:03:26.674961 | orchestrator | ok: [testbed-node-3] 2025-05-31 21:03:26.674966 | orchestrator | ok: [testbed-node-4] 2025-05-31 21:03:26.674974 | orchestrator | ok: [testbed-node-5] 2025-05-31 21:03:26.674982 | orchestrator | 2025-05-31 21:03:26.674991 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2025-05-31 21:03:26.674999 | orchestrator | Saturday 31 May 2025 21:02:23 +0000 (0:00:00.772) 0:10:01.788 ********** 2025-05-31 21:03:26.675010 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-31 21:03:26.675019 | orchestrator | 2025-05-31 21:03:26.675027 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-05-31 21:03:26.675035 | orchestrator | Saturday 31 May 2025 21:02:23 +0000 (0:00:00.516) 0:10:02.305 ********** 2025-05-31 21:03:26.675042 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-31 21:03:26.675055 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-31 21:03:26.675062 | 
orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-05-31 21:03:26.675070 | orchestrator | 2025-05-31 21:03:26.675082 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-05-31 21:03:26.675089 | orchestrator | Saturday 31 May 2025 21:02:25 +0000 (0:00:02.130) 0:10:04.436 ********** 2025-05-31 21:03:26.675097 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-05-31 21:03:26.675104 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-31 21:03:26.675112 | orchestrator | changed: [testbed-node-3] 2025-05-31 21:03:26.675120 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-31 21:03:26.675127 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-31 21:03:26.675134 | orchestrator | changed: [testbed-node-4] 2025-05-31 21:03:26.675142 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-31 21:03:26.675150 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-31 21:03:26.675157 | orchestrator | changed: [testbed-node-5] 2025-05-31 21:03:26.675165 | orchestrator | 2025-05-31 21:03:26.675173 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2025-05-31 21:03:26.675181 | orchestrator | Saturday 31 May 2025 21:02:27 +0000 (0:00:01.409) 0:10:05.846 ********** 2025-05-31 21:03:26.675189 | orchestrator | skipping: [testbed-node-3] 2025-05-31 21:03:26.675198 | orchestrator | skipping: [testbed-node-4] 2025-05-31 21:03:26.675205 | orchestrator | skipping: [testbed-node-5] 2025-05-31 21:03:26.675213 | orchestrator | 2025-05-31 21:03:26.675220 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2025-05-31 21:03:26.675228 | orchestrator | Saturday 31 May 2025 21:02:27 +0000 (0:00:00.341) 0:10:06.188 ********** 2025-05-31 21:03:26.675235 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-31 21:03:26.675242 | orchestrator | 2025-05-31 21:03:26.675249 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2025-05-31 21:03:26.675257 | orchestrator | Saturday 31 May 2025 21:02:28 +0000 (0:00:00.527) 0:10:06.715 ********** 2025-05-31 21:03:26.675266 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-05-31 21:03:26.675275 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-05-31 21:03:26.675283 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-05-31 21:03:26.675292 | orchestrator | 2025-05-31 21:03:26.675300 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2025-05-31 21:03:26.675308 | orchestrator | Saturday 31 May 2025 21:02:29 +0000 (0:00:01.278) 0:10:07.994 ********** 2025-05-31 21:03:26.675317 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-31 21:03:26.675325 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-31 21:03:26.675334 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if 
groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-05-31 21:03:26.675347 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-05-31 21:03:26.675355 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-31 21:03:26.675364 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2025-05-31 21:03:26.675372 | orchestrator | 2025-05-31 21:03:26.675380 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2025-05-31 21:03:26.675389 | orchestrator | Saturday 31 May 2025 21:02:33 +0000 (0:00:04.194) 0:10:12.188 ********** 2025-05-31 21:03:26.675403 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-31 21:03:26.675411 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-05-31 21:03:26.675420 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-31 21:03:26.675427 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2025-05-31 21:03:26.675435 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-31 21:03:26.675443 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-05-31 21:03:26.675450 | orchestrator | 2025-05-31 21:03:26.675458 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2025-05-31 21:03:26.675466 | orchestrator | Saturday 31 May 2025 21:02:35 +0000 (0:00:02.151) 0:10:14.340 ********** 2025-05-31 21:03:26.675474 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-05-31 21:03:26.675482 | orchestrator | changed: [testbed-node-3] 2025-05-31 21:03:26.675491 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-31 21:03:26.675499 | orchestrator | changed: [testbed-node-4] 2025-05-31 21:03:26.675507 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-31 21:03:26.675515 | orchestrator | changed: [testbed-node-5] 2025-05-31 21:03:26.675524 | orchestrator | 2025-05-31 21:03:26.675532 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2025-05-31 21:03:26.675541 | orchestrator | Saturday 31 May 2025 21:02:37 +0000 (0:00:01.187) 0:10:15.528 ********** 2025-05-31 21:03:26.675549 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2025-05-31 21:03:26.675558 | orchestrator | 2025-05-31 21:03:26.675566 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2025-05-31 21:03:26.675578 | orchestrator | Saturday 31 May 2025 21:02:37 +0000 (0:00:00.224) 0:10:15.752 ********** 2025-05-31 21:03:26.675586 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-31 21:03:26.675594 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-31 21:03:26.675602 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-31 21:03:26.675611 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 
8, 'size': 3, 'type': 'replicated'}})  2025-05-31 21:03:26.675619 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-31 21:03:26.675627 | orchestrator | skipping: [testbed-node-3] 2025-05-31 21:03:26.675636 | orchestrator | 2025-05-31 21:03:26.675644 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2025-05-31 21:03:26.675652 | orchestrator | Saturday 31 May 2025 21:02:38 +0000 (0:00:01.078) 0:10:16.831 ********** 2025-05-31 21:03:26.675660 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-31 21:03:26.675667 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-31 21:03:26.675675 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-31 21:03:26.675682 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-31 21:03:26.675690 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2025-05-31 21:03:26.675698 | orchestrator | skipping: [testbed-node-3] 2025-05-31 21:03:26.675706 | orchestrator | 2025-05-31 21:03:26.675720 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2025-05-31 21:03:26.675727 | orchestrator | Saturday 31 May 2025 21:02:38 +0000 (0:00:00.572) 0:10:17.404 ********** 2025-05-31 21:03:26.675735 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-05-31 21:03:26.675742 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-05-31 21:03:26.675750 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-05-31 21:03:26.675766 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-05-31 21:03:26.675774 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2025-05-31 21:03:26.675782 | orchestrator | 2025-05-31 21:03:26.675789 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2025-05-31 21:03:26.675797 | orchestrator | Saturday 31 May 2025 21:03:10 +0000 (0:00:31.589) 0:10:48.994 ********** 2025-05-31 21:03:26.675804 | orchestrator | skipping: [testbed-node-3] 2025-05-31 21:03:26.675813 | orchestrator | skipping: [testbed-node-4] 2025-05-31 21:03:26.675821 | orchestrator | skipping: [testbed-node-5] 2025-05-31 21:03:26.675830 | orchestrator | 2025-05-31 21:03:26.675838 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2025-05-31 21:03:26.675846 | orchestrator | Saturday 31 May 2025 21:03:10 +0000 (0:00:00.296) 
0:10:49.290 ********** 2025-05-31 21:03:26.675899 | orchestrator | skipping: [testbed-node-3] 2025-05-31 21:03:26.675909 | orchestrator | skipping: [testbed-node-4] 2025-05-31 21:03:26.675917 | orchestrator | skipping: [testbed-node-5] 2025-05-31 21:03:26.675924 | orchestrator | 2025-05-31 21:03:26.675932 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2025-05-31 21:03:26.675939 | orchestrator | Saturday 31 May 2025 21:03:11 +0000 (0:00:00.303) 0:10:49.594 ********** 2025-05-31 21:03:26.675947 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-31 21:03:26.675955 | orchestrator | 2025-05-31 21:03:26.675963 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2025-05-31 21:03:26.675972 | orchestrator | Saturday 31 May 2025 21:03:11 +0000 (0:00:00.756) 0:10:50.350 ********** 2025-05-31 21:03:26.675980 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-31 21:03:26.675987 | orchestrator | 2025-05-31 21:03:26.675994 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2025-05-31 21:03:26.676002 | orchestrator | Saturday 31 May 2025 21:03:12 +0000 (0:00:00.524) 0:10:50.874 ********** 2025-05-31 21:03:26.676009 | orchestrator | changed: [testbed-node-3] 2025-05-31 21:03:26.676017 | orchestrator | changed: [testbed-node-4] 2025-05-31 21:03:26.676024 | orchestrator | changed: [testbed-node-5] 2025-05-31 21:03:26.676031 | orchestrator | 2025-05-31 21:03:26.676047 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2025-05-31 21:03:26.676055 | orchestrator | Saturday 31 May 2025 21:03:13 +0000 (0:00:01.181) 0:10:52.056 ********** 2025-05-31 21:03:26.676063 | orchestrator | changed: [testbed-node-3] 2025-05-31 21:03:26.676070 | orchestrator | changed: [testbed-node-4] 2025-05-31 21:03:26.676078 | orchestrator | changed: [testbed-node-5] 2025-05-31 21:03:26.676085 | orchestrator | 2025-05-31 21:03:26.676093 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2025-05-31 21:03:26.676100 | orchestrator | Saturday 31 May 2025 21:03:15 +0000 (0:00:01.615) 0:10:53.671 ********** 2025-05-31 21:03:26.676108 | orchestrator | changed: [testbed-node-4] 2025-05-31 21:03:26.676122 | orchestrator | changed: [testbed-node-3] 2025-05-31 21:03:26.676130 | orchestrator | changed: [testbed-node-5] 2025-05-31 21:03:26.676137 | orchestrator | 2025-05-31 21:03:26.676144 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2025-05-31 21:03:26.676152 | orchestrator | Saturday 31 May 2025 21:03:18 +0000 (0:00:02.768) 0:10:56.440 ********** 2025-05-31 21:03:26.676159 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-05-31 21:03:26.676166 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-05-31 21:03:26.676174 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-05-31 21:03:26.676181 | orchestrator | 2025-05-31 21:03:26.676189 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] 
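
Annotation: the ceph-rgw sequence above (keyrings, pools, unit files, service start) likewise maps onto a few ceph CLI calls. A hedged sketch of the keyring and pool steps: the capability string, keyring path, and group name are illustrative assumptions; the pool names and pg_num=8 come from the logged items.

    - name: Create rgw keyrings (sketch)
      ansible.builtin.command: >
        ceph auth get-or-create client.rgw.{{ inventory_hostname }}.rgw0
        osd 'allow rwx' mon 'allow rw'
        -o /var/lib/ceph/radosgw/ceph-rgw.{{ inventory_hostname }}.rgw0/keyring
      delegate_to: "{{ groups['mons'][0] }}"  # assumed group name
      changed_when: true

    - name: Create rgw pools (sketch)
      ansible.builtin.command: "ceph osd pool create {{ item }} 8 8 replicated"
      loop:
        - default.rgw.buckets.data
        - default.rgw.buckets.index
        - default.rgw.control
        - default.rgw.log
        - default.rgw.meta
      delegate_to: "{{ groups['mons'][0] }}"
      changed_when: true
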
**********************
2025-05-31 21:03:26.676196 | orchestrator | Saturday 31 May 2025 21:03:20 +0000 (0:00:02.523) 0:10:58.963 **********
2025-05-31 21:03:26.676204 | orchestrator | skipping: [testbed-node-3]
2025-05-31 21:03:26.676211 | orchestrator | skipping: [testbed-node-4]
2025-05-31 21:03:26.676219 | orchestrator | skipping: [testbed-node-5]
2025-05-31 21:03:26.676226 | orchestrator |
2025-05-31 21:03:26.676234 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2025-05-31 21:03:26.676242 | orchestrator | Saturday 31 May 2025 21:03:20 +0000 (0:00:00.335) 0:10:59.298 **********
2025-05-31 21:03:26.676250 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-31 21:03:26.676257 | orchestrator |
2025-05-31 21:03:26.676264 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2025-05-31 21:03:26.676272 | orchestrator | Saturday 31 May 2025 21:03:21 +0000 (0:00:00.492) 0:10:59.791 **********
2025-05-31 21:03:26.676281 | orchestrator | ok: [testbed-node-3]
2025-05-31 21:03:26.676288 | orchestrator | ok: [testbed-node-4]
2025-05-31 21:03:26.676296 | orchestrator | ok: [testbed-node-5]
2025-05-31 21:03:26.676302 | orchestrator |
2025-05-31 21:03:26.676310 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2025-05-31 21:03:26.676318 | orchestrator | Saturday 31 May 2025 21:03:21 +0000 (0:00:00.587) 0:11:00.379 **********
2025-05-31 21:03:26.676326 | orchestrator | skipping: [testbed-node-3]
2025-05-31 21:03:26.676333 | orchestrator | skipping: [testbed-node-4]
2025-05-31 21:03:26.676341 | orchestrator | skipping: [testbed-node-5]
2025-05-31 21:03:26.676348 | orchestrator |
2025-05-31 21:03:26.676356 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2025-05-31 21:03:26.676363 | orchestrator | Saturday 31 May 2025 21:03:22 +0000 (0:00:00.598) 0:11:00.713 **********
2025-05-31 21:03:26.676371 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-31 21:03:26.676389 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-31 21:03:26.676399 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-31 21:03:26.676407 | orchestrator | skipping: [testbed-node-3]
2025-05-31 21:03:26.676413 | orchestrator |
2025-05-31 21:03:26.676420 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2025-05-31 21:03:26.676427 | orchestrator | Saturday 31 May 2025 21:03:22 +0000 (0:00:00.598) 0:11:01.311 **********
2025-05-31 21:03:26.676434 | orchestrator | ok: [testbed-node-3]
2025-05-31 21:03:26.676440 | orchestrator | ok: [testbed-node-4]
2025-05-31 21:03:26.676447 | orchestrator | ok: [testbed-node-5]
2025-05-31 21:03:26.676454 | orchestrator |
2025-05-31 21:03:26.676461 | orchestrator | PLAY RECAP *********************************************************************
2025-05-31 21:03:26.676467 | orchestrator | testbed-node-0 : ok=141  changed=36  unreachable=0 failed=0 skipped=135  rescued=0 ignored=0
2025-05-31 21:03:26.676475 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0
2025-05-31 21:03:26.676487 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0
2025-05-31 21:03:26.676494 | orchestrator | testbed-node-3 : ok=186  changed=44  unreachable=0 failed=0 skipped=152  rescued=0 ignored=0
2025-05-31 21:03:26.676501 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0
2025-05-31 21:03:26.676508 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0
2025-05-31 21:03:26.676515 | orchestrator |
2025-05-31 21:03:26.676522 | orchestrator |
2025-05-31 21:03:26.676529 | orchestrator |
2025-05-31 21:03:26.676536 | orchestrator | TASKS RECAP ********************************************************************
2025-05-31 21:03:26.676543 | orchestrator | Saturday 31 May 2025 21:03:23 +0000 (0:00:00.244) 0:11:01.556 **********
2025-05-31 21:03:26.676557 | orchestrator | ===============================================================================
2025-05-31 21:03:26.676563 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 61.49s
2025-05-31 21:03:26.676570 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 44.55s
2025-05-31 21:03:26.676580 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 31.59s
2025-05-31 21:03:26.676586 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 29.86s
2025-05-31 21:03:26.676593 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.94s
2025-05-31 21:03:26.676599 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 14.78s
2025-05-31 21:03:26.676606 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.86s
2025-05-31 21:03:26.676612 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.38s
2025-05-31 21:03:26.676619 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 9.98s
2025-05-31 21:03:26.676626 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.52s
2025-05-31 21:03:26.676632 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.31s
2025-05-31 21:03:26.676639 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.10s
2025-05-31 21:03:26.676646 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 4.74s
2025-05-31 21:03:26.676653 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.19s
2025-05-31 21:03:26.676660 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 3.92s
2025-05-31 21:03:26.676666 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.82s
2025-05-31 21:03:26.676673 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 3.80s
2025-05-31 21:03:26.676680 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.45s
2025-05-31 21:03:26.676687 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 3.29s
2025-05-31 21:03:26.676694 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 3.23s
2025-05-31 21:03:26.676701 | orchestrator | 2025-05-31 21:03:26 | INFO  | Wait 1 second(s) until the next check
2025-05-31 21:03:29.692211 | orchestrator | 2025-05-31 21:03:29 | INFO  | Task f093e11b-da87-4aa1-914f-a87f454420d6 is in state STARTED
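
Annotation: the "Task ... is in state STARTED" / "Wait 1 second(s) until the next check" records here are a plain status poll over the three task IDs. Rendered as an Ansible retry loop it might look like the sketch below; the status command is a hypothetical placeholder for illustration, not the real OSISM CLI.

    - name: Wait for deployment tasks to leave state STARTED (sketch)
      ansible.builtin.command: "task-status {{ item }}"  # hypothetical status helper, not a real CLI
      register: task_state
      until: "'STARTED' not in task_state.stdout"
      retries: 600
      delay: 1
      loop:
        - f093e11b-da87-4aa1-914f-a87f454420d6
        - e346174a-3b83-4776-b276-e2a25f5ab226
        - a81d4a59-ff86-4ed5-9241-373b495cc025
      changed_when: false
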
2025-05-31 21:03:26.676701 | orchestrator | 2025-05-31 21:03:26 | INFO  | Wait 1 second(s) until the next check
2025-05-31 21:03:29.692211 | orchestrator | 2025-05-31 21:03:29 | INFO  | Task f093e11b-da87-4aa1-914f-a87f454420d6 is in state STARTED
2025-05-31 21:03:29.692316 | orchestrator | 2025-05-31 21:03:29 | INFO  | Task e346174a-3b83-4776-b276-e2a25f5ab226 is in state STARTED
2025-05-31 21:03:29.694114 | orchestrator | 2025-05-31 21:03:29 | INFO  | Task a81d4a59-ff86-4ed5-9241-373b495cc025 is in state STARTED
2025-05-31 21:03:29.694309 | orchestrator | 2025-05-31 21:03:29 | INFO  | Wait 1 second(s) until the next check
2025-05-31 21:03:32.741250 | orchestrator | 2025-05-31 21:03:32 | INFO  | Task f093e11b-da87-4aa1-914f-a87f454420d6 is in state STARTED
2025-05-31 21:03:32.744148 | orchestrator | 2025-05-31 21:03:32 | INFO  | Task e346174a-3b83-4776-b276-e2a25f5ab226 is in state STARTED
2025-05-31 21:03:32.746415 | orchestrator | 2025-05-31 21:03:32 | INFO  | Task a81d4a59-ff86-4ed5-9241-373b495cc025 is in state STARTED
2025-05-31 21:03:32.746545 | orchestrator | 2025-05-31 21:03:32 | INFO  | Wait 1 second(s) until the next check
2025-05-31 21:03:35.789531 | orchestrator | 2025-05-31 21:03:35 | INFO  | Task f093e11b-da87-4aa1-914f-a87f454420d6 is in state STARTED
2025-05-31 21:03:35.790180 | orchestrator | 2025-05-31 21:03:35 | INFO  | Task e346174a-3b83-4776-b276-e2a25f5ab226 is in state STARTED
2025-05-31 21:03:35.792620 | orchestrator | 2025-05-31 21:03:35 | INFO  | Task a81d4a59-ff86-4ed5-9241-373b495cc025 is in state STARTED
2025-05-31 21:03:35.792969 | orchestrator | 2025-05-31 21:03:35 | INFO  | Wait 1 second(s) until the next check
2025-05-31 21:03:38.836943 | orchestrator | 2025-05-31 21:03:38 | INFO  | Task f093e11b-da87-4aa1-914f-a87f454420d6 is in state STARTED
2025-05-31 21:03:38.839055 | orchestrator | 2025-05-31 21:03:38 | INFO  | Task e346174a-3b83-4776-b276-e2a25f5ab226 is in state STARTED
2025-05-31 21:03:38.840995 | orchestrator | 2025-05-31 21:03:38 | INFO  | Task a81d4a59-ff86-4ed5-9241-373b495cc025 is in state STARTED
2025-05-31 21:03:38.841052 | orchestrator | 2025-05-31 21:03:38 | INFO  | Wait 1 second(s) until the next check
2025-05-31 21:03:41.889516 | orchestrator | 2025-05-31 21:03:41 | INFO  | Task f093e11b-da87-4aa1-914f-a87f454420d6 is in state STARTED
2025-05-31 21:03:41.891579 | orchestrator | 2025-05-31 21:03:41 | INFO  | Task e346174a-3b83-4776-b276-e2a25f5ab226 is in state STARTED
2025-05-31 21:03:41.893546 | orchestrator | 2025-05-31 21:03:41 | INFO  | Task a81d4a59-ff86-4ed5-9241-373b495cc025 is in state STARTED
2025-05-31 21:03:41.894068 | orchestrator | 2025-05-31 21:03:41 | INFO  | Wait 1 second(s) until the next check
2025-05-31 21:03:44.932569 | orchestrator | 2025-05-31 21:03:44 | INFO  | Task f093e11b-da87-4aa1-914f-a87f454420d6 is in state STARTED
2025-05-31 21:03:44.933989 | orchestrator | 2025-05-31 21:03:44 | INFO  | Task e346174a-3b83-4776-b276-e2a25f5ab226 is in state STARTED
2025-05-31 21:03:44.935946 | orchestrator | 2025-05-31 21:03:44 | INFO  | Task a81d4a59-ff86-4ed5-9241-373b495cc025 is in state STARTED
2025-05-31 21:03:44.935992 | orchestrator | 2025-05-31 21:03:44 | INFO  | Wait 1 second(s) until the next check
2025-05-31 21:03:47.984451 | orchestrator | 2025-05-31 21:03:47 | INFO  | Task f093e11b-da87-4aa1-914f-a87f454420d6 is in state STARTED
2025-05-31 21:03:47.987192 | orchestrator | 2025-05-31 21:03:47 | INFO  | Task e346174a-3b83-4776-b276-e2a25f5ab226 is in state STARTED
2025-05-31 21:03:47.989158 | orchestrator | 2025-05-31 21:03:47 | INFO  | Task a81d4a59-ff86-4ed5-9241-373b495cc025 is in state STARTED
2025-05-31 21:03:47.989185 | orchestrator | 2025-05-31 21:03:47 | INFO  | Wait 1 second(s) until the next check
2025-05-31 21:03:51.037404 | orchestrator | 2025-05-31 21:03:51 | INFO  | Task f093e11b-da87-4aa1-914f-a87f454420d6 is in state STARTED
2025-05-31 21:03:51.037522 | orchestrator | 2025-05-31 21:03:51 | INFO  | Task e346174a-3b83-4776-b276-e2a25f5ab226 is in state STARTED
2025-05-31 21:03:51.039281 | orchestrator | 2025-05-31 21:03:51 | INFO  | Task a81d4a59-ff86-4ed5-9241-373b495cc025 is in state STARTED
2025-05-31 21:03:51.039311 | orchestrator | 2025-05-31 21:03:51 | INFO  | Wait 1 second(s) until the next check
2025-05-31 21:03:54.095097 | orchestrator | 2025-05-31 21:03:54 | INFO  | Task f093e11b-da87-4aa1-914f-a87f454420d6 is in state STARTED
2025-05-31 21:03:54.096700 | orchestrator | 2025-05-31 21:03:54 | INFO  | Task e346174a-3b83-4776-b276-e2a25f5ab226 is in state STARTED
2025-05-31 21:03:54.099004 | orchestrator | 2025-05-31 21:03:54 | INFO  | Task a81d4a59-ff86-4ed5-9241-373b495cc025 is in state STARTED
2025-05-31 21:03:54.099129 | orchestrator | 2025-05-31 21:03:54 | INFO  | Wait 1 second(s) until the next check
2025-05-31 21:03:57.138606 | orchestrator | 2025-05-31 21:03:57 | INFO  | Task f093e11b-da87-4aa1-914f-a87f454420d6 is in state STARTED
2025-05-31 21:03:57.140512 | orchestrator | 2025-05-31 21:03:57 | INFO  | Task e346174a-3b83-4776-b276-e2a25f5ab226 is in state STARTED
2025-05-31 21:03:57.142646 | orchestrator | 2025-05-31 21:03:57 | INFO  | Task a81d4a59-ff86-4ed5-9241-373b495cc025 is in state STARTED
2025-05-31 21:03:57.142667 | orchestrator | 2025-05-31 21:03:57 | INFO  | Wait 1 second(s) until the next check
2025-05-31 21:04:00.187513 | orchestrator | 2025-05-31 21:04:00 | INFO  | Task f093e11b-da87-4aa1-914f-a87f454420d6 is in state STARTED
2025-05-31 21:04:00.189113 | orchestrator | 2025-05-31 21:04:00 | INFO  | Task e346174a-3b83-4776-b276-e2a25f5ab226 is in state STARTED
2025-05-31 21:04:00.190211 | orchestrator | 2025-05-31 21:04:00 | INFO  | Task a81d4a59-ff86-4ed5-9241-373b495cc025 is in state STARTED
2025-05-31 21:04:00.190231 | orchestrator | 2025-05-31 21:04:00 | INFO  | Wait 1 second(s) until the next check
2025-05-31 21:04:03.243188 | orchestrator | 2025-05-31 21:04:03 | INFO  | Task f093e11b-da87-4aa1-914f-a87f454420d6 is in state STARTED
2025-05-31 21:04:03.244042 | orchestrator | 2025-05-31 21:04:03 | INFO  | Task e346174a-3b83-4776-b276-e2a25f5ab226 is in state STARTED
2025-05-31 21:04:03.245636 | orchestrator | 2025-05-31 21:04:03 | INFO  | Task a81d4a59-ff86-4ed5-9241-373b495cc025 is in state STARTED
2025-05-31 21:04:03.246156 | orchestrator | 2025-05-31 21:04:03 | INFO  | Wait 1 second(s) until the next check
2025-05-31 21:04:06.298702 | orchestrator | 2025-05-31 21:04:06 | INFO  | Task f093e11b-da87-4aa1-914f-a87f454420d6 is in state STARTED
2025-05-31 21:04:06.303548 | orchestrator | 2025-05-31 21:04:06 | INFO  | Task e346174a-3b83-4776-b276-e2a25f5ab226 is in state STARTED
2025-05-31 21:04:06.306073 | orchestrator | 2025-05-31 21:04:06 | INFO  | Task a81d4a59-ff86-4ed5-9241-373b495cc025 is in state STARTED
2025-05-31 21:04:06.306203 | orchestrator | 2025-05-31 21:04:06 | INFO  | Wait 1 second(s) until the next check
2025-05-31 21:04:09.353283 | orchestrator | 2025-05-31 21:04:09 | INFO  | Task f093e11b-da87-4aa1-914f-a87f454420d6 is in state STARTED
2025-05-31 21:04:09.354097 | orchestrator | 2025-05-31 21:04:09 | INFO  | Task e346174a-3b83-4776-b276-e2a25f5ab226 is in state STARTED
2025-05-31 21:04:09.355751 | orchestrator | 2025-05-31 21:04:09 | INFO  | Task a81d4a59-ff86-4ed5-9241-373b495cc025 is in state STARTED
2025-05-31 21:04:09.356094 | orchestrator | 2025-05-31 21:04:09 | INFO  | Wait 1 second(s) until the next check
2025-05-31 21:04:12.401738 | orchestrator | 2025-05-31 21:04:12 | INFO  | Task f093e11b-da87-4aa1-914f-a87f454420d6 is in state STARTED
2025-05-31 21:04:12.403527 | orchestrator | 2025-05-31 21:04:12 | INFO  | Task e346174a-3b83-4776-b276-e2a25f5ab226 is in state STARTED
2025-05-31 21:04:12.405384 | orchestrator | 2025-05-31 21:04:12 | INFO  | Task a81d4a59-ff86-4ed5-9241-373b495cc025 is in state STARTED
2025-05-31 21:04:12.405539 | orchestrator | 2025-05-31 21:04:12 | INFO  | Wait 1 second(s) until the next check
2025-05-31 21:04:15.447311 | orchestrator | 2025-05-31 21:04:15 | INFO  | Task f093e11b-da87-4aa1-914f-a87f454420d6 is in state STARTED
2025-05-31 21:04:15.449265 | orchestrator | 2025-05-31 21:04:15 | INFO  | Task e346174a-3b83-4776-b276-e2a25f5ab226 is in state STARTED
2025-05-31 21:04:15.451613 | orchestrator | 2025-05-31 21:04:15 | INFO  | Task a81d4a59-ff86-4ed5-9241-373b495cc025 is in state STARTED
2025-05-31 21:04:15.451710 | orchestrator | 2025-05-31 21:04:15 | INFO  | Wait 1 second(s) until the next check
2025-05-31 21:04:18.497006 | orchestrator | 2025-05-31 21:04:18 | INFO  | Task f093e11b-da87-4aa1-914f-a87f454420d6 is in state STARTED
2025-05-31 21:04:18.499042 | orchestrator | 2025-05-31 21:04:18 | INFO  | Task e346174a-3b83-4776-b276-e2a25f5ab226 is in state STARTED
2025-05-31 21:04:18.500763 | orchestrator | 2025-05-31 21:04:18 | INFO  | Task a81d4a59-ff86-4ed5-9241-373b495cc025 is in state STARTED
2025-05-31 21:04:18.500784 | orchestrator | 2025-05-31 21:04:18 | INFO  | Wait 1 second(s) until the next check
2025-05-31 21:04:21.542494 | orchestrator | 2025-05-31 21:04:21 | INFO  | Task f093e11b-da87-4aa1-914f-a87f454420d6 is in state SUCCESS
2025-05-31 21:04:21.545017 | orchestrator |
2025-05-31 21:04:21.545081 | orchestrator |
2025-05-31 21:04:21.545095 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-31 21:04:21.545108 | orchestrator |
2025-05-31 21:04:21.545119 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-31 21:04:21.545131 | orchestrator | Saturday 31 May 2025 21:01:24 +0000 (0:00:00.259) 0:00:00.259 **********
2025-05-31 21:04:21.545142 | orchestrator | ok: [testbed-node-0]
2025-05-31 21:04:21.545154 | orchestrator | ok: [testbed-node-1]
2025-05-31 21:04:21.545165 | orchestrator | ok: [testbed-node-2]
2025-05-31 21:04:21.545176 | orchestrator |
2025-05-31 21:04:21.545187 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-31 21:04:21.545198 | orchestrator | Saturday 31 May 2025 21:01:24 +0000 (0:00:00.279) 0:00:00.538 **********
2025-05-31 21:04:21.545209 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True)
2025-05-31 21:04:21.545220 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True)
2025-05-31 21:04:21.545231 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True)
2025-05-31 21:04:21.545242 | orchestrator |
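The next play applies the opensearch role. Its first task, 'Setting sysctl values', raises vm.max_map_count to 262144 on every node, the documented minimum for OpenSearch's memory-mapped index files. A rough Python equivalent of that check-and-set is sketched below, assuming direct access to /proc/sys; note that the Ansible sysctl module additionally persists the value across reboots, which this sketch does not.

```python
from pathlib import Path

# Sketch of the effect of the 'Setting sysctl values' task (requires root).
SYSCTL = Path("/proc/sys/vm/max_map_count")
REQUIRED = 262144  # minimum OpenSearch expects for mmapped index files

def ensure_max_map_count() -> bool:
    """Return True if the value had to be raised (a 'changed' result)."""
    current = int(SYSCTL.read_text().strip())
    if current >= REQUIRED:
        return False
    SYSCTL.write_text(f"{REQUIRED}\n")  # not persistent across reboots
    return True
```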
2025-05-31 21:04:21.545253 | orchestrator | PLAY [Apply role opensearch] ***************************************************
2025-05-31 21:04:21.545263 | orchestrator |
2025-05-31 21:04:21.545274 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2025-05-31 21:04:21.545284 | orchestrator | Saturday 31 May 2025 21:01:24 +0000 (0:00:00.403) 0:00:00.941 **********
2025-05-31 21:04:21.545310 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-31 21:04:21.545323 | orchestrator |
2025-05-31 21:04:21.545333 | orchestrator | TASK [opensearch : Setting sysctl values] **************************************
2025-05-31 21:04:21.545344 | orchestrator | Saturday 31 May 2025 21:01:25 +0000 (0:00:00.541) 0:00:01.482 **********
2025-05-31 21:04:21.545355 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-05-31 21:04:21.545365 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-05-31 21:04:21.545376 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-05-31 21:04:21.545386 | orchestrator |
2025-05-31 21:04:21.545397 | orchestrator | TASK [opensearch : Ensuring config directories exist] **************************
2025-05-31 21:04:21.545408 | orchestrator | Saturday 31 May 2025 21:01:26 +0000 (0:00:01.702) 0:00:03.185 **********
2025-05-31 21:04:21.545422 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-05-31 21:04:21.545469 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-05-31 21:04:21.545510 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external':
False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-31 21:04:21.545533 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-31 21:04:21.545548 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-31 21:04:21.545569 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-31 21:04:21.545580 | orchestrator | 2025-05-31 21:04:21.545591 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-05-31 21:04:21.545602 | 
orchestrator | Saturday 31 May 2025 21:01:28 +0000 (0:00:01.893) 0:00:05.078 ********** 2025-05-31 21:04:21.545613 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 21:04:21.545624 | orchestrator | 2025-05-31 21:04:21.545635 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2025-05-31 21:04:21.545645 | orchestrator | Saturday 31 May 2025 21:01:29 +0000 (0:00:00.534) 0:00:05.612 ********** 2025-05-31 21:04:21.545668 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-31 21:04:21.545685 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-31 21:04:21.545697 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-31 21:04:21.545716 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-31 21:04:21.545736 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-31 21:04:21.545754 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-31 21:04:21.545766 | orchestrator | 2025-05-31 21:04:21.545778 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2025-05-31 21:04:21.545789 | orchestrator | Saturday 31 May 2025 21:01:32 +0000 (0:00:02.995) 0:00:08.608 ********** 2025-05-31 21:04:21.545800 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-31 21:04:21.545818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-31 21:04:21.545830 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:04:21.545842 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-31 21:04:21.545899 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 
'auth_pass': 'password'}}}})  2025-05-31 21:04:21.545914 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:04:21.545925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-31 21:04:21.545944 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-31 21:04:21.545956 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:04:21.545966 | orchestrator | 2025-05-31 21:04:21.545977 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2025-05-31 21:04:21.545989 | orchestrator | Saturday 31 May 2025 21:01:34 +0000 (0:00:01.652) 0:00:10.261 ********** 2025-05-31 21:04:21.546000 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-31 21:04:21.546079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 
'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-31 21:04:21.546094 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:04:21.546119 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-31 21:04:21.546132 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-31 21:04:21.546143 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:04:21.546155 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-31 21:04:21.546176 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-31 21:04:21.546188 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:04:21.546199 | orchestrator | 2025-05-31 21:04:21.546210 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2025-05-31 21:04:21.546226 | orchestrator | Saturday 31 May 2025 21:01:34 +0000 (0:00:00.818) 0:00:11.080 ********** 2025-05-31 21:04:21.546242 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-31 21:04:21.546254 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-31 21:04:21.546267 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 
'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-31 21:04:21.546301 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-31 21:04:21.546324 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-31 21:04:21.546344 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-05-31 21:04:21.546356 | orchestrator |
2025-05-31 21:04:21.546367 | orchestrator | TASK [opensearch : Copying over opensearch service config file] ****************
2025-05-31 21:04:21.546378 | orchestrator | Saturday 31 May 2025 21:01:37 +0000 (0:00:02.627) 0:00:13.707 **********
2025-05-31 21:04:21.546389 | orchestrator | changed: [testbed-node-0]
2025-05-31 21:04:21.546400 | orchestrator | changed: [testbed-node-1]
2025-05-31 21:04:21.546411 | orchestrator | changed: [testbed-node-2]
2025-05-31 21:04:21.546421 | orchestrator |
2025-05-31 21:04:21.546432 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] *************
2025-05-31 21:04:21.546443 | orchestrator | Saturday 31 May 2025 21:01:40 +0000 (0:00:02.706) 0:00:16.413 **********
2025-05-31 21:04:21.546453 | orchestrator | changed: [testbed-node-0]
2025-05-31 21:04:21.546464 | orchestrator | changed: [testbed-node-2]
2025-05-31 21:04:21.546474 | orchestrator | changed: [testbed-node-1]
2025-05-31 21:04:21.546485 | orchestrator |
2025-05-31 21:04:21.546495 | orchestrator | TASK [opensearch : Check opensearch containers] ********************************
2025-05-31 21:04:21.546506 | orchestrator | Saturday 31 May 2025 21:01:42 +0000 (0:00:01.815) 0:00:18.228 **********
2025-05-31 21:04:21.546517 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-05-31 21:04:21.546536 | orchestrator | 2025-05-31 21:04:21 | INFO  | Task e346174a-3b83-4776-b276-e2a25f5ab226 is in state STARTED
2025-05-31 21:04:21.546554 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option
dontlog-normal']}}}}) 2025-05-31 21:04:21.546571 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-31 21:04:21.546584 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-31 21:04:21.546596 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-31 21:04:21.546616 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-05-31 21:04:21.546635 | orchestrator |
2025-05-31 21:04:21.546646 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2025-05-31 21:04:21.546657 | orchestrator | Saturday 31 May 2025 21:01:43 +0000 (0:00:01.908) 0:00:20.137 **********
2025-05-31 21:04:21.546668 | orchestrator | skipping: [testbed-node-0]
2025-05-31 21:04:21.546683 | orchestrator | skipping: [testbed-node-1]
2025-05-31 21:04:21.546694 | orchestrator | skipping: [testbed-node-2]
2025-05-31 21:04:21.546705 | orchestrator |
2025-05-31 21:04:21.546844 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2025-05-31 21:04:21.546904 | orchestrator | Saturday 31 May 2025 21:01:44 +0000 (0:00:00.284) 0:00:20.421 **********
2025-05-31 21:04:21.546925 | orchestrator |
2025-05-31 21:04:21.546943 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2025-05-31 21:04:21.546960 | orchestrator | Saturday 31 May 2025 21:01:44 +0000 (0:00:00.072) 0:00:20.494 **********
2025-05-31 21:04:21.546971 | orchestrator |
2025-05-31 21:04:21.546981 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2025-05-31 21:04:21.546992 | orchestrator | Saturday 31 May 2025 21:01:44 +0000 (0:00:00.062) 0:00:20.556 **********
2025-05-31 21:04:21.547002 | orchestrator |
2025-05-31 21:04:21.547013 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************
2025-05-31 21:04:21.547023 | orchestrator | Saturday 31 May 2025 21:01:44 +0000 (0:00:00.238) 0:00:20.795 **********
2025-05-31 21:04:21.547034 | orchestrator | skipping: [testbed-node-0]
2025-05-31 21:04:21.547045 | orchestrator |
2025-05-31 21:04:21.547055 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] *********************************
2025-05-31 21:04:21.547066 | orchestrator | Saturday 31 May 2025 21:01:44 +0000 (0:00:00.195) 0:00:20.990 **********
2025-05-31 21:04:21.547076 | orchestrator | skipping: [testbed-node-0]
2025-05-31 21:04:21.547087 | orchestrator |
2025-05-31 21:04:21.547098 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ********************
2025-05-31 21:04:21.547108 | orchestrator | Saturday 31 May 2025 21:01:44 +0000 (0:00:00.193) 0:00:21.184 **********
2025-05-31 21:04:21.547119 | orchestrator | changed: [testbed-node-0]
2025-05-31 21:04:21.547133 | orchestrator | changed: [testbed-node-1]
2025-05-31 21:04:21.547152 | orchestrator | changed: [testbed-node-2]
2025-05-31 21:04:21.547170 | orchestrator |
2025-05-31 21:04:21.547189 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] *********
2025-05-31 21:04:21.547207 | orchestrator | Saturday 31 May 2025 21:02:49 +0000 (0:01:04.669) 0:01:25.853 **********
2025-05-31 21:04:21.547218 | orchestrator | changed: [testbed-node-0]
2025-05-31 21:04:21.547229 | orchestrator | changed: [testbed-node-1]
2025-05-31 21:04:21.547239 | orchestrator | changed: [testbed-node-2]
2025-05-31 21:04:21.547249 | orchestrator |
2025-05-31 21:04:21.547260 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2025-05-31 21:04:21.547270 | orchestrator | Saturday 31 May 2025 21:04:09 +0000 (0:01:19.786) 0:02:45.640 **********
2025-05-31 21:04:21.547282 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-31 21:04:21.547301 | orchestrator |
2025-05-31 21:04:21.547318 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************
2025-05-31 21:04:21.547351 | orchestrator | Saturday 31 May 2025 21:04:09 +0000 (0:00:00.540) 0:02:46.180 **********
2025-05-31 21:04:21.547369 | orchestrator | ok: [testbed-node-0]
2025-05-31 21:04:21.547386 | orchestrator |
2025-05-31 21:04:21.547403 | orchestrator | TASK [opensearch : Check if a log retention policy exists] *********************
2025-05-31 21:04:21.547421 | orchestrator | Saturday 31 May 2025 21:04:12 +0000 (0:00:02.127) 0:02:48.308 **********
2025-05-31 21:04:21.547440 | orchestrator | ok: [testbed-node-0]
2025-05-31 21:04:21.547459 | orchestrator |
2025-05-31 21:04:21.547479 | orchestrator | TASK [opensearch : Create new log retention policy] ****************************
2025-05-31 21:04:21.547498 | orchestrator | Saturday 31 May 2025 21:04:14 +0000 (0:00:02.093) 0:02:50.401 **********
2025-05-31 21:04:21.547519 | orchestrator | changed: [testbed-node-0]
2025-05-31 21:04:21.547538 | orchestrator |
2025-05-31 21:04:21.547557 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] *****************
2025-05-31 21:04:21.547577 | orchestrator | Saturday 31 May 2025 21:04:16 +0000 (0:00:02.491) 0:02:52.893 **********
2025-05-31 21:04:21.547596 | orchestrator | changed: [testbed-node-0]
2025-05-31 21:04:21.547617 | orchestrator |
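The three retention tasks above correspond to OpenSearch's Index State Management (ISM) plugin API: check whether a policy exists, create it, then attach it to indices that already exist. The sketch below shows the general shape of those calls against the node addresses seen in the healthchecks; the policy id, index pattern, and 14-day retention age are assumptions, since the log does not show the role's actual values.

```python
import json
import urllib.request

# Illustrative values only; not taken from the log above.
OPENSEARCH = "http://192.168.16.10:9200"
POLICY_ID = "delete-after-retention"

POLICY = {
    "policy": {
        "description": "Delete indices once they exceed the retention age",
        "default_state": "hot",
        "states": [
            {"name": "hot", "actions": [],
             "transitions": [{"state_name": "delete",
                              "conditions": {"min_index_age": "14d"}}]},
            {"name": "delete", "actions": [{"delete": {}}], "transitions": []},
        ],
        # Auto-attach the policy to indices created later.
        "ism_template": [{"index_patterns": ["flog-*"]}],
    }
}

def request(method: str, path: str, body: dict) -> dict:
    req = urllib.request.Request(
        OPENSEARCH + path,
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json"},
        method=method,
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Create the ISM policy, then attach it to already existing indices.
request("PUT", f"/_plugins/_ism/policies/{POLICY_ID}", POLICY)
request("POST", "/_plugins/_ism/add/flog-*", {"policy_id": POLICY_ID})
```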
2025-05-31 21:04:21.547635 | orchestrator | PLAY RECAP *********************************************************************
2025-05-31 21:04:21.547659 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-31 21:04:21.547680 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-05-31 21:04:21.547713 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-05-31 21:04:21.547734 | orchestrator |
2025-05-31 21:04:21.547753 | orchestrator |
2025-05-31 21:04:21.547771 | orchestrator | TASKS RECAP ********************************************************************
2025-05-31 21:04:21.547794 | orchestrator | Saturday 31 May 2025 21:04:19 +0000 (0:00:02.392) 0:02:55.286 **********
2025-05-31 21:04:21.547815 | orchestrator | ===============================================================================
2025-05-31 21:04:21.547834 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 79.79s
2025-05-31 21:04:21.547920 | orchestrator | opensearch : Restart opensearch container ------------------------------ 64.67s
2025-05-31 21:04:21.547945 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 3.00s
2025-05-31 21:04:21.547964 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.71s
2025-05-31 21:04:21.547983 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.63s
2025-05-31 21:04:21.548002 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.49s
2025-05-31 21:04:21.548020 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.39s
2025-05-31 21:04:21.548038 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.13s
2025-05-31 21:04:21.548059 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.09s
2025-05-31 21:04:21.548071 | orchestrator | opensearch : Check opensearch containers -------------------------------- 1.91s
2025-05-31 21:04:21.548081 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.89s
2025-05-31 21:04:21.548092 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.82s
2025-05-31 21:04:21.548103 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 1.70s
2025-05-31 21:04:21.548113 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.65s
2025-05-31 21:04:21.548124 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 0.82s
2025-05-31 21:04:21.548134 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.54s
2025-05-31 21:04:21.548145 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.54s
2025-05-31 21:04:21.548166 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.53s
2025-05-31 21:04:21.548177 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.40s
2025-05-31 21:04:21.548188 | orchestrator | opensearch : Flush handlers --------------------------------------------- 0.37s
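After the remaining tasks finish, the MariaDB play below starts with a wait_for-style probe ('Check MariaDB service') that connects to 192.168.16.9:3306 and searches the server greeting for the string MariaDB; the server embeds its version string in the initial handshake packet, so no credentials are needed. A minimal Python sketch of that probe, under those assumptions:

```python
import socket

def mariadb_running(host: str, port: int = 3306, timeout: float = 2.0) -> bool:
    """Connect and look for 'MariaDB' in the server's handshake packet,
    mimicking Ansible's wait_for with search_regex."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.settimeout(timeout)
            greeting = sock.recv(256)
    except OSError:
        return False  # closed port or timeout: service not reachable yet
    return b"MariaDB" in greeting

# The play treats a failed check as "not yet deployed" and keeps the
# default Kolla action instead of switching kolla_action_mariadb to upgrade.
print(mariadb_running("192.168.16.9"))
```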
INFO  | Task a81d4a59-ff86-4ed5-9241-373b495cc025 is in state SUCCESS 2025-05-31 21:04:33.740838 | orchestrator | 2025-05-31 21:04:33.742491 | orchestrator | 2025-05-31 21:04:33.742513 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2025-05-31 21:04:33.742520 | orchestrator | 2025-05-31 21:04:33.742526 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-05-31 21:04:33.742532 | orchestrator | Saturday 31 May 2025 21:01:23 +0000 (0:00:00.099) 0:00:00.099 ********** 2025-05-31 21:04:33.742537 | orchestrator | ok: [localhost] => { 2025-05-31 21:04:33.742543 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2025-05-31 21:04:33.742548 | orchestrator | } 2025-05-31 21:04:33.742553 | orchestrator | 2025-05-31 21:04:33.742558 | orchestrator | TASK [Check MariaDB service] *************************************************** 2025-05-31 21:04:33.742563 | orchestrator | Saturday 31 May 2025 21:01:23 +0000 (0:00:00.057) 0:00:00.156 ********** 2025-05-31 21:04:33.742568 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2025-05-31 21:04:33.742575 | orchestrator | ...ignoring 2025-05-31 21:04:33.742580 | orchestrator | 2025-05-31 21:04:33.742584 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2025-05-31 21:04:33.742589 | orchestrator | Saturday 31 May 2025 21:01:26 +0000 (0:00:02.839) 0:00:02.996 ********** 2025-05-31 21:04:33.742594 | orchestrator | skipping: [localhost] 2025-05-31 21:04:33.742598 | orchestrator | 2025-05-31 21:04:33.742603 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2025-05-31 21:04:33.742607 | orchestrator | Saturday 31 May 2025 21:01:26 +0000 (0:00:00.057) 0:00:03.053 ********** 2025-05-31 21:04:33.742612 | orchestrator | ok: [localhost] 2025-05-31 21:04:33.742617 | orchestrator | 2025-05-31 21:04:33.742621 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-31 21:04:33.742643 | orchestrator | 2025-05-31 21:04:33.742648 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-31 21:04:33.742653 | orchestrator | Saturday 31 May 2025 21:01:27 +0000 (0:00:00.172) 0:00:03.226 ********** 2025-05-31 21:04:33.742657 | orchestrator | ok: [testbed-node-0] 2025-05-31 21:04:33.742662 | orchestrator | ok: [testbed-node-1] 2025-05-31 21:04:33.742667 | orchestrator | ok: [testbed-node-2] 2025-05-31 21:04:33.742671 | orchestrator | 2025-05-31 21:04:33.742676 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-31 21:04:33.742680 | orchestrator | Saturday 31 May 2025 21:01:27 +0000 (0:00:00.350) 0:00:03.577 ********** 2025-05-31 21:04:33.742685 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-05-31 21:04:33.742700 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-05-31 21:04:33.742704 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-05-31 21:04:33.742709 | orchestrator | 2025-05-31 21:04:33.742713 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-05-31 21:04:33.742718 | orchestrator | 2025-05-31 21:04:33.742723 | orchestrator | TASK [mariadb 
: Group MariaDB hosts based on shards] *************************** 2025-05-31 21:04:33.742727 | orchestrator | Saturday 31 May 2025 21:01:28 +0000 (0:00:00.910) 0:00:04.488 ********** 2025-05-31 21:04:33.742732 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-31 21:04:33.742737 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-05-31 21:04:33.742741 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-05-31 21:04:33.742746 | orchestrator | 2025-05-31 21:04:33.742750 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-05-31 21:04:33.742755 | orchestrator | Saturday 31 May 2025 21:01:28 +0000 (0:00:00.383) 0:00:04.871 ********** 2025-05-31 21:04:33.742759 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 21:04:33.742765 | orchestrator | 2025-05-31 21:04:33.742770 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2025-05-31 21:04:33.742774 | orchestrator | Saturday 31 May 2025 21:01:29 +0000 (0:00:00.511) 0:00:05.383 ********** 2025-05-31 21:04:33.742792 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-31 21:04:33.742807 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-31 21:04:33.742813 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-31 21:04:33.742818 | orchestrator | 2025-05-31 21:04:33.742825 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2025-05-31 21:04:33.742830 | orchestrator | Saturday 31 May 2025 21:01:32 +0000 (0:00:03.398) 0:00:08.781 ********** 2025-05-31 21:04:33.742835 | orchestrator | changed: [testbed-node-0] 2025-05-31 21:04:33.742840 | 
orchestrator | skipping: [testbed-node-2] 2025-05-31 21:04:33.742847 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:04:33.742886 | orchestrator | 2025-05-31 21:04:33.742892 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2025-05-31 21:04:33.742897 | orchestrator | Saturday 31 May 2025 21:01:33 +0000 (0:00:00.965) 0:00:09.747 ********** 2025-05-31 21:04:33.742901 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:04:33.742905 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:04:33.742910 | orchestrator | changed: [testbed-node-0] 2025-05-31 21:04:33.742914 | orchestrator | 2025-05-31 21:04:33.742919 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2025-05-31 21:04:33.742923 | orchestrator | Saturday 31 May 2025 21:01:35 +0000 (0:00:01.602) 0:00:11.350 ********** 2025-05-31 21:04:33.742931 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-31 21:04:33.742940 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': 
{'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-31 21:04:33.742953 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-31 21:04:33.742958 | orchestrator | 2025-05-31 21:04:33.742963 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2025-05-31 21:04:33.742967 | orchestrator | Saturday 31 May 2025 21:01:38 +0000 (0:00:03.442) 0:00:14.792 ********** 2025-05-31 21:04:33.743112 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:04:33.743120 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:04:33.743124 | orchestrator | changed: [testbed-node-0] 2025-05-31 21:04:33.743129 | orchestrator | 2025-05-31 21:04:33.743133 | orchestrator | TASK [mariadb : Copying over galera.cnf] 
*************************************** 2025-05-31 21:04:33.743138 | orchestrator | Saturday 31 May 2025 21:01:39 +0000 (0:00:01.107) 0:00:15.900 ********** 2025-05-31 21:04:33.743142 | orchestrator | changed: [testbed-node-0] 2025-05-31 21:04:33.743146 | orchestrator | changed: [testbed-node-1] 2025-05-31 21:04:33.743151 | orchestrator | changed: [testbed-node-2] 2025-05-31 21:04:33.743155 | orchestrator | 2025-05-31 21:04:33.743159 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-05-31 21:04:33.743164 | orchestrator | Saturday 31 May 2025 21:01:43 +0000 (0:00:04.225) 0:00:20.126 ********** 2025-05-31 21:04:33.743168 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 21:04:33.743173 | orchestrator | 2025-05-31 21:04:33.743177 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-05-31 21:04:33.743181 | orchestrator | Saturday 31 May 2025 21:01:44 +0000 (0:00:00.488) 0:00:20.614 ********** 2025-05-31 21:04:33.743191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-31 21:04:33.743202 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:04:33.743210 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-31 21:04:33.743215 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:04:33.743223 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-31 21:04:33.743234 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:04:33.743238 | orchestrator | 2025-05-31 21:04:33.743242 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-05-31 21:04:33.743247 | orchestrator | Saturday 31 May 2025 
21:01:47 +0000 (0:00:03.402) 0:00:24.016 ********** 2025-05-31 21:04:33.743254 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-31 21:04:33.743259 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:04:33.743266 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server 
testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-31 21:04:33.743275 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:04:33.743282 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-31 21:04:33.743287 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:04:33.743291 | orchestrator | 2025-05-31 21:04:33.743295 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-05-31 21:04:33.743300 | orchestrator | Saturday 31 May 2025 21:01:51 +0000 (0:00:03.163) 0:00:27.180 ********** 2025-05-31 21:04:33.743304 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 
3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-31 21:04:33.743316 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:04:33.743327 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-31 21:04:33.743332 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:04:33.743337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-31 21:04:33.743345 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:04:33.743349 | orchestrator | 2025-05-31 21:04:33.743354 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2025-05-31 21:04:33.743358 | orchestrator | Saturday 31 May 2025 21:01:53 +0000 (0:00:02.841) 0:00:30.022 ********** 2025-05-31 21:04:33.743367 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-31 21:04:33.743374 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 
'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-31 21:04:33.743386 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-31 21:04:33.743391 | 
orchestrator | 2025-05-31 21:04:33.743395 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2025-05-31 21:04:33.743400 | orchestrator | Saturday 31 May 2025 21:01:57 +0000 (0:00:04.030) 0:00:34.052 ********** 2025-05-31 21:04:33.743404 | orchestrator | changed: [testbed-node-0] 2025-05-31 21:04:33.743408 | orchestrator | changed: [testbed-node-1] 2025-05-31 21:04:33.743413 | orchestrator | changed: [testbed-node-2] 2025-05-31 21:04:33.743417 | orchestrator | 2025-05-31 21:04:33.743421 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2025-05-31 21:04:33.743425 | orchestrator | Saturday 31 May 2025 21:01:59 +0000 (0:00:01.283) 0:00:35.336 ********** 2025-05-31 21:04:33.743430 | orchestrator | ok: [testbed-node-0] 2025-05-31 21:04:33.743434 | orchestrator | ok: [testbed-node-1] 2025-05-31 21:04:33.743439 | orchestrator | ok: [testbed-node-2] 2025-05-31 21:04:33.743443 | orchestrator | 2025-05-31 21:04:33.743447 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2025-05-31 21:04:33.743454 | orchestrator | Saturday 31 May 2025 21:01:59 +0000 (0:00:00.336) 0:00:35.673 ********** 2025-05-31 21:04:33.743458 | orchestrator | ok: [testbed-node-0] 2025-05-31 21:04:33.743462 | orchestrator | ok: [testbed-node-1] 2025-05-31 21:04:33.743467 | orchestrator | ok: [testbed-node-2] 2025-05-31 21:04:33.743471 | orchestrator | 2025-05-31 21:04:33.743475 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2025-05-31 21:04:33.743483 | orchestrator | Saturday 31 May 2025 21:01:59 +0000 (0:00:00.318) 0:00:35.992 ********** 2025-05-31 21:04:33.743488 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2025-05-31 21:04:33.743492 | orchestrator | ...ignoring 2025-05-31 21:04:33.743497 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2025-05-31 21:04:33.743501 | orchestrator | ...ignoring 2025-05-31 21:04:33.743506 | orchestrator | fatal: [testbed-node-1]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2025-05-31 21:04:33.743510 | orchestrator | ...ignoring 2025-05-31 21:04:33.743514 | orchestrator | 2025-05-31 21:04:33.743519 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2025-05-31 21:04:33.743523 | orchestrator | Saturday 31 May 2025 21:02:10 +0000 (0:00:10.929) 0:00:46.922 ********** 2025-05-31 21:04:33.743527 | orchestrator | ok: [testbed-node-0] 2025-05-31 21:04:33.743532 | orchestrator | ok: [testbed-node-1] 2025-05-31 21:04:33.743536 | orchestrator | ok: [testbed-node-2] 2025-05-31 21:04:33.743540 | orchestrator | 2025-05-31 21:04:33.743545 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2025-05-31 21:04:33.743549 | orchestrator | Saturday 31 May 2025 21:02:11 +0000 (0:00:00.650) 0:00:47.573 ********** 2025-05-31 21:04:33.743553 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:04:33.743558 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:04:33.743562 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:04:33.743566 | orchestrator | 2025-05-31 21:04:33.743570 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2025-05-31 21:04:33.743575 | orchestrator | Saturday 31 May 2025 21:02:11 +0000 (0:00:00.404) 0:00:47.977 ********** 2025-05-31 21:04:33.743579 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:04:33.743583 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:04:33.743587 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:04:33.743592 | orchestrator | 2025-05-31 21:04:33.743596 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2025-05-31 21:04:33.743600 | orchestrator | Saturday 31 May 2025 21:02:12 +0000 (0:00:00.398) 0:00:48.376 ********** 2025-05-31 21:04:33.743605 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:04:33.743609 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:04:33.743613 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:04:33.743617 | orchestrator | 2025-05-31 21:04:33.743622 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2025-05-31 21:04:33.743626 | orchestrator | Saturday 31 May 2025 21:02:12 +0000 (0:00:00.406) 0:00:48.783 ********** 2025-05-31 21:04:33.743630 | orchestrator | ok: [testbed-node-0] 2025-05-31 21:04:33.743635 | orchestrator | ok: [testbed-node-1] 2025-05-31 21:04:33.743639 | orchestrator | ok: [testbed-node-2] 2025-05-31 21:04:33.743643 | orchestrator | 2025-05-31 21:04:33.743647 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2025-05-31 21:04:33.743652 | orchestrator | Saturday 31 May 2025 21:02:13 +0000 (0:00:00.617) 0:00:49.400 ********** 2025-05-31 21:04:33.743658 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:04:33.743663 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:04:33.743667 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:04:33.743671 | orchestrator | 2025-05-31 21:04:33.743676 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-05-31 21:04:33.743680 | orchestrator | Saturday 31 May 2025 21:02:13 +0000 (0:00:00.412) 0:00:49.813 ********** 2025-05-31 21:04:33.743684 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:04:33.743689 | orchestrator | skipping: 
[testbed-node-2] 2025-05-31 21:04:33.743693 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2025-05-31 21:04:33.743697 | orchestrator | 2025-05-31 21:04:33.743702 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2025-05-31 21:04:33.743709 | orchestrator | Saturday 31 May 2025 21:02:14 +0000 (0:00:00.359) 0:00:50.172 ********** 2025-05-31 21:04:33.743713 | orchestrator | changed: [testbed-node-0] 2025-05-31 21:04:33.743718 | orchestrator | 2025-05-31 21:04:33.743722 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2025-05-31 21:04:33.743726 | orchestrator | Saturday 31 May 2025 21:02:23 +0000 (0:00:09.809) 0:00:59.982 ********** 2025-05-31 21:04:33.743731 | orchestrator | ok: [testbed-node-0] 2025-05-31 21:04:33.743735 | orchestrator | 2025-05-31 21:04:33.743739 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-05-31 21:04:33.743744 | orchestrator | Saturday 31 May 2025 21:02:23 +0000 (0:00:00.127) 0:01:00.109 ********** 2025-05-31 21:04:33.743750 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:04:33.743754 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:04:33.743759 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:04:33.743764 | orchestrator | 2025-05-31 21:04:33.743769 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2025-05-31 21:04:33.743774 | orchestrator | Saturday 31 May 2025 21:02:24 +0000 (0:00:00.997) 0:01:01.107 ********** 2025-05-31 21:04:33.743778 | orchestrator | changed: [testbed-node-0] 2025-05-31 21:04:33.743783 | orchestrator | 2025-05-31 21:04:33.743788 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2025-05-31 21:04:33.743793 | orchestrator | Saturday 31 May 2025 21:02:32 +0000 (0:00:07.592) 0:01:08.699 ********** 2025-05-31 21:04:33.743798 | orchestrator | ok: [testbed-node-0] 2025-05-31 21:04:33.743803 | orchestrator | 2025-05-31 21:04:33.743807 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2025-05-31 21:04:33.743813 | orchestrator | Saturday 31 May 2025 21:02:34 +0000 (0:00:01.535) 0:01:10.234 ********** 2025-05-31 21:04:33.743817 | orchestrator | ok: [testbed-node-0] 2025-05-31 21:04:33.743823 | orchestrator | 2025-05-31 21:04:33.743830 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2025-05-31 21:04:33.743835 | orchestrator | Saturday 31 May 2025 21:02:36 +0000 (0:00:02.505) 0:01:12.740 ********** 2025-05-31 21:04:33.743840 | orchestrator | changed: [testbed-node-0] 2025-05-31 21:04:33.743845 | orchestrator | 2025-05-31 21:04:33.743850 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2025-05-31 21:04:33.743868 | orchestrator | Saturday 31 May 2025 21:02:36 +0000 (0:00:00.117) 0:01:12.857 ********** 2025-05-31 21:04:33.743873 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:04:33.743878 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:04:33.743883 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:04:33.743888 | orchestrator | 2025-05-31 21:04:33.743893 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2025-05-31 21:04:33.743898 | orchestrator | Saturday 31 May 2025 21:02:37 +0000 (0:00:00.498) 0:01:13.356 ********** 
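The "Timeout when waiting for search string MariaDB in 192.168.16.x:3306" failures earlier in this play are the signature of an ansible.builtin.wait_for probe with search_regex: each node checks whether a MariaDB server already answers on port 3306, and on a fresh cluster a timeout is the expected outcome, hence the "...ignoring". A minimal sketch of such a probe follows; the host variable name is an assumption, not necessarily kolla-ansible's:

- name: Check MariaDB service port liveness
  ansible.builtin.wait_for:
    host: "{{ api_interface_address }}"  # assumed variable name; e.g. 192.168.16.10
    port: 3306
    connect_timeout: 1
    timeout: 10                          # matches the elapsed=10 in the failures above
    search_regex: "MariaDB"              # the server greeting banner contains its version string
  register: check_mariadb_port_liveness
  ignore_errors: true                    # timeout is expected when the cluster is not yet deployed

The registered result can then drive the host grouping seen above ("Divide hosts by their MariaDB service port liveness") instead of aborting the play outright.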
2025-05-31 21:04:33.743903 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:04:33.743908 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-05-31 21:04:33.743913 | orchestrator | changed: [testbed-node-1] 2025-05-31 21:04:33.743917 | orchestrator | changed: [testbed-node-2] 2025-05-31 21:04:33.743922 | orchestrator | 2025-05-31 21:04:33.743927 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-05-31 21:04:33.743932 | orchestrator | skipping: no hosts matched 2025-05-31 21:04:33.743937 | orchestrator | 2025-05-31 21:04:33.743942 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-05-31 21:04:33.743948 | orchestrator | 2025-05-31 21:04:33.743953 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-05-31 21:04:33.743957 | orchestrator | Saturday 31 May 2025 21:02:37 +0000 (0:00:00.333) 0:01:13.689 ********** 2025-05-31 21:04:33.743962 | orchestrator | changed: [testbed-node-1] 2025-05-31 21:04:33.743966 | orchestrator | 2025-05-31 21:04:33.743970 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-05-31 21:04:33.743978 | orchestrator | Saturday 31 May 2025 21:02:57 +0000 (0:00:19.855) 0:01:33.545 ********** 2025-05-31 21:04:33.743982 | orchestrator | ok: [testbed-node-1] 2025-05-31 21:04:33.743986 | orchestrator | 2025-05-31 21:04:33.743991 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-05-31 21:04:33.743995 | orchestrator | Saturday 31 May 2025 21:03:17 +0000 (0:00:20.569) 0:01:54.114 ********** 2025-05-31 21:04:33.743999 | orchestrator | ok: [testbed-node-1] 2025-05-31 21:04:33.744004 | orchestrator | 2025-05-31 21:04:33.744008 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-05-31 21:04:33.744012 | orchestrator | 2025-05-31 21:04:33.744017 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-05-31 21:04:33.744021 | orchestrator | Saturday 31 May 2025 21:03:20 +0000 (0:00:02.473) 0:01:56.588 ********** 2025-05-31 21:04:33.744026 | orchestrator | changed: [testbed-node-2] 2025-05-31 21:04:33.744030 | orchestrator | 2025-05-31 21:04:33.744034 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-05-31 21:04:33.744039 | orchestrator | Saturday 31 May 2025 21:03:45 +0000 (0:00:24.928) 0:02:21.516 ********** 2025-05-31 21:04:33.744043 | orchestrator | ok: [testbed-node-2] 2025-05-31 21:04:33.744047 | orchestrator | 2025-05-31 21:04:33.744051 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-05-31 21:04:33.744056 | orchestrator | Saturday 31 May 2025 21:04:00 +0000 (0:00:15.616) 0:02:37.133 ********** 2025-05-31 21:04:33.744060 | orchestrator | ok: [testbed-node-2] 2025-05-31 21:04:33.744064 | orchestrator | 2025-05-31 21:04:33.744069 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-05-31 21:04:33.744073 | orchestrator | 2025-05-31 21:04:33.744080 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-05-31 21:04:33.744085 | orchestrator | Saturday 31 May 2025 21:04:03 +0000 (0:00:02.554) 0:02:39.688 ********** 2025-05-31 21:04:33.744089 | orchestrator | changed: [testbed-node-0] 
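The "Wait for MariaDB service to sync WSREP" steps above gate each rolling restart on Galera reporting the node as synced before the next node is restarted. A minimal stand-alone sketch of such a wait, assuming the community.mysql collection and a database_password variable (both assumptions here, not kolla-ansible's actual implementation):

- name: Wait for MariaDB service to sync WSREP
  community.mysql.mysql_query:
    login_host: "{{ ansible_host }}"
    login_user: root
    login_password: "{{ database_password }}"  # assumed variable name
    query: "SHOW STATUS LIKE 'wsrep_local_state_comment'"
  register: wsrep_status
  # Galera reports 'Synced' once the node has replayed all missed writesets.
  until: wsrep_status.query_result[0][0].Value == 'Synced'
  retries: 30
  delay: 10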
2025-05-31 21:04:33.744093 | orchestrator | 2025-05-31 21:04:33.744098 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-05-31 21:04:33.744102 | orchestrator | Saturday 31 May 2025 21:04:18 +0000 (0:00:14.498) 0:02:54.186 ********** 2025-05-31 21:04:33.744106 | orchestrator | ok: [testbed-node-0] 2025-05-31 21:04:33.744111 | orchestrator | 2025-05-31 21:04:33.744115 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-05-31 21:04:33.744119 | orchestrator | Saturday 31 May 2025 21:04:18 +0000 (0:00:00.585) 0:02:54.772 ********** 2025-05-31 21:04:33.744124 | orchestrator | ok: [testbed-node-0] 2025-05-31 21:04:33.744128 | orchestrator | 2025-05-31 21:04:33.744132 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-05-31 21:04:33.744137 | orchestrator | 2025-05-31 21:04:33.744141 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-05-31 21:04:33.744145 | orchestrator | Saturday 31 May 2025 21:04:20 +0000 (0:00:02.374) 0:02:57.146 ********** 2025-05-31 21:04:33.744150 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 21:04:33.744154 | orchestrator | 2025-05-31 21:04:33.744158 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2025-05-31 21:04:33.744163 | orchestrator | Saturday 31 May 2025 21:04:21 +0000 (0:00:00.510) 0:02:57.656 ********** 2025-05-31 21:04:33.744167 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:04:33.744171 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:04:33.744176 | orchestrator | changed: [testbed-node-0] 2025-05-31 21:04:33.744180 | orchestrator | 2025-05-31 21:04:33.744184 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2025-05-31 21:04:33.744189 | orchestrator | Saturday 31 May 2025 21:04:23 +0000 (0:00:02.230) 0:02:59.887 ********** 2025-05-31 21:04:33.744193 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:04:33.744197 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:04:33.744202 | orchestrator | changed: [testbed-node-0] 2025-05-31 21:04:33.744206 | orchestrator | 2025-05-31 21:04:33.744210 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2025-05-31 21:04:33.744218 | orchestrator | Saturday 31 May 2025 21:04:25 +0000 (0:00:02.059) 0:03:01.947 ********** 2025-05-31 21:04:33.744222 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:04:33.744227 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:04:33.744231 | orchestrator | changed: [testbed-node-0] 2025-05-31 21:04:33.744235 | orchestrator | 2025-05-31 21:04:33.744242 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2025-05-31 21:04:33.744246 | orchestrator | Saturday 31 May 2025 21:04:27 +0000 (0:00:02.049) 0:03:03.996 ********** 2025-05-31 21:04:33.744251 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:04:33.744255 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:04:33.744259 | orchestrator | changed: [testbed-node-0] 2025-05-31 21:04:33.744264 | orchestrator | 2025-05-31 21:04:33.744268 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2025-05-31 21:04:33.744272 | orchestrator | Saturday 31 May 2025 21:04:29 +0000 (0:00:02.001) 0:03:05.998 ********** 
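The post-deploy user tasks above ("Creating shard root mysql user", "Creating mysql monitor user", "Creating database backup user and setting permissions") run their SQL only once per shard; testbed-node-1 and testbed-node-2 skip, as the output shows. A hedged sketch of what the monitor-user step can look like with community.mysql.mysql_user; the module choice, privilege grant, and variable names are assumptions, not kolla-ansible's code:

- name: Creating mysql monitor user
  community.mysql.mysql_user:
    login_host: "{{ api_interface_address }}"   # assumed variable name
    login_user: root
    login_password: "{{ database_password }}"   # assumed variable name
    name: monitor                               # matches MYSQL_USERNAME in the container environment above
    password: "{{ mariadb_monitor_password }}"  # assumed variable name
    host: "%"
    priv: "*.*:USAGE"                           # minimal grant; the real role may grant more
  # Only the first host of the shard runs this; the other members skip.
  when: inventory_hostname == groups['mariadb_shard_0'] | first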
2025-05-31 21:04:33.744277 | orchestrator | ok: [testbed-node-0] 2025-05-31 21:04:33.744281 | orchestrator | ok: [testbed-node-1] 2025-05-31 21:04:33.744285 | orchestrator | ok: [testbed-node-2] 2025-05-31 21:04:33.744290 | orchestrator | 2025-05-31 21:04:33.744294 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-05-31 21:04:33.744298 | orchestrator | Saturday 31 May 2025 21:04:32 +0000 (0:00:02.829) 0:03:08.827 ********** 2025-05-31 21:04:33.744303 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:04:33.744307 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:04:33.744311 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:04:33.744316 | orchestrator | 2025-05-31 21:04:33.744320 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-31 21:04:33.744325 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-05-31 21:04:33.744329 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2025-05-31 21:04:33.744335 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-05-31 21:04:33.744339 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-05-31 21:04:33.744344 | orchestrator | 2025-05-31 21:04:33.744348 | orchestrator | 2025-05-31 21:04:33.744352 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-31 21:04:33.744357 | orchestrator | Saturday 31 May 2025 21:04:32 +0000 (0:00:00.208) 0:03:09.035 ********** 2025-05-31 21:04:33.744361 | orchestrator | =============================================================================== 2025-05-31 21:04:33.744365 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 44.78s 2025-05-31 21:04:33.744370 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 36.19s 2025-05-31 21:04:33.744374 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 14.50s 2025-05-31 21:04:33.744378 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.93s 2025-05-31 21:04:33.744383 | orchestrator | mariadb : Running MariaDB bootstrap container --------------------------- 9.81s 2025-05-31 21:04:33.744387 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.59s 2025-05-31 21:04:33.744394 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.03s 2025-05-31 21:04:33.744398 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.23s 2025-05-31 21:04:33.744403 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 4.03s 2025-05-31 21:04:33.744407 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.44s 2025-05-31 21:04:33.744415 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.40s 2025-05-31 21:04:33.744419 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.40s 2025-05-31 21:04:33.744423 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 3.16s 2025-05-31 21:04:33.744428 | orchestrator | service-cert-copy : mariadb | Copying over 
backend internal TLS key ----- 2.84s 2025-05-31 21:04:33.744432 | orchestrator | Check MariaDB service --------------------------------------------------- 2.84s 2025-05-31 21:04:33.744436 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.83s 2025-05-31 21:04:33.744441 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.51s 2025-05-31 21:04:33.744445 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.37s 2025-05-31 21:04:33.744450 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.23s 2025-05-31 21:04:33.744454 | orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 2.06s 2025-05-31 21:04:33.744458 | orchestrator | 2025-05-31 21:04:33 | INFO  | Wait 1 second(s) until the next check
2025-05-31 21:04:36.798609 | orchestrator | 2025-05-31 21:04:36 | INFO  | Task e346174a-3b83-4776-b276-e2a25f5ab226 is in state STARTED 2025-05-31 21:04:36.800673 | orchestrator | 2025-05-31 21:04:36 | INFO  | Task 8e4fdc63-10f4-4e3b-a009-7d7c1a02fb9d is in state STARTED 2025-05-31 21:04:36.802164 | orchestrator | 2025-05-31 21:04:36 | INFO  | Task 0dc408f5-5766-4286-9789-2291b25cddf6 is in state STARTED 2025-05-31 21:04:36.802574 | orchestrator | 2025-05-31 21:04:36 | INFO  | Wait 1 second(s) until the next check
2025-05-31 21:05:28.630324 | orchestrator | 2025-05-31 21:05:28 | INFO  | Task e346174a-3b83-4776-b276-e2a25f5ab226 is in state STARTED 2025-05-31 21:05:28.631685 | orchestrator | 2025-05-31 21:05:28 | INFO  | Task 8e4fdc63-10f4-4e3b-a009-7d7c1a02fb9d is in state STARTED 2025-05-31 21:05:28.633595 | orchestrator | 2025-05-31 21:05:28 | INFO  | Task 0dc408f5-5766-4286-9789-2291b25cddf6 is in state STARTED 2025-05-31 21:05:28.633630 | orchestrator | 2025-05-31 21:05:28 | INFO  | Wait 1 second(s) until the next check
2025-05-31 21:05:31.676784 | orchestrator | 2025-05-31 21:05:31 | INFO  | Task e346174a-3b83-4776-b276-e2a25f5ab226 is in state SUCCESS 2025-05-31 21:05:31.678244 | orchestrator | 2025-05-31 21:05:31.678286 | orchestrator | 2025-05-31 21:05:31.678435 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2025-05-31 21:05:31.678507 | orchestrator | 2025-05-31
21:05:31.679202 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-05-31 21:05:31.679233 | orchestrator | Saturday 31 May 2025 21:03:27 +0000 (0:00:00.591) 0:00:00.591 ********** 2025-05-31 21:05:31.679251 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-31 21:05:31.679300 | orchestrator | 2025-05-31 21:05:31.679321 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2025-05-31 21:05:31.679359 | orchestrator | Saturday 31 May 2025 21:03:28 +0000 (0:00:00.620) 0:00:01.212 ********** 2025-05-31 21:05:31.679378 | orchestrator | ok: [testbed-node-3] 2025-05-31 21:05:31.679398 | orchestrator | ok: [testbed-node-4] 2025-05-31 21:05:31.679412 | orchestrator | ok: [testbed-node-5] 2025-05-31 21:05:31.679423 | orchestrator | 2025-05-31 21:05:31.679434 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2025-05-31 21:05:31.679445 | orchestrator | Saturday 31 May 2025 21:03:29 +0000 (0:00:00.635) 0:00:01.847 ********** 2025-05-31 21:05:31.679456 | orchestrator | ok: [testbed-node-3] 2025-05-31 21:05:31.679466 | orchestrator | ok: [testbed-node-4] 2025-05-31 21:05:31.679477 | orchestrator | ok: [testbed-node-5] 2025-05-31 21:05:31.679487 | orchestrator | 2025-05-31 21:05:31.679526 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2025-05-31 21:05:31.679537 | orchestrator | Saturday 31 May 2025 21:03:29 +0000 (0:00:00.276) 0:00:02.124 ********** 2025-05-31 21:05:31.679548 | orchestrator | ok: [testbed-node-3] 2025-05-31 21:05:31.679559 | orchestrator | ok: [testbed-node-4] 2025-05-31 21:05:31.679569 | orchestrator | ok: [testbed-node-5] 2025-05-31 21:05:31.679579 | orchestrator | 2025-05-31 21:05:31.679590 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2025-05-31 21:05:31.679601 | orchestrator | Saturday 31 May 2025 21:03:30 +0000 (0:00:00.753) 0:00:02.878 ********** 2025-05-31 21:05:31.679612 | orchestrator | ok: [testbed-node-3] 2025-05-31 21:05:31.679622 | orchestrator | ok: [testbed-node-4] 2025-05-31 21:05:31.679632 | orchestrator | ok: [testbed-node-5] 2025-05-31 21:05:31.679643 | orchestrator | 2025-05-31 21:05:31.679654 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2025-05-31 21:05:31.679664 | orchestrator | Saturday 31 May 2025 21:03:30 +0000 (0:00:00.322) 0:00:03.201 ********** 2025-05-31 21:05:31.679675 | orchestrator | ok: [testbed-node-3] 2025-05-31 21:05:31.679685 | orchestrator | ok: [testbed-node-4] 2025-05-31 21:05:31.679696 | orchestrator | ok: [testbed-node-5] 2025-05-31 21:05:31.679706 | orchestrator | 2025-05-31 21:05:31.679717 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2025-05-31 21:05:31.679727 | orchestrator | Saturday 31 May 2025 21:03:30 +0000 (0:00:00.321) 0:00:03.523 ********** 2025-05-31 21:05:31.679738 | orchestrator | ok: [testbed-node-3] 2025-05-31 21:05:31.679748 | orchestrator | ok: [testbed-node-4] 2025-05-31 21:05:31.679760 | orchestrator | ok: [testbed-node-5] 2025-05-31 21:05:31.679772 | orchestrator | 2025-05-31 21:05:31.679785 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2025-05-31 21:05:31.679798 | orchestrator | Saturday 31 May 2025 21:03:31 +0000 (0:00:00.390) 0:00:03.913 ********** 
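The fact gathering above decides how later tasks will invoke the ceph client: it probes for podman, falls back to docker (which this deployment uses, per the `docker ps` calls below), and assembles a command prefix that runs `ceph` inside a one-shot container. A rough Python equivalent; the image name and volume list are illustrative, not the exact values used by ceph-facts:

```python
# Approximation of "Check if podman binary is present",
# "Set_fact container_binary", and "Set_fact ceph_cmd".
import shutil


def detect_container_binary() -> str:
    # Prefer podman when installed; this log shows docker in use.
    return "podman" if shutil.which("podman") else "docker"


def build_ceph_cmd(binary: str, image: str = "quay.io/ceph/ceph") -> list[str]:
    # One-shot container sharing the host network and ceph config.
    return [binary, "run", "--rm", "--net=host",
            "-v", "/etc/ceph:/etc/ceph:z", "--entrypoint=ceph", image]


print(" ".join(build_ceph_cmd(detect_container_binary())))
# e.g. docker run --rm --net=host -v /etc/ceph:/etc/ceph:z --entrypoint=ceph quay.io/ceph/ceph
```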
2025-05-31 21:05:31.679811 | orchestrator | skipping: [testbed-node-3] 2025-05-31 21:05:31.679824 | orchestrator | skipping: [testbed-node-4] 2025-05-31 21:05:31.679836 | orchestrator | skipping: [testbed-node-5] 2025-05-31 21:05:31.679849 | orchestrator | 2025-05-31 21:05:31.679985 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2025-05-31 21:05:31.679999 | orchestrator | Saturday 31 May 2025 21:03:31 +0000 (0:00:00.525) 0:00:04.439 ********** 2025-05-31 21:05:31.680011 | orchestrator | ok: [testbed-node-3] 2025-05-31 21:05:31.680023 | orchestrator | ok: [testbed-node-4] 2025-05-31 21:05:31.680035 | orchestrator | ok: [testbed-node-5] 2025-05-31 21:05:31.680047 | orchestrator | 2025-05-31 21:05:31.680059 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2025-05-31 21:05:31.680072 | orchestrator | Saturday 31 May 2025 21:03:32 +0000 (0:00:00.284) 0:00:04.724 ********** 2025-05-31 21:05:31.680084 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-05-31 21:05:31.680096 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-31 21:05:31.680109 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-31 21:05:31.680131 | orchestrator | 2025-05-31 21:05:31.680142 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2025-05-31 21:05:31.680153 | orchestrator | Saturday 31 May 2025 21:03:32 +0000 (0:00:00.619) 0:00:05.344 ********** 2025-05-31 21:05:31.680164 | orchestrator | ok: [testbed-node-3] 2025-05-31 21:05:31.680175 | orchestrator | ok: [testbed-node-4] 2025-05-31 21:05:31.680185 | orchestrator | ok: [testbed-node-5] 2025-05-31 21:05:31.680196 | orchestrator | 2025-05-31 21:05:31.680206 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2025-05-31 21:05:31.680217 | orchestrator | Saturday 31 May 2025 21:03:33 +0000 (0:00:00.415) 0:00:05.759 ********** 2025-05-31 21:05:31.680227 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-05-31 21:05:31.680238 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-31 21:05:31.680249 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-31 21:05:31.680259 | orchestrator | 2025-05-31 21:05:31.680270 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2025-05-31 21:05:31.680281 | orchestrator | Saturday 31 May 2025 21:03:35 +0000 (0:00:02.016) 0:00:07.776 ********** 2025-05-31 21:05:31.680291 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-31 21:05:31.680303 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-31 21:05:31.680313 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-31 21:05:31.680324 | orchestrator | skipping: [testbed-node-3] 2025-05-31 21:05:31.680335 | orchestrator | 2025-05-31 21:05:31.680345 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2025-05-31 21:05:31.680400 | orchestrator | Saturday 31 May 2025 21:03:35 +0000 (0:00:00.404) 0:00:08.180 ********** 2025-05-31 21:05:31.680415 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-05-31 21:05:31.680436 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-05-31 21:05:31.680448 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-05-31 21:05:31.680459 | orchestrator | skipping: [testbed-node-3] 2025-05-31 21:05:31.680470 | orchestrator | 2025-05-31 21:05:31.680480 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2025-05-31 21:05:31.680491 | orchestrator | Saturday 31 May 2025 21:03:36 +0000 (0:00:00.778) 0:00:08.959 ********** 2025-05-31 21:05:31.680504 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-31 21:05:31.680517 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-31 21:05:31.680528 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-31 21:05:31.680546 | orchestrator | skipping: [testbed-node-3] 2025-05-31 21:05:31.680557 | orchestrator | 2025-05-31 21:05:31.680568 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2025-05-31 21:05:31.680579 | orchestrator | Saturday 31 May 2025 21:03:36 +0000 (0:00:00.149) 0:00:09.108 ********** 2025-05-31 21:05:31.680592 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'db8945d44067', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-05-31 21:03:33.758130', 'end': '2025-05-31 21:03:33.808620', 'delta': '0:00:00.050490', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['db8945d44067'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 
'ansible_loop_var': 'item'}) 2025-05-31 21:05:31.680606 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '8452a1ebaecc', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-05-31 21:03:34.479859', 'end': '2025-05-31 21:03:34.523112', 'delta': '0:00:00.043253', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['8452a1ebaecc'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-05-31 21:05:31.680656 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '0cb359a735ba', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-05-31 21:03:34.983034', 'end': '2025-05-31 21:03:35.041669', 'delta': '0:00:00.058635', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['0cb359a735ba'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-05-31 21:05:31.680669 | orchestrator | 2025-05-31 21:05:31.680680 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2025-05-31 21:05:31.680691 | orchestrator | Saturday 31 May 2025 21:03:36 +0000 (0:00:00.366) 0:00:09.474 ********** 2025-05-31 21:05:31.680701 | orchestrator | ok: [testbed-node-3] 2025-05-31 21:05:31.680712 | orchestrator | ok: [testbed-node-4] 2025-05-31 21:05:31.680722 | orchestrator | ok: [testbed-node-5] 2025-05-31 21:05:31.680733 | orchestrator | 2025-05-31 21:05:31.680743 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2025-05-31 21:05:31.680754 | orchestrator | Saturday 31 May 2025 21:03:37 +0000 (0:00:00.422) 0:00:09.897 ********** 2025-05-31 21:05:31.680764 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-05-31 21:05:31.680775 | orchestrator | 2025-05-31 21:05:31.680786 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2025-05-31 21:05:31.680796 | orchestrator | Saturday 31 May 2025 21:03:39 +0000 (0:00:02.152) 0:00:12.050 ********** 2025-05-31 21:05:31.680807 | orchestrator | skipping: [testbed-node-3] 2025-05-31 21:05:31.680817 | orchestrator | skipping: [testbed-node-4] 2025-05-31 21:05:31.680835 | orchestrator | skipping: [testbed-node-5] 2025-05-31 21:05:31.680845 | orchestrator | 2025-05-31 21:05:31.680915 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2025-05-31 21:05:31.680927 | orchestrator | Saturday 31 May 2025 21:03:39 +0000 (0:00:00.293) 0:00:12.344 ********** 2025-05-31 21:05:31.680937 | orchestrator | skipping: [testbed-node-3] 2025-05-31 21:05:31.680948 | orchestrator | skipping: [testbed-node-4] 2025-05-31 21:05:31.680959 | orchestrator | skipping: [testbed-node-5] 2025-05-31 21:05:31.680969 | orchestrator | 2025-05-31 21:05:31.680980 | 
orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-05-31 21:05:31.680991 | orchestrator | Saturday 31 May 2025 21:03:40 +0000 (0:00:00.391) 0:00:12.736 ********** 2025-05-31 21:05:31.681001 | orchestrator | skipping: [testbed-node-3] 2025-05-31 21:05:31.681012 | orchestrator | skipping: [testbed-node-4] 2025-05-31 21:05:31.681022 | orchestrator | skipping: [testbed-node-5] 2025-05-31 21:05:31.681033 | orchestrator | 2025-05-31 21:05:31.681044 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2025-05-31 21:05:31.681055 | orchestrator | Saturday 31 May 2025 21:03:40 +0000 (0:00:00.470) 0:00:13.206 ********** 2025-05-31 21:05:31.681065 | orchestrator | ok: [testbed-node-3] 2025-05-31 21:05:31.681076 | orchestrator | 2025-05-31 21:05:31.681087 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2025-05-31 21:05:31.681097 | orchestrator | Saturday 31 May 2025 21:03:40 +0000 (0:00:00.138) 0:00:13.345 ********** 2025-05-31 21:05:31.681108 | orchestrator | skipping: [testbed-node-3] 2025-05-31 21:05:31.681119 | orchestrator | 2025-05-31 21:05:31.681129 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-05-31 21:05:31.681140 | orchestrator | Saturday 31 May 2025 21:03:40 +0000 (0:00:00.226) 0:00:13.572 ********** 2025-05-31 21:05:31.681150 | orchestrator | skipping: [testbed-node-3] 2025-05-31 21:05:31.681161 | orchestrator | skipping: [testbed-node-4] 2025-05-31 21:05:31.681171 | orchestrator | skipping: [testbed-node-5] 2025-05-31 21:05:31.681182 | orchestrator | 2025-05-31 21:05:31.681193 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2025-05-31 21:05:31.681203 | orchestrator | Saturday 31 May 2025 21:03:41 +0000 (0:00:00.314) 0:00:13.886 ********** 2025-05-31 21:05:31.681214 | orchestrator | skipping: [testbed-node-3] 2025-05-31 21:05:31.681225 | orchestrator | skipping: [testbed-node-4] 2025-05-31 21:05:31.681235 | orchestrator | skipping: [testbed-node-5] 2025-05-31 21:05:31.681246 | orchestrator | 2025-05-31 21:05:31.681256 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2025-05-31 21:05:31.681266 | orchestrator | Saturday 31 May 2025 21:03:41 +0000 (0:00:00.299) 0:00:14.186 ********** 2025-05-31 21:05:31.681275 | orchestrator | skipping: [testbed-node-3] 2025-05-31 21:05:31.681284 | orchestrator | skipping: [testbed-node-4] 2025-05-31 21:05:31.681294 | orchestrator | skipping: [testbed-node-5] 2025-05-31 21:05:31.681303 | orchestrator | 2025-05-31 21:05:31.681312 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2025-05-31 21:05:31.681322 | orchestrator | Saturday 31 May 2025 21:03:42 +0000 (0:00:00.455) 0:00:14.641 ********** 2025-05-31 21:05:31.681331 | orchestrator | skipping: [testbed-node-3] 2025-05-31 21:05:31.681341 | orchestrator | skipping: [testbed-node-4] 2025-05-31 21:05:31.681350 | orchestrator | skipping: [testbed-node-5] 2025-05-31 21:05:31.681359 | orchestrator | 2025-05-31 21:05:31.681369 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2025-05-31 21:05:31.681378 | orchestrator | Saturday 31 May 2025 21:03:42 +0000 (0:00:00.305) 0:00:14.947 ********** 2025-05-31 21:05:31.681388 | orchestrator | skipping: [testbed-node-3] 2025-05-31 21:05:31.681397 | orchestrator | skipping: 
[testbed-node-4] 2025-05-31 21:05:31.681406 | orchestrator | skipping: [testbed-node-5] 2025-05-31 21:05:31.681415 | orchestrator | 2025-05-31 21:05:31.681425 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2025-05-31 21:05:31.681435 | orchestrator | Saturday 31 May 2025 21:03:42 +0000 (0:00:00.310) 0:00:15.257 ********** 2025-05-31 21:05:31.681450 | orchestrator | skipping: [testbed-node-3] 2025-05-31 21:05:31.681460 | orchestrator | skipping: [testbed-node-4] 2025-05-31 21:05:31.681469 | orchestrator | skipping: [testbed-node-5] 2025-05-31 21:05:31.681479 | orchestrator | 2025-05-31 21:05:31.681488 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-05-31 21:05:31.681530 | orchestrator | Saturday 31 May 2025 21:03:42 +0000 (0:00:00.313) 0:00:15.571 ********** 2025-05-31 21:05:31.681541 | orchestrator | skipping: [testbed-node-3] 2025-05-31 21:05:31.681551 | orchestrator | skipping: [testbed-node-4] 2025-05-31 21:05:31.681560 | orchestrator | skipping: [testbed-node-5] 2025-05-31 21:05:31.681571 | orchestrator | 2025-05-31 21:05:31.681586 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-05-31 21:05:31.681601 | orchestrator | Saturday 31 May 2025 21:03:43 +0000 (0:00:00.443) 0:00:16.015 ********** 2025-05-31 21:05:31.681624 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--813d0644--8ada--5e52--b3d8--7484365c4567-osd--block--813d0644--8ada--5e52--b3d8--7484365c4567', 'dm-uuid-LVM-5bKczW7C1VtLl6vPfKyu54CNx9UycXMebUF1ZziT0uwTCM1IDLBKWOEOgMMUJHXU'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-31 21:05:31.681642 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b37e5891--99ec--5ce8--8fa7--674876c21edd-osd--block--b37e5891--99ec--5ce8--8fa7--674876c21edd', 'dm-uuid-LVM-xonbQWC1M8CKH5CqnYuw0xh7m1sgK3W0tCmLhupcrZbovffqTpDunDtXxV6VUE2K'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-31 21:05:31.681659 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 21:05:31.681675 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}})  2025-05-31 21:05:31.681692 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 21:05:31.681702 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 21:05:31.681713 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 21:05:31.681760 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 21:05:31.681771 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 21:05:31.681786 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 21:05:31.681801 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8bbc83dc-8efd-4101-bb1b-0d6e4523fdf4', 'scsi-SQEMU_QEMU_HARDDISK_8bbc83dc-8efd-4101-bb1b-0d6e4523fdf4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8bbc83dc-8efd-4101-bb1b-0d6e4523fdf4-part1', 'scsi-SQEMU_QEMU_HARDDISK_8bbc83dc-8efd-4101-bb1b-0d6e4523fdf4-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8bbc83dc-8efd-4101-bb1b-0d6e4523fdf4-part14', 'scsi-SQEMU_QEMU_HARDDISK_8bbc83dc-8efd-4101-bb1b-0d6e4523fdf4-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8bbc83dc-8efd-4101-bb1b-0d6e4523fdf4-part15', 'scsi-SQEMU_QEMU_HARDDISK_8bbc83dc-8efd-4101-bb1b-0d6e4523fdf4-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8bbc83dc-8efd-4101-bb1b-0d6e4523fdf4-part16', 'scsi-SQEMU_QEMU_HARDDISK_8bbc83dc-8efd-4101-bb1b-0d6e4523fdf4-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-31 21:05:31.681815 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7717ad38--094f--5aa6--8c39--f28029f817d5-osd--block--7717ad38--094f--5aa6--8c39--f28029f817d5', 'dm-uuid-LVM-rqk0PFqpYlxzpDIf4x9vdQLuz8Lss3aL12rFSpi6N5KHRdKqji4pQySOOFHq07NU'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-31 21:05:31.681946 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--813d0644--8ada--5e52--b3d8--7484365c4567-osd--block--813d0644--8ada--5e52--b3d8--7484365c4567'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-GFSk2q-BuGm-dqSF-bLh2-DAxl-hoCw-hQLzSv', 'scsi-0QEMU_QEMU_HARDDISK_191d8892-ecee-415a-8f71-2d93b7558573', 'scsi-SQEMU_QEMU_HARDDISK_191d8892-ecee-415a-8f71-2d93b7558573'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-31 21:05:31.681970 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6fa9e552--f12f--547e--b45f--d034b93383af-osd--block--6fa9e552--f12f--547e--b45f--d034b93383af', 'dm-uuid-LVM-VNe4guLBo3JKak4Y0eQw8GQ34xS5HfNgeX5kCgBNXSepANCzMeTln6kCKPHEyQOa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-31 21:05:31.681981 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--b37e5891--99ec--5ce8--8fa7--674876c21edd-osd--block--b37e5891--99ec--5ce8--8fa7--674876c21edd'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-729xGU-lsbd-9XIq-kmAp-1FWn-1Oxj-b66mfM', 'scsi-0QEMU_QEMU_HARDDISK_fb66f732-34d2-45e3-b1b8-d9ba2a3ac758', 'scsi-SQEMU_QEMU_HARDDISK_fb66f732-34d2-45e3-b1b8-d9ba2a3ac758'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-31 21:05:31.681992 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 21:05:31.682003 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5a7b16a5-b25a-49dc-b8e1-bfe6cbb00610', 'scsi-SQEMU_QEMU_HARDDISK_5a7b16a5-b25a-49dc-b8e1-bfe6cbb00610'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-31 21:05:31.682052 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 21:05:31.682074 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-31-19-16-09-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-31 21:05:31.682120 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 21:05:31.682132 | orchestrator | skipping: [testbed-node-3] 2025-05-31 21:05:31.682153 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 21:05:31.682163 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 21:05:31.682173 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 21:05:31.682183 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 21:05:31.682193 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 21:05:31.682211 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_525ad027-7e06-4e85-bfbd-c3ec419229c5', 'scsi-SQEMU_QEMU_HARDDISK_525ad027-7e06-4e85-bfbd-c3ec419229c5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_525ad027-7e06-4e85-bfbd-c3ec419229c5-part1', 'scsi-SQEMU_QEMU_HARDDISK_525ad027-7e06-4e85-bfbd-c3ec419229c5-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_525ad027-7e06-4e85-bfbd-c3ec419229c5-part14', 'scsi-SQEMU_QEMU_HARDDISK_525ad027-7e06-4e85-bfbd-c3ec419229c5-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_525ad027-7e06-4e85-bfbd-c3ec419229c5-part15', 'scsi-SQEMU_QEMU_HARDDISK_525ad027-7e06-4e85-bfbd-c3ec419229c5-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_525ad027-7e06-4e85-bfbd-c3ec419229c5-part16', 'scsi-SQEMU_QEMU_HARDDISK_525ad027-7e06-4e85-bfbd-c3ec419229c5-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-31 21:05:31.682237 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--7717ad38--094f--5aa6--8c39--f28029f817d5-osd--block--7717ad38--094f--5aa6--8c39--f28029f817d5'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-RvL6nI-efKD-n08O-aERP-reg2-htTb-PCRtWf', 'scsi-0QEMU_QEMU_HARDDISK_a9241271-625e-4229-94b1-3d99bba363ae', 'scsi-SQEMU_QEMU_HARDDISK_a9241271-625e-4229-94b1-3d99bba363ae'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-31 21:05:31.682249 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--6fa9e552--f12f--547e--b45f--d034b93383af-osd--block--6fa9e552--f12f--547e--b45f--d034b93383af'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-zzTtsS-E409-dm9d-MlZ3-FUhB-HMBq-AagOCA', 'scsi-0QEMU_QEMU_HARDDISK_1a9ee9a4-914c-40fd-b835-c38474fb60e8', 'scsi-SQEMU_QEMU_HARDDISK_1a9ee9a4-914c-40fd-b835-c38474fb60e8'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-31 21:05:31.682260 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9b14a296-0b0f-456e-ac69-f453c0a27a39', 'scsi-SQEMU_QEMU_HARDDISK_9b14a296-0b0f-456e-ac69-f453c0a27a39'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-31 21:05:31.682270 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-31-19-16-02-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-31 21:05:31.682286 | orchestrator | skipping: [testbed-node-4] 2025-05-31 21:05:31.682296 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--edfa5e9a--3f1a--54c1--83f4--345bb781a14b-osd--block--edfa5e9a--3f1a--54c1--83f4--345bb781a14b', 'dm-uuid-LVM-SorhS3YnnzfqLsHFgec6B7zbheRJ3TQle3cHcPsL0QlTUfAhaqCIQE3oS8Ac1I4s'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-31 21:05:31.682315 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--a23536e0--7351--5f09--a3c0--98b1bc7f8fff-osd--block--a23536e0--7351--5f09--a3c0--98b1bc7f8fff', 'dm-uuid-LVM-7sjYFXr122RUfTn8ayUVUcjwjrsm5zStAZfHFxIaU6C0z0vjASgZwS5CY2oeReQU'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-31 21:05:31.682326 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 21:05:31.682336 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 21:05:31.682346 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 21:05:31.682356 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 21:05:31.682366 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 21:05:31.682376 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 21:05:31.682391 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 21:05:31.682401 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-31 21:05:31.682545 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fbeeb0f2-d22c-4c2b-ae21-9c150f637ac0', 'scsi-SQEMU_QEMU_HARDDISK_fbeeb0f2-d22c-4c2b-ae21-9c150f637ac0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fbeeb0f2-d22c-4c2b-ae21-9c150f637ac0-part1', 'scsi-SQEMU_QEMU_HARDDISK_fbeeb0f2-d22c-4c2b-ae21-9c150f637ac0-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fbeeb0f2-d22c-4c2b-ae21-9c150f637ac0-part14', 'scsi-SQEMU_QEMU_HARDDISK_fbeeb0f2-d22c-4c2b-ae21-9c150f637ac0-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fbeeb0f2-d22c-4c2b-ae21-9c150f637ac0-part15', 'scsi-SQEMU_QEMU_HARDDISK_fbeeb0f2-d22c-4c2b-ae21-9c150f637ac0-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fbeeb0f2-d22c-4c2b-ae21-9c150f637ac0-part16', 'scsi-SQEMU_QEMU_HARDDISK_fbeeb0f2-d22c-4c2b-ae21-9c150f637ac0-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-31 21:05:31.682565 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--edfa5e9a--3f1a--54c1--83f4--345bb781a14b-osd--block--edfa5e9a--3f1a--54c1--83f4--345bb781a14b'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-2ewEst-EnqW-BGCE-wqOa-iD2n-MpCt-rUsJeG', 'scsi-0QEMU_QEMU_HARDDISK_6d52f885-97ca-45c7-bd6a-7862e27ed465', 'scsi-SQEMU_QEMU_HARDDISK_6d52f885-97ca-45c7-bd6a-7862e27ed465'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-31 21:05:31.682583 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--a23536e0--7351--5f09--a3c0--98b1bc7f8fff-osd--block--a23536e0--7351--5f09--a3c0--98b1bc7f8fff'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-vFimBm-nggg-lxYO-Mu1y-5ASg-Qvo5-TxbLvb', 'scsi-0QEMU_QEMU_HARDDISK_727d26bd-0ead-422c-920c-32fac6429b39', 'scsi-SQEMU_QEMU_HARDDISK_727d26bd-0ead-422c-920c-32fac6429b39'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-31 21:05:31.682596 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4f6392d-f8e1-4809-8c10-779f08f2c642', 'scsi-SQEMU_QEMU_HARDDISK_d4f6392d-f8e1-4809-8c10-779f08f2c642'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-31 21:05:31.682616 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-31-19-16-13-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-31 21:05:31.682628 | orchestrator | skipping: [testbed-node-5] 2025-05-31 21:05:31.682639 | orchestrator | 2025-05-31 21:05:31.682650 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-05-31 21:05:31.682661 | orchestrator | Saturday 31 May 2025 21:03:43 +0000 (0:00:00.531) 0:00:16.547 ********** 2025-05-31 21:05:31.682677 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--813d0644--8ada--5e52--b3d8--7484365c4567-osd--block--813d0644--8ada--5e52--b3d8--7484365c4567', 'dm-uuid-LVM-5bKczW7C1VtLl6vPfKyu54CNx9UycXMebUF1ZziT0uwTCM1IDLBKWOEOgMMUJHXU'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 
'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-31 21:05:31.682691 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b37e5891--99ec--5ce8--8fa7--674876c21edd-osd--block--b37e5891--99ec--5ce8--8fa7--674876c21edd', 'dm-uuid-LVM-xonbQWC1M8CKH5CqnYuw0xh7m1sgK3W0tCmLhupcrZbovffqTpDunDtXxV6VUE2K'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-31 21:05:31.682702 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-31 21:05:31.682720 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-31 21:05:31.682732 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-31 21:05:31.682751 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': 
'0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-31 21:05:31.682768 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-31 21:05:31.682780 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-31 21:05:31.682791 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-31 21:05:31.682810 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7717ad38--094f--5aa6--8c39--f28029f817d5-osd--block--7717ad38--094f--5aa6--8c39--f28029f817d5', 'dm-uuid-LVM-rqk0PFqpYlxzpDIf4x9vdQLuz8Lss3aL12rFSpi6N5KHRdKqji4pQySOOFHq07NU'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-31 21:05:31.682820 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-31 21:05:31.682836 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--6fa9e552--f12f--547e--b45f--d034b93383af-osd--block--6fa9e552--f12f--547e--b45f--d034b93383af', 'dm-uuid-LVM-VNe4guLBo3JKak4Y0eQw8GQ34xS5HfNgeX5kCgBNXSepANCzMeTln6kCKPHEyQOa'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-31 21:05:31.682870 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8bbc83dc-8efd-4101-bb1b-0d6e4523fdf4', 'scsi-SQEMU_QEMU_HARDDISK_8bbc83dc-8efd-4101-bb1b-0d6e4523fdf4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8bbc83dc-8efd-4101-bb1b-0d6e4523fdf4-part1', 'scsi-SQEMU_QEMU_HARDDISK_8bbc83dc-8efd-4101-bb1b-0d6e4523fdf4-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8bbc83dc-8efd-4101-bb1b-0d6e4523fdf4-part14', 'scsi-SQEMU_QEMU_HARDDISK_8bbc83dc-8efd-4101-bb1b-0d6e4523fdf4-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8bbc83dc-8efd-4101-bb1b-0d6e4523fdf4-part15', 'scsi-SQEMU_QEMU_HARDDISK_8bbc83dc-8efd-4101-bb1b-0d6e4523fdf4-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8bbc83dc-8efd-4101-bb1b-0d6e4523fdf4-part16', 'scsi-SQEMU_QEMU_HARDDISK_8bbc83dc-8efd-4101-bb1b-0d6e4523fdf4-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-31 21:05:31.682889 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-31 21:05:31.682900 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--813d0644--8ada--5e52--b3d8--7484365c4567-osd--block--813d0644--8ada--5e52--b3d8--7484365c4567'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-GFSk2q-BuGm-dqSF-bLh2-DAxl-hoCw-hQLzSv', 'scsi-0QEMU_QEMU_HARDDISK_191d8892-ecee-415a-8f71-2d93b7558573', 'scsi-SQEMU_QEMU_HARDDISK_191d8892-ecee-415a-8f71-2d93b7558573'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-31 21:05:31.682921 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-31 21:05:31.682932 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--b37e5891--99ec--5ce8--8fa7--674876c21edd-osd--block--b37e5891--99ec--5ce8--8fa7--674876c21edd'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-729xGU-lsbd-9XIq-kmAp-1FWn-1Oxj-b66mfM', 'scsi-0QEMU_QEMU_HARDDISK_fb66f732-34d2-45e3-b1b8-d9ba2a3ac758', 'scsi-SQEMU_QEMU_HARDDISK_fb66f732-34d2-45e3-b1b8-d9ba2a3ac758'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-31 21:05:31.682952 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-31 21:05:31.682962 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5a7b16a5-b25a-49dc-b8e1-bfe6cbb00610', 'scsi-SQEMU_QEMU_HARDDISK_5a7b16a5-b25a-49dc-b8e1-bfe6cbb00610'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-31 21:05:31.682972 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-31 21:05:31.682989 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-31-19-16-09-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-31 21:05:31.683003 | orchestrator | skipping: [testbed-node-4] 
=> (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-31 21:05:31.683014 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-31 21:05:31.683029 | orchestrator | skipping: [testbed-node-3] 2025-05-31 21:05:31.683039 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-31 21:05:31.683049 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-31 21:05:31.683072 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_525ad027-7e06-4e85-bfbd-c3ec419229c5', 'scsi-SQEMU_QEMU_HARDDISK_525ad027-7e06-4e85-bfbd-c3ec419229c5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_525ad027-7e06-4e85-bfbd-c3ec419229c5-part1', 'scsi-SQEMU_QEMU_HARDDISK_525ad027-7e06-4e85-bfbd-c3ec419229c5-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_525ad027-7e06-4e85-bfbd-c3ec419229c5-part14', 'scsi-SQEMU_QEMU_HARDDISK_525ad027-7e06-4e85-bfbd-c3ec419229c5-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_525ad027-7e06-4e85-bfbd-c3ec419229c5-part15', 'scsi-SQEMU_QEMU_HARDDISK_525ad027-7e06-4e85-bfbd-c3ec419229c5-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_525ad027-7e06-4e85-bfbd-c3ec419229c5-part16', 'scsi-SQEMU_QEMU_HARDDISK_525ad027-7e06-4e85-bfbd-c3ec419229c5-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-31 21:05:31.683084 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--7717ad38--094f--5aa6--8c39--f28029f817d5-osd--block--7717ad38--094f--5aa6--8c39--f28029f817d5'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-RvL6nI-efKD-n08O-aERP-reg2-htTb-PCRtWf', 'scsi-0QEMU_QEMU_HARDDISK_a9241271-625e-4229-94b1-3d99bba363ae', 'scsi-SQEMU_QEMU_HARDDISK_a9241271-625e-4229-94b1-3d99bba363ae'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-31 21:05:31.683101 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--6fa9e552--f12f--547e--b45f--d034b93383af-osd--block--6fa9e552--f12f--547e--b45f--d034b93383af'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-zzTtsS-E409-dm9d-MlZ3-FUhB-HMBq-AagOCA', 'scsi-0QEMU_QEMU_HARDDISK_1a9ee9a4-914c-40fd-b835-c38474fb60e8', 'scsi-SQEMU_QEMU_HARDDISK_1a9ee9a4-914c-40fd-b835-c38474fb60e8'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-31 21:05:31.683112 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--edfa5e9a--3f1a--54c1--83f4--345bb781a14b-osd--block--edfa5e9a--3f1a--54c1--83f4--345bb781a14b', 'dm-uuid-LVM-SorhS3YnnzfqLsHFgec6B7zbheRJ3TQle3cHcPsL0QlTUfAhaqCIQE3oS8Ac1I4s'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-31 21:05:31.683130 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_9b14a296-0b0f-456e-ac69-f453c0a27a39', 'scsi-SQEMU_QEMU_HARDDISK_9b14a296-0b0f-456e-ac69-f453c0a27a39'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-31 21:05:31.683145 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a23536e0--7351--5f09--a3c0--98b1bc7f8fff-osd--block--a23536e0--7351--5f09--a3c0--98b1bc7f8fff', 'dm-uuid-LVM-7sjYFXr122RUfTn8ayUVUcjwjrsm5zStAZfHFxIaU6C0z0vjASgZwS5CY2oeReQU'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-31 21:05:31.683161 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': 
['2025-05-31-19-16-02-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-31 21:05:31.683171 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-31 21:05:31.683181 | orchestrator | skipping: [testbed-node-4] 2025-05-31 21:05:31.683192 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-31 21:05:31.683202 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-31 21:05:31.683216 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-31 21:05:31.683231 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-31 21:05:31.683241 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-31 21:05:31.683257 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-31 21:05:31.683267 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-31 21:05:31.683288 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fbeeb0f2-d22c-4c2b-ae21-9c150f637ac0', 'scsi-SQEMU_QEMU_HARDDISK_fbeeb0f2-d22c-4c2b-ae21-9c150f637ac0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fbeeb0f2-d22c-4c2b-ae21-9c150f637ac0-part1', 'scsi-SQEMU_QEMU_HARDDISK_fbeeb0f2-d22c-4c2b-ae21-9c150f637ac0-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fbeeb0f2-d22c-4c2b-ae21-9c150f637ac0-part14', 'scsi-SQEMU_QEMU_HARDDISK_fbeeb0f2-d22c-4c2b-ae21-9c150f637ac0-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fbeeb0f2-d22c-4c2b-ae21-9c150f637ac0-part15', 'scsi-SQEMU_QEMU_HARDDISK_fbeeb0f2-d22c-4c2b-ae21-9c150f637ac0-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fbeeb0f2-d22c-4c2b-ae21-9c150f637ac0-part16', 'scsi-SQEMU_QEMU_HARDDISK_fbeeb0f2-d22c-4c2b-ae21-9c150f637ac0-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-31 21:05:31.683305 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--edfa5e9a--3f1a--54c1--83f4--345bb781a14b-osd--block--edfa5e9a--3f1a--54c1--83f4--345bb781a14b'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-2ewEst-EnqW-BGCE-wqOa-iD2n-MpCt-rUsJeG', 'scsi-0QEMU_QEMU_HARDDISK_6d52f885-97ca-45c7-bd6a-7862e27ed465', 'scsi-SQEMU_QEMU_HARDDISK_6d52f885-97ca-45c7-bd6a-7862e27ed465'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-05-31 21:05:31.683316 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--a23536e0--7351--5f09--a3c0--98b1bc7f8fff-osd--block--a23536e0--7351--5f09--a3c0--98b1bc7f8fff'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-vFimBm-nggg-lxYO-Mu1y-5ASg-Qvo5-TxbLvb', 'scsi-0QEMU_QEMU_HARDDISK_727d26bd-0ead-422c-920c-32fac6429b39', 'scsi-SQEMU_QEMU_HARDDISK_727d26bd-0ead-422c-920c-32fac6429b39'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-05-31 21:05:31.683326 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d4f6392d-f8e1-4809-8c10-779f08f2c642', 'scsi-SQEMU_QEMU_HARDDISK_d4f6392d-f8e1-4809-8c10-779f08f2c642'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-05-31 21:05:31.683342 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-31-19-16-13-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-05-31 21:05:31.683353 | orchestrator | skipping: [testbed-node-5]
2025-05-31 21:05:31.683362 | orchestrator |
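Every item above was skipped with the same false_condition, 'osd_auto_discovery | default(False) | bool': the testbed pins an explicit OSD device list instead of discovering disks at runtime. A minimal sketch of the auto-discovery pattern the task name refers to, with illustrative filter conditions (the role's real conditions may differ):

    # Sketch only: build the `devices` list for OSD creation from gathered
    # hardware facts, keeping bare, unpartitioned, non-removable disks.
    - name: Generate device list when osd_auto_discovery is enabled
      ansible.builtin.set_fact:
        devices: "{{ devices | default([]) + ['/dev/' + item.key] }}"
      loop: "{{ ansible_facts['devices'] | dict2items }}"
      when:
        - osd_auto_discovery | default(False) | bool
        - item.value.removable == '0'                     # drops sr0 (QEMU DVD-ROM)
        - item.value.partitions | length == 0             # drops the partitioned sda root disk
        - item.value.holders | length == 0                # drops sdb/sdc, already LVM PVs for ceph
        - not item.key.startswith(('dm-', 'loop', 'md'))  # drops virtual devices
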
2025-05-31 21:05:31.683372 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2025-05-31 21:05:31.683386 | orchestrator | Saturday 31 May 2025 21:03:44 +0000 (0:00:00.564) 0:00:17.112 **********
2025-05-31 21:05:31.683396 | orchestrator | ok: [testbed-node-3]
2025-05-31 21:05:31.683411 | orchestrator | ok: [testbed-node-4]
2025-05-31 21:05:31.683421 | orchestrator | ok: [testbed-node-5]
2025-05-31 21:05:31.683430 | orchestrator |
2025-05-31 21:05:31.683440 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2025-05-31 21:05:31.683450 | orchestrator | Saturday 31 May 2025 21:03:45 +0000 (0:00:00.680) 0:00:17.792 **********
2025-05-31 21:05:31.683459 | orchestrator | ok: [testbed-node-3]
2025-05-31 21:05:31.683468 | orchestrator | ok: [testbed-node-4]
2025-05-31 21:05:31.683478 | orchestrator | ok: [testbed-node-5]
2025-05-31 21:05:31.683487 | orchestrator |
2025-05-31 21:05:31.683496 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-05-31 21:05:31.683506 | orchestrator | Saturday 31 May 2025 21:03:45 +0000 (0:00:00.453) 0:00:18.246 **********
2025-05-31 21:05:31.683515 | orchestrator | ok: [testbed-node-3]
2025-05-31 21:05:31.683525 | orchestrator | ok: [testbed-node-4]
2025-05-31 21:05:31.683534 | orchestrator | ok: [testbed-node-5]
2025-05-31 21:05:31.683543 | orchestrator |
2025-05-31 21:05:31.683553 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-05-31 21:05:31.683562 | orchestrator | Saturday 31 May 2025 21:03:46 +0000 (0:00:00.633) 0:00:18.879 **********
2025-05-31 21:05:31.683572 | orchestrator | skipping: [testbed-node-3]
2025-05-31 21:05:31.683581 | orchestrator | skipping: [testbed-node-4]
2025-05-31 21:05:31.683590 | orchestrator | skipping: [testbed-node-5]
2025-05-31 21:05:31.683600 | orchestrator |
2025-05-31 21:05:31.683609 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-05-31 21:05:31.683619 | orchestrator | Saturday 31 May 2025 21:03:46 +0000 (0:00:00.281) 0:00:19.161 **********
2025-05-31 21:05:31.683628 | orchestrator | skipping: [testbed-node-3]
2025-05-31 21:05:31.683637 | orchestrator | skipping: [testbed-node-4]
2025-05-31 21:05:31.683647 | orchestrator | skipping: [testbed-node-5]
2025-05-31 21:05:31.683656 | orchestrator |
2025-05-31 21:05:31.683665 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-05-31 21:05:31.683675 | orchestrator | Saturday 31 May 2025 21:03:46 +0000 (0:00:00.374) 0:00:19.536 **********
2025-05-31 21:05:31.683684 | orchestrator | skipping: [testbed-node-3]
2025-05-31 21:05:31.683694 | orchestrator | skipping: [testbed-node-4]
2025-05-31 21:05:31.683703 | orchestrator | skipping: [testbed-node-5]
2025-05-31 21:05:31.683713 | orchestrator |
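The crush-rule block above implements a default-then-override pattern for osd_pool_default_crush_rule: a default fact is set, an existing ceph.conf is consulted, and the override tasks only fire when the option is actually present, hence the skips on all three nodes. A sketch of that pattern (the -1 default and the grep-based read are assumptions, not the role's literal tasks):

    - name: Set default osd_pool_default_crush_rule fact
      ansible.builtin.set_fact:
        osd_pool_default_crush_rule: -1   # assumed sentinel for "use Ceph's builtin default"

    - name: Read osd pool default crush rule
      ansible.builtin.command: grep -E '^osd pool default crush rule' /etc/ceph/ceph.conf
      register: crush_rule_read
      changed_when: false
      failed_when: false

    - name: Set osd_pool_default_crush_rule fact
      ansible.builtin.set_fact:
        osd_pool_default_crush_rule: "{{ crush_rule_read.stdout.split('=') | last | trim }}"
      when:
        - crush_rule_read.rc == 0
        - crush_rule_read.stdout | length > 0
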
2025-05-31 21:05:31.683722 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2025-05-31 21:05:31.683732 | orchestrator | Saturday 31 May 2025 21:03:47 +0000 (0:00:00.464) 0:00:20.000 **********
2025-05-31 21:05:31.683741 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2025-05-31 21:05:31.683751 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2025-05-31 21:05:31.683761 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2025-05-31 21:05:31.683770 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2025-05-31 21:05:31.683780 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2025-05-31 21:05:31.683789 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2025-05-31 21:05:31.683798 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2025-05-31 21:05:31.683808 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2025-05-31 21:05:31.683817 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2025-05-31 21:05:31.683827 | orchestrator |
2025-05-31 21:05:31.683836 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2025-05-31 21:05:31.683846 | orchestrator | Saturday 31 May 2025 21:03:48 +0000 (0:00:00.803) 0:00:20.804 **********
2025-05-31 21:05:31.683906 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-05-31 21:05:31.683917 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-05-31 21:05:31.683926 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-05-31 21:05:31.683936 | orchestrator | skipping: [testbed-node-3]
2025-05-31 21:05:31.683945 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-05-31 21:05:31.683955 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-05-31 21:05:31.683971 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-05-31 21:05:31.683981 | orchestrator | skipping: [testbed-node-4]
2025-05-31 21:05:31.683990 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-05-31 21:05:31.683999 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-05-31 21:05:31.684009 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-05-31 21:05:31.684018 | orchestrator | skipping: [testbed-node-5]
2025-05-31 21:05:31.684028 | orchestrator |
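The ipv4 loop collected one entry per monitor (testbed-node-0 through testbed-node-2) on behalf of every OSD node, while the ipv6 twin was skipped, so all later monitor traffic is addressed over IPv4. The accumulation pattern, sketched with an assumed mon_group_name group and a simplified address lookup:

    - name: Set_fact _monitor_addresses - ipv4
      ansible.builtin.set_fact:
        _monitor_addresses: >-
          {{ _monitor_addresses | default([])
             + [{'name': item,
                 'addr': hostvars[item]['ansible_facts']['default_ipv4']['address']}] }}
      loop: "{{ groups[mon_group_name] }}"
      when: ip_version == 'ipv4'
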
2025-05-31 21:05:31.684037 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2025-05-31 21:05:31.684047 | orchestrator | Saturday 31 May 2025 21:03:48 +0000 (0:00:00.325) 0:00:21.130 **********
2025-05-31 21:05:31.684057 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-31 21:05:31.684067 | orchestrator |
2025-05-31 21:05:31.684076 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-05-31 21:05:31.684087 | orchestrator | Saturday 31 May 2025 21:03:49 +0000 (0:00:00.653) 0:00:21.783 **********
2025-05-31 21:05:31.684096 | orchestrator | skipping: [testbed-node-3]
2025-05-31 21:05:31.684106 | orchestrator | skipping: [testbed-node-4]
2025-05-31 21:05:31.684116 | orchestrator | skipping: [testbed-node-5]
2025-05-31 21:05:31.684125 | orchestrator |
2025-05-31 21:05:31.684140 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-05-31 21:05:31.684151 | orchestrator | Saturday 31 May 2025 21:03:49 +0000 (0:00:00.346) 0:00:22.129 **********
2025-05-31 21:05:31.684160 | orchestrator | skipping: [testbed-node-3]
2025-05-31 21:05:31.684170 | orchestrator | skipping: [testbed-node-4]
2025-05-31 21:05:31.684179 | orchestrator | skipping: [testbed-node-5]
2025-05-31 21:05:31.684189 | orchestrator |
2025-05-31 21:05:31.684198 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-05-31 21:05:31.684208 | orchestrator | Saturday 31 May 2025 21:03:49 +0000 (0:00:00.280) 0:00:22.410 **********
2025-05-31 21:05:31.684222 | orchestrator | skipping: [testbed-node-3]
2025-05-31 21:05:31.684232 | orchestrator | skipping: [testbed-node-4]
2025-05-31 21:05:31.684242 | orchestrator | skipping: [testbed-node-5]
2025-05-31 21:05:31.684251 | orchestrator |
2025-05-31 21:05:31.684260 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2025-05-31 21:05:31.684270 | orchestrator | Saturday 31 May 2025 21:03:50 +0000 (0:00:00.306) 0:00:22.717 **********
2025-05-31 21:05:31.684280 | orchestrator | ok: [testbed-node-3]
2025-05-31 21:05:31.684289 | orchestrator | ok: [testbed-node-4]
2025-05-31 21:05:31.684299 | orchestrator | ok: [testbed-node-5]
2025-05-31 21:05:31.684308 | orchestrator |
2025-05-31 21:05:31.684318 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2025-05-31 21:05:31.684327 | orchestrator | Saturday 31 May 2025 21:03:50 +0000 (0:00:00.553) 0:00:23.271 **********
2025-05-31 21:05:31.684337 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-31 21:05:31.684346 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-31 21:05:31.684356 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-31 21:05:31.684365 | orchestrator | skipping: [testbed-node-3]
2025-05-31 21:05:31.684375 | orchestrator |
2025-05-31 21:05:31.684384 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-05-31 21:05:31.684394 | orchestrator | Saturday 31 May 2025 21:03:51 +0000 (0:00:00.367) 0:00:23.638 **********
2025-05-31 21:05:31.684403 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-31 21:05:31.684413 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-31 21:05:31.684422 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-31 21:05:31.684430 | orchestrator | skipping: [testbed-node-3]
2025-05-31 21:05:31.684438 | orchestrator |
2025-05-31 21:05:31.684446 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-05-31 21:05:31.684461 | orchestrator | Saturday 31 May 2025 21:03:51 +0000 (0:00:00.349) 0:00:23.988 **********
2025-05-31 21:05:31.684468 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-31 21:05:31.684476 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-31 21:05:31.684484 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-31 21:05:31.684492 | orchestrator | skipping: [testbed-node-3]
2025-05-31 21:05:31.684500 | orchestrator |
2025-05-31 21:05:31.684507 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2025-05-31 21:05:31.684515 | orchestrator | Saturday 31 May 2025 21:03:51 +0000 (0:00:00.356) 0:00:24.344 **********
2025-05-31 21:05:31.684523 | orchestrator | ok: [testbed-node-3]
2025-05-31 21:05:31.684531 | orchestrator | ok: [testbed-node-4]
2025-05-31 21:05:31.684539 | orchestrator | ok: [testbed-node-5]
2025-05-31 21:05:31.684546 | orchestrator |
2025-05-31 21:05:31.684554 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2025-05-31 21:05:31.684562 | orchestrator | Saturday 31 May 2025 21:03:52 +0000 (0:00:00.299) 0:00:24.643 **********
2025-05-31 21:05:31.684570 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-05-31 21:05:31.684578 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-05-31 21:05:31.684585 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-05-31 21:05:31.684593 | orchestrator |
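With the block- and interface-based variants all skipped, _radosgw_address falls back to the plain radosgw_address variable, and each RGW host then assembles its rgw_instances list; item=0 shows a single radosgw instance per node. A sketch of that construction (radosgw_num_instances and radosgw_frontend_port are assumed variable names):

    - name: Reset rgw_instances (workaround)
      ansible.builtin.set_fact:
        rgw_instances: []

    - name: Set_fact rgw_instances
      ansible.builtin.set_fact:
        rgw_instances: >-
          {{ rgw_instances
             + [{'instance_name': 'rgw' ~ item,
                 'radosgw_address': _radosgw_address,
                 'radosgw_frontend_port': radosgw_frontend_port | int + item | int}] }}
      loop: "{{ range(0, radosgw_num_instances | default(1)) | list }}"
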
2025-05-31 21:05:31.684601 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2025-05-31 21:05:31.684608 | orchestrator | Saturday 31 May 2025 21:03:52 +0000 (0:00:00.521) 0:00:25.164 **********
2025-05-31 21:05:31.684616 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-05-31 21:05:31.684624 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-05-31 21:05:31.684632 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-05-31 21:05:31.684639 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2025-05-31 21:05:31.684647 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-05-31 21:05:31.684655 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-05-31 21:05:31.684663 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-05-31 21:05:31.684671 | orchestrator |
2025-05-31 21:05:31.684678 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2025-05-31 21:05:31.684686 | orchestrator | Saturday 31 May 2025 21:03:53 +0000 (0:00:00.920) 0:00:26.085 **********
2025-05-31 21:05:31.684694 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-05-31 21:05:31.684701 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-05-31 21:05:31.684709 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-05-31 21:05:31.684717 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2025-05-31 21:05:31.684725 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-05-31 21:05:31.684732 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-05-31 21:05:31.684740 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-05-31 21:05:31.684748 | orchestrator |
2025-05-31 21:05:31.684759 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************
2025-05-31 21:05:31.684768 | orchestrator | Saturday 31 May 2025 21:03:55 +0000 (0:00:01.794) 0:00:27.880 **********
2025-05-31 21:05:31.684775 | orchestrator | skipping: [testbed-node-3]
2025-05-31 21:05:31.684783 | orchestrator | skipping: [testbed-node-4]
2025-05-31 21:05:31.684791 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5
2025-05-31 21:05:31.684799 | orchestrator |
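ceph_run_cmd and ceph_admin_command are computed once and delegated to every host, testbed-manager included, so later tasks can invoke the ceph CLI the same way on containerized and bare-metal hosts; directly afterwards only testbed-node-5, the last OSD node, includes openstack_config.yml, which is why testbed-node-3 and testbed-node-4 skip it. A sketch of the wrapper fact for a containerized deployment (container_binary and ceph_docker_image are assumed variable names):

    - name: Set_fact ceph_run_cmd
      ansible.builtin.set_fact:
        ceph_run_cmd: >-
          {{ container_binary ~ ' run --rm --net=host --entrypoint=ceph ' ~ ceph_docker_image
             if containerized_deployment | default(true) | bool
             else 'ceph' }}
      delegate_to: "{{ item }}"
      delegate_facts: true
      run_once: true
      loop: "{{ groups['all'] }}"
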
'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-31 21:05:31.684886 | orchestrator | 2025-05-31 21:05:31.684894 | orchestrator | TASK [generate keys] *********************************************************** 2025-05-31 21:05:31.684902 | orchestrator | Saturday 31 May 2025 21:04:39 +0000 (0:00:43.795) 0:01:12.031 ********** 2025-05-31 21:05:31.684910 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-31 21:05:31.684917 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-31 21:05:31.684925 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-31 21:05:31.684933 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-31 21:05:31.684940 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-31 21:05:31.684948 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-31 21:05:31.684956 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2025-05-31 21:05:31.684963 | orchestrator | 2025-05-31 21:05:31.684971 | orchestrator | TASK [get keys from monitors] ************************************************** 2025-05-31 21:05:31.684979 | orchestrator | Saturday 31 May 2025 21:05:02 +0000 (0:00:22.787) 0:01:34.819 ********** 2025-05-31 21:05:31.684986 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-31 21:05:31.684994 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-31 21:05:31.685002 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-31 21:05:31.685010 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-31 21:05:31.685018 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-31 21:05:31.685025 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-31 21:05:31.685033 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-05-31 21:05:31.685041 | orchestrator | 2025-05-31 21:05:31.685049 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2025-05-31 21:05:31.685056 | orchestrator | Saturday 31 May 2025 21:05:13 +0000 (0:00:11.296) 0:01:46.115 ********** 2025-05-31 21:05:31.685064 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-31 21:05:31.685078 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-05-31 21:05:31.685086 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-05-31 21:05:31.685094 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-31 21:05:31.685101 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-05-31 21:05:31.685109 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-05-31 21:05:31.685121 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-31 21:05:31.685130 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-05-31 21:05:31.685137 | orchestrator | changed: [testbed-node-5 -> 
testbed-node-2(192.168.16.12)] => (item=None) 2025-05-31 21:05:31.685145 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-31 21:05:31.685153 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-05-31 21:05:31.685164 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-05-31 21:05:31.685172 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-31 21:05:31.685180 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-05-31 21:05:31.685187 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-05-31 21:05:31.685195 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-31 21:05:31.685203 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-05-31 21:05:31.685210 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-05-31 21:05:31.685218 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2025-05-31 21:05:31.685226 | orchestrator | 2025-05-31 21:05:31.685234 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-31 21:05:31.685245 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2025-05-31 21:05:31.685262 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-05-31 21:05:31.685277 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-05-31 21:05:31.685291 | orchestrator | 2025-05-31 21:05:31.685306 | orchestrator | 2025-05-31 21:05:31.685320 | orchestrator | 2025-05-31 21:05:31.685333 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-31 21:05:31.685342 | orchestrator | Saturday 31 May 2025 21:05:31 +0000 (0:00:17.651) 0:02:03.766 ********** 2025-05-31 21:05:31.685349 | orchestrator | =============================================================================== 2025-05-31 21:05:31.685357 | orchestrator | create openstack pool(s) ----------------------------------------------- 43.80s 2025-05-31 21:05:31.685365 | orchestrator | generate keys ---------------------------------------------------------- 22.79s 2025-05-31 21:05:31.685372 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 17.65s 2025-05-31 21:05:31.685380 | orchestrator | get keys from monitors ------------------------------------------------- 11.30s 2025-05-31 21:05:31.685388 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 2.15s 2025-05-31 21:05:31.685395 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.02s 2025-05-31 21:05:31.685403 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.79s 2025-05-31 21:05:31.685411 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.92s 2025-05-31 21:05:31.685425 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.80s 2025-05-31 21:05:31.685433 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.78s 2025-05-31 21:05:31.685440 | orchestrator | 
ceph-facts : Check if podman binary is present -------------------------- 0.75s 2025-05-31 21:05:31.685448 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.68s 2025-05-31 21:05:31.685455 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.65s 2025-05-31 21:05:31.685463 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.64s 2025-05-31 21:05:31.685471 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.63s 2025-05-31 21:05:31.685479 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.62s 2025-05-31 21:05:31.685486 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.62s 2025-05-31 21:05:31.685494 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.57s 2025-05-31 21:05:31.685501 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.55s 2025-05-31 21:05:31.685509 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.53s 2025-05-31 21:05:31.685517 | orchestrator | 2025-05-31 21:05:31 | INFO  | Task 8e4fdc63-10f4-4e3b-a009-7d7c1a02fb9d is in state STARTED 2025-05-31 21:05:31.685525 | orchestrator | 2025-05-31 21:05:31 | INFO  | Task 0dc408f5-5766-4286-9789-2291b25cddf6 is in state STARTED 2025-05-31 21:05:31.685533 | orchestrator | 2025-05-31 21:05:31 | INFO  | Wait 1 second(s) until the next check 2025-05-31 21:05:34.734934 | orchestrator | 2025-05-31 21:05:34 | INFO  | Task 8e4fdc63-10f4-4e3b-a009-7d7c1a02fb9d is in state STARTED 2025-05-31 21:05:34.736454 | orchestrator | 2025-05-31 21:05:34 | INFO  | Task 57f57ac0-4df4-466e-8c75-e7ae15de47e6 is in state STARTED 2025-05-31 21:05:34.739229 | orchestrator | 2025-05-31 21:05:34 | INFO  | Task 0dc408f5-5766-4286-9789-2291b25cddf6 is in state STARTED 2025-05-31 21:05:34.739283 | orchestrator | 2025-05-31 21:05:34 | INFO  | Wait 1 second(s) until the next check 2025-05-31 21:05:37.782990 | orchestrator | 2025-05-31 21:05:37 | INFO  | Task 8e4fdc63-10f4-4e3b-a009-7d7c1a02fb9d is in state STARTED 2025-05-31 21:05:37.784486 | orchestrator | 2025-05-31 21:05:37 | INFO  | Task 57f57ac0-4df4-466e-8c75-e7ae15de47e6 is in state STARTED 2025-05-31 21:05:37.786202 | orchestrator | 2025-05-31 21:05:37 | INFO  | Task 0dc408f5-5766-4286-9789-2291b25cddf6 is in state STARTED 2025-05-31 21:05:37.786227 | orchestrator | 2025-05-31 21:05:37 | INFO  | Wait 1 second(s) until the next check 2025-05-31 21:05:40.840505 | orchestrator | 2025-05-31 21:05:40 | INFO  | Task 8e4fdc63-10f4-4e3b-a009-7d7c1a02fb9d is in state STARTED 2025-05-31 21:05:40.841550 | orchestrator | 2025-05-31 21:05:40 | INFO  | Task 57f57ac0-4df4-466e-8c75-e7ae15de47e6 is in state STARTED 2025-05-31 21:05:40.843447 | orchestrator | 2025-05-31 21:05:40 | INFO  | Task 0dc408f5-5766-4286-9789-2291b25cddf6 is in state STARTED 2025-05-31 21:05:40.843483 | orchestrator | 2025-05-31 21:05:40 | INFO  | Wait 1 second(s) until the next check 2025-05-31 21:05:43.900719 | orchestrator | 2025-05-31 21:05:43 | INFO  | Task 8e4fdc63-10f4-4e3b-a009-7d7c1a02fb9d is in state STARTED 2025-05-31 21:05:43.902618 | orchestrator | 2025-05-31 21:05:43 | INFO  | Task 57f57ac0-4df4-466e-8c75-e7ae15de47e6 is in state STARTED 2025-05-31 21:05:43.905102 | orchestrator | 2025-05-31 21:05:43 | INFO  | Task 0dc408f5-5766-4286-9789-2291b25cddf6 is in state STARTED 
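Editor's note on the "create openstack pool(s)" task earlier in this play: each pool dictionary (name, pg_num/pgp_num 32, size 3, pg_autoscale_mode off, application rbd, rule replicated_rule) maps onto a handful of ceph CLI calls executed on the first monitor. A minimal sketch of the equivalent commands for the volumes pool, wrapped as an Ansible task; the 'mons' group name is an assumption, and the role's real module invocation differs:

- name: Create an RBD pool the way the task above does (illustrative sketch)
  ansible.builtin.command: "{{ item }}"
  delegate_to: "{{ groups['mons'][0] }}"  # group name assumed; the log shows delegation to testbed-node-0
  loop:
    - ceph osd pool create volumes 32 32 replicated replicated_rule  # 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule'
    - ceph osd pool set volumes size 3                               # 'size': 3
    - ceph osd pool set volumes pg_autoscale_mode off                # 'pg_autoscale_mode': False
    - ceph osd pool application enable volumes rbd                   # 'application': 'rbd'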
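Editor's note on the surrounding "Task ... is in state STARTED" lines: the OSISM manager polls its task queue once per second until each task finishes. The same wait-until-done pattern can be sketched with Ansible's retry keywords; the osism-task-state helper script and its plain-text output are hypothetical, only the one-second interval is taken from the log:

- name: Poll a manager task until it reports SUCCESS (hypothetical helper)
  ansible.builtin.command: /usr/local/bin/osism-task-state 8e4fdc63-10f4-4e3b-a009-7d7c1a02fb9d
  register: task_state
  changed_when: false
  retries: 120                         # give up after ~2 minutes
  delay: 1                             # mirrors "Wait 1 second(s) until the next check"
  until: task_state.stdout == "SUCCESS"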
2025-05-31 21:06:02.241275 | orchestrator | 2025-05-31 21:06:02 | INFO  | Task 8e4fdc63-10f4-4e3b-a009-7d7c1a02fb9d is in state STARTED
2025-05-31 21:06:02.243190 | orchestrator | 2025-05-31 21:06:02 | INFO  | Task 57f57ac0-4df4-466e-8c75-e7ae15de47e6 is in state SUCCESS
2025-05-31 21:06:02.245570 | orchestrator | 2025-05-31 21:06:02 | INFO  | Task 564414c9-48ee-472a-8be8-8232cfd43550 is in state STARTED
2025-05-31 21:06:02.247023 | orchestrator | 2025-05-31 21:06:02 | INFO  | Task 0dc408f5-5766-4286-9789-2291b25cddf6 is in state STARTED
2025-05-31 21:06:02.247265 | orchestrator | 2025-05-31 21:06:02 | INFO  | Wait 1 second(s) until the next check
2025-05-31 21:06:26.622250 | orchestrator | 2025-05-31 21:06:26 | INFO  | Task
8e4fdc63-10f4-4e3b-a009-7d7c1a02fb9d is in state SUCCESS 2025-05-31 21:06:26.623384 | orchestrator | 2025-05-31 21:06:26.623486 | orchestrator | 2025-05-31 21:06:26.623503 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2025-05-31 21:06:26.623517 | orchestrator | 2025-05-31 21:06:26.623528 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2025-05-31 21:06:26.623540 | orchestrator | Saturday 31 May 2025 21:05:35 +0000 (0:00:00.159) 0:00:00.159 ********** 2025-05-31 21:06:26.623551 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2025-05-31 21:06:26.623599 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-05-31 21:06:26.623610 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-05-31 21:06:26.623621 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2025-05-31 21:06:26.623639 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-05-31 21:06:26.623650 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2025-05-31 21:06:26.623662 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2025-05-31 21:06:26.623686 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2025-05-31 21:06:26.623698 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2025-05-31 21:06:26.623709 | orchestrator | 2025-05-31 21:06:26.623720 | orchestrator | TASK [Create share directory] ************************************************** 2025-05-31 21:06:26.623731 | orchestrator | Saturday 31 May 2025 21:05:39 +0000 (0:00:04.013) 0:00:04.172 ********** 2025-05-31 21:06:26.623742 | orchestrator | changed: [testbed-manager -> localhost] 2025-05-31 21:06:26.623753 | orchestrator | 2025-05-31 21:06:26.623764 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2025-05-31 21:06:26.623775 | orchestrator | Saturday 31 May 2025 21:05:40 +0000 (0:00:00.920) 0:00:05.092 ********** 2025-05-31 21:06:26.623786 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2025-05-31 21:06:26.623796 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-05-31 21:06:26.623807 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-05-31 21:06:26.623818 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2025-05-31 21:06:26.623828 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-05-31 21:06:26.623839 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2025-05-31 21:06:26.623850 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2025-05-31 21:06:26.623901 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2025-05-31 21:06:26.623912 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2025-05-31 21:06:26.623922 | 
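Editor's note on this play: it is a fetch-then-fan-out pattern, reading each keyring once from the first monitor and then writing it out on the manager. A rough sketch of that shape; the group name, key list, and destination path are illustrative, not the play's real variables:

- name: Fetch all ceph keys (illustrative sketch)
  ansible.builtin.slurp:
    src: "/etc/ceph/{{ item }}"
  register: ceph_keys
  delegate_to: "{{ groups['ceph_mons'][0] }}"  # the log shows delegation to testbed-node-0
  loop:
    - ceph.client.admin.keyring
    - ceph.client.cinder.keyring
    - ceph.client.glance.keyring

- name: Write ceph keys to the share directory (illustrative sketch)
  ansible.builtin.copy:
    content: "{{ item.content | b64decode }}"
    dest: "/share/{{ item.item }}"             # path invented for illustration
    mode: "0600"
  delegate_to: localhost                       # matches "testbed-manager -> localhost" above
  loop: "{{ ceph_keys.results }}"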
orchestrator | 2025-05-31 21:06:26.623933 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2025-05-31 21:06:26.623944 | orchestrator | Saturday 31 May 2025 21:05:53 +0000 (0:00:12.701) 0:00:17.794 ********** 2025-05-31 21:06:26.623955 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2025-05-31 21:06:26.623966 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-05-31 21:06:26.623977 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-05-31 21:06:26.623988 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2025-05-31 21:06:26.623998 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-05-31 21:06:26.624009 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2025-05-31 21:06:26.624020 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2025-05-31 21:06:26.624030 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2025-05-31 21:06:26.624041 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2025-05-31 21:06:26.624052 | orchestrator | 2025-05-31 21:06:26.624063 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-31 21:06:26.624074 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-31 21:06:26.624095 | orchestrator | 2025-05-31 21:06:26.624106 | orchestrator | 2025-05-31 21:06:26.624117 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-31 21:06:26.624128 | orchestrator | Saturday 31 May 2025 21:05:59 +0000 (0:00:06.395) 0:00:24.189 ********** 2025-05-31 21:06:26.624139 | orchestrator | =============================================================================== 2025-05-31 21:06:26.624150 | orchestrator | Write ceph keys to the share directory --------------------------------- 12.70s 2025-05-31 21:06:26.624160 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.40s 2025-05-31 21:06:26.624171 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.01s 2025-05-31 21:06:26.624182 | orchestrator | Create share directory -------------------------------------------------- 0.92s 2025-05-31 21:06:26.624193 | orchestrator | 2025-05-31 21:06:26.624204 | orchestrator | 2025-05-31 21:06:26.624215 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-31 21:06:26.624225 | orchestrator | 2025-05-31 21:06:26.624251 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-31 21:06:26.624262 | orchestrator | Saturday 31 May 2025 21:04:36 +0000 (0:00:00.224) 0:00:00.224 ********** 2025-05-31 21:06:26.624273 | orchestrator | ok: [testbed-node-0] 2025-05-31 21:06:26.624285 | orchestrator | ok: [testbed-node-1] 2025-05-31 21:06:26.624296 | orchestrator | ok: [testbed-node-2] 2025-05-31 21:06:26.624307 | orchestrator | 2025-05-31 21:06:26.624318 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-31 21:06:26.624329 | orchestrator | Saturday 31 May 2025 21:04:37 +0000 (0:00:00.232) 0:00:00.457 ********** 2025-05-31 21:06:26.624340 | orchestrator | ok: [testbed-node-0] => 
(item=enable_horizon_True) 2025-05-31 21:06:26.624351 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2025-05-31 21:06:26.624362 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2025-05-31 21:06:26.624373 | orchestrator | 2025-05-31 21:06:26.624384 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2025-05-31 21:06:26.624395 | orchestrator | 2025-05-31 21:06:26.624406 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-31 21:06:26.624416 | orchestrator | Saturday 31 May 2025 21:04:37 +0000 (0:00:00.319) 0:00:00.777 ********** 2025-05-31 21:06:26.624427 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 21:06:26.624438 | orchestrator | 2025-05-31 21:06:26.624455 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2025-05-31 21:06:26.624467 | orchestrator | Saturday 31 May 2025 21:04:37 +0000 (0:00:00.421) 0:00:01.198 ********** 2025-05-31 21:06:26.624485 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-31 21:06:26.624530 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 
'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-31 21:06:26.624545 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-31 21:06:26.624563 | orchestrator | 2025-05-31 21:06:26.624575 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2025-05-31 21:06:26.624586 | orchestrator | Saturday 31 May 2025 21:04:38 +0000 (0:00:00.952) 0:00:02.151 ********** 2025-05-31 21:06:26.624597 | orchestrator | ok: [testbed-node-0] 2025-05-31 21:06:26.624607 | orchestrator | ok: [testbed-node-1] 2025-05-31 21:06:26.624619 | orchestrator | ok: [testbed-node-2] 2025-05-31 21:06:26.624630 | orchestrator | 2025-05-31 21:06:26.624641 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-31 21:06:26.624651 | orchestrator | Saturday 31 May 2025 21:04:39 +0000 (0:00:00.362) 0:00:02.514 ********** 2025-05-31 21:06:26.624671 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2025-05-31 21:06:26.624682 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2025-05-31 21:06:26.624700 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2025-05-31 21:06:26.624711 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2025-05-31 21:06:26.624722 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2025-05-31 21:06:26.624733 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2025-05-31 21:06:26.624743 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2025-05-31 21:06:26.624754 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2025-05-31 21:06:26.624765 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2025-05-31 21:06:26.624776 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2025-05-31 21:06:26.624787 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2025-05-31 21:06:26.624798 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2025-05-31 21:06:26.624809 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2025-05-31 21:06:26.624824 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2025-05-31 21:06:26.624835 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2025-05-31 21:06:26.624846 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2025-05-31 21:06:26.624913 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2025-05-31 21:06:26.624932 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2025-05-31 21:06:26.624950 | orchestrator | skipping: 
[testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2025-05-31 21:06:26.624973 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2025-05-31 21:06:26.624984 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2025-05-31 21:06:26.624994 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2025-05-31 21:06:26.625005 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2025-05-31 21:06:26.625016 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2025-05-31 21:06:26.625028 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2025-05-31 21:06:26.625041 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2025-05-31 21:06:26.625052 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2025-05-31 21:06:26.625064 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2025-05-31 21:06:26.625074 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2025-05-31 21:06:26.625085 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2025-05-31 21:06:26.625096 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2025-05-31 21:06:26.625107 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2025-05-31 21:06:26.625118 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2025-05-31 21:06:26.625129 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2025-05-31 21:06:26.625140 | orchestrator | 2025-05-31 21:06:26.625151 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-31 21:06:26.625161 | orchestrator | Saturday 31 May 2025 21:04:39 +0000 (0:00:00.616) 0:00:03.130 ********** 2025-05-31 21:06:26.625173 | orchestrator | ok: [testbed-node-0] 2025-05-31 21:06:26.625184 | orchestrator | ok: [testbed-node-1] 2025-05-31 21:06:26.625194 | orchestrator | ok: [testbed-node-2] 2025-05-31 21:06:26.625205 | orchestrator | 2025-05-31 21:06:26.625216 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-31 21:06:26.625227 | orchestrator | Saturday 31 May 2025 21:04:40 +0000 (0:00:00.261) 0:00:03.391 ********** 2025-05-31 21:06:26.625238 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:06:26.625248 | orchestrator | 2025-05-31 21:06:26.625259 | 
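Editor's note on the include list above: kolla-ansible's per-service policy handling is a single loop that skips disabled services and includes policy_item.yml once per enabled one, which is why the log shows a skipping line per disabled service followed by one include per enabled service. Stripped to its shape, the pattern looks roughly like this; the service list is abbreviated and variable names are assumed rather than copied from the role:

- name: Include per-service policy tasks (illustrative sketch)
  ansible.builtin.include_tasks: policy_item.yml
  when: item.enabled | bool
  loop:
    - { name: "heat", enabled: "no" }     # skipped, as in the log
    - { name: "cinder", enabled: "yes" }  # included
    - { name: "neutron", enabled: true }  # included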
orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-31 21:06:26.625277 | orchestrator | Saturday 31 May 2025 21:04:40 +0000 (0:00:00.116) 0:00:03.507 ********** 2025-05-31 21:06:26.625288 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:06:26.625299 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:06:26.625310 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:06:26.625321 | orchestrator | 2025-05-31 21:06:26.625331 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-31 21:06:26.625342 | orchestrator | Saturday 31 May 2025 21:04:40 +0000 (0:00:00.355) 0:00:03.863 ********** 2025-05-31 21:06:26.625353 | orchestrator | ok: [testbed-node-0] 2025-05-31 21:06:26.625365 | orchestrator | ok: [testbed-node-1] 2025-05-31 21:06:26.625376 | orchestrator | ok: [testbed-node-2] 2025-05-31 21:06:26.625393 | orchestrator | 2025-05-31 21:06:26.625404 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-31 21:06:26.625415 | orchestrator | Saturday 31 May 2025 21:04:40 +0000 (0:00:00.280) 0:00:04.144 ********** 2025-05-31 21:06:26.625426 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:06:26.625437 | orchestrator | 2025-05-31 21:06:26.625448 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-31 21:06:26.625458 | orchestrator | Saturday 31 May 2025 21:04:40 +0000 (0:00:00.100) 0:00:04.245 ********** 2025-05-31 21:06:26.625469 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:06:26.625480 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:06:26.625491 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:06:26.625502 | orchestrator | 2025-05-31 21:06:26.625512 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-31 21:06:26.625529 | orchestrator | Saturday 31 May 2025 21:04:41 +0000 (0:00:00.243) 0:00:04.488 ********** 2025-05-31 21:06:26.625540 | orchestrator | ok: [testbed-node-0] 2025-05-31 21:06:26.625551 | orchestrator | ok: [testbed-node-1] 2025-05-31 21:06:26.625562 | orchestrator | ok: [testbed-node-2] 2025-05-31 21:06:26.625573 | orchestrator | 2025-05-31 21:06:26.625583 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-31 21:06:26.625594 | orchestrator | Saturday 31 May 2025 21:04:41 +0000 (0:00:00.251) 0:00:04.740 ********** 2025-05-31 21:06:26.625605 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:06:26.625616 | orchestrator | 2025-05-31 21:06:26.625626 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-31 21:06:26.625637 | orchestrator | Saturday 31 May 2025 21:04:41 +0000 (0:00:00.195) 0:00:04.935 ********** 2025-05-31 21:06:26.625648 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:06:26.625658 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:06:26.625669 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:06:26.625680 | orchestrator | 2025-05-31 21:06:26.625691 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-31 21:06:26.625701 | orchestrator | Saturday 31 May 2025 21:04:41 +0000 (0:00:00.213) 0:00:05.148 ********** 2025-05-31 21:06:26.625712 | orchestrator | ok: [testbed-node-0] 2025-05-31 21:06:26.625723 | orchestrator | ok: [testbed-node-1] 2025-05-31 21:06:26.625734 | 
orchestrator | ok: [testbed-node-2] 2025-05-31 21:06:26.625744 | orchestrator | 2025-05-31 21:06:26.625756 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-31 21:06:26.625766 | orchestrator | Saturday 31 May 2025 21:04:42 +0000 (0:00:00.239) 0:00:05.387 ********** 2025-05-31 21:06:26.625777 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:06:26.625795 | orchestrator | 2025-05-31 21:06:26.625806 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-31 21:06:26.625817 | orchestrator | Saturday 31 May 2025 21:04:42 +0000 (0:00:00.097) 0:00:05.485 ********** 2025-05-31 21:06:26.625828 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:06:26.625839 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:06:26.625850 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:06:26.625888 | orchestrator | 2025-05-31 21:06:26.625901 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-31 21:06:26.625911 | orchestrator | Saturday 31 May 2025 21:04:42 +0000 (0:00:00.258) 0:00:05.743 ********** 2025-05-31 21:06:26.625923 | orchestrator | ok: [testbed-node-0] 2025-05-31 21:06:26.625934 | orchestrator | ok: [testbed-node-1] 2025-05-31 21:06:26.625944 | orchestrator | ok: [testbed-node-2] 2025-05-31 21:06:26.625955 | orchestrator | 2025-05-31 21:06:26.625966 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-31 21:06:26.625977 | orchestrator | Saturday 31 May 2025 21:04:42 +0000 (0:00:00.342) 0:00:06.086 ********** 2025-05-31 21:06:26.625987 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:06:26.625998 | orchestrator | 2025-05-31 21:06:26.626009 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-31 21:06:26.626089 | orchestrator | Saturday 31 May 2025 21:04:42 +0000 (0:00:00.129) 0:00:06.215 ********** 2025-05-31 21:06:26.626103 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:06:26.626114 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:06:26.626124 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:06:26.626135 | orchestrator | 2025-05-31 21:06:26.626146 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-31 21:06:26.626165 | orchestrator | Saturday 31 May 2025 21:04:43 +0000 (0:00:00.294) 0:00:06.510 ********** 2025-05-31 21:06:26.626176 | orchestrator | ok: [testbed-node-0] 2025-05-31 21:06:26.626187 | orchestrator | ok: [testbed-node-1] 2025-05-31 21:06:26.626197 | orchestrator | ok: [testbed-node-2] 2025-05-31 21:06:26.626208 | orchestrator | 2025-05-31 21:06:26.626219 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-31 21:06:26.626230 | orchestrator | Saturday 31 May 2025 21:04:43 +0000 (0:00:00.309) 0:00:06.819 ********** 2025-05-31 21:06:26.626240 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:06:26.626251 | orchestrator | 2025-05-31 21:06:26.626262 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-31 21:06:26.626273 | orchestrator | Saturday 31 May 2025 21:04:43 +0000 (0:00:00.130) 0:00:06.949 ********** 2025-05-31 21:06:26.626284 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:06:26.626295 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:06:26.626306 | orchestrator | skipping: 
[testbed-node-2] 2025-05-31 21:06:26.626317 | orchestrator | 2025-05-31 21:06:26.626328 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-31 21:06:26.626338 | orchestrator | Saturday 31 May 2025 21:04:44 +0000 (0:00:00.555) 0:00:07.505 ********** 2025-05-31 21:06:26.626349 | orchestrator | ok: [testbed-node-0] 2025-05-31 21:06:26.626360 | orchestrator | ok: [testbed-node-1] 2025-05-31 21:06:26.626370 | orchestrator | ok: [testbed-node-2] 2025-05-31 21:06:26.626381 | orchestrator | 2025-05-31 21:06:26.626401 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-31 21:06:26.626419 | orchestrator | Saturday 31 May 2025 21:04:44 +0000 (0:00:00.323) 0:00:07.828 ********** 2025-05-31 21:06:26.626438 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:06:26.626455 | orchestrator | 2025-05-31 21:06:26.626476 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-31 21:06:26.626494 | orchestrator | Saturday 31 May 2025 21:04:44 +0000 (0:00:00.138) 0:00:07.966 ********** 2025-05-31 21:06:26.626512 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:06:26.626530 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:06:26.626548 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:06:26.626568 | orchestrator | 2025-05-31 21:06:26.626585 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-31 21:06:26.626604 | orchestrator | Saturday 31 May 2025 21:04:44 +0000 (0:00:00.300) 0:00:08.267 ********** 2025-05-31 21:06:26.626622 | orchestrator | ok: [testbed-node-0] 2025-05-31 21:06:26.626641 | orchestrator | ok: [testbed-node-1] 2025-05-31 21:06:26.626658 | orchestrator | ok: [testbed-node-2] 2025-05-31 21:06:26.626677 | orchestrator | 2025-05-31 21:06:26.626695 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-31 21:06:26.626714 | orchestrator | Saturday 31 May 2025 21:04:45 +0000 (0:00:00.354) 0:00:08.622 ********** 2025-05-31 21:06:26.626733 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:06:26.626752 | orchestrator | 2025-05-31 21:06:26.626781 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-31 21:06:26.626799 | orchestrator | Saturday 31 May 2025 21:04:45 +0000 (0:00:00.145) 0:00:08.768 ********** 2025-05-31 21:06:26.626818 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:06:26.626831 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:06:26.626842 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:06:26.626881 | orchestrator | 2025-05-31 21:06:26.626896 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-31 21:06:26.626906 | orchestrator | Saturday 31 May 2025 21:04:45 +0000 (0:00:00.497) 0:00:09.266 ********** 2025-05-31 21:06:26.626928 | orchestrator | ok: [testbed-node-0] 2025-05-31 21:06:26.626939 | orchestrator | ok: [testbed-node-1] 2025-05-31 21:06:26.626949 | orchestrator | ok: [testbed-node-2] 2025-05-31 21:06:26.626960 | orchestrator | 2025-05-31 21:06:26.626971 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-31 21:06:26.626981 | orchestrator | Saturday 31 May 2025 21:04:46 +0000 (0:00:00.322) 0:00:09.588 ********** 2025-05-31 21:06:26.626992 | orchestrator | skipping: [testbed-node-0] 2025-05-31 
21:06:26.627003 | orchestrator | 2025-05-31 21:06:26.627014 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-31 21:06:26.627025 | orchestrator | Saturday 31 May 2025 21:04:46 +0000 (0:00:00.128) 0:00:09.717 ********** 2025-05-31 21:06:26.627036 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:06:26.627047 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:06:26.627057 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:06:26.627068 | orchestrator | 2025-05-31 21:06:26.627079 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-31 21:06:26.627090 | orchestrator | Saturday 31 May 2025 21:04:46 +0000 (0:00:00.287) 0:00:10.004 ********** 2025-05-31 21:06:26.627100 | orchestrator | ok: [testbed-node-0] 2025-05-31 21:06:26.627111 | orchestrator | ok: [testbed-node-1] 2025-05-31 21:06:26.627122 | orchestrator | ok: [testbed-node-2] 2025-05-31 21:06:26.627133 | orchestrator | 2025-05-31 21:06:26.627143 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-31 21:06:26.627154 | orchestrator | Saturday 31 May 2025 21:04:47 +0000 (0:00:00.490) 0:00:10.495 ********** 2025-05-31 21:06:26.627165 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:06:26.627176 | orchestrator | 2025-05-31 21:06:26.627187 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-31 21:06:26.627197 | orchestrator | Saturday 31 May 2025 21:04:47 +0000 (0:00:00.141) 0:00:10.637 ********** 2025-05-31 21:06:26.627208 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:06:26.627219 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:06:26.627230 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:06:26.627241 | orchestrator | 2025-05-31 21:06:26.627251 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2025-05-31 21:06:26.627262 | orchestrator | Saturday 31 May 2025 21:04:47 +0000 (0:00:00.308) 0:00:10.945 ********** 2025-05-31 21:06:26.627273 | orchestrator | changed: [testbed-node-1] 2025-05-31 21:06:26.627284 | orchestrator | changed: [testbed-node-2] 2025-05-31 21:06:26.627295 | orchestrator | changed: [testbed-node-0] 2025-05-31 21:06:26.627305 | orchestrator | 2025-05-31 21:06:26.627316 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2025-05-31 21:06:26.627326 | orchestrator | Saturday 31 May 2025 21:04:49 +0000 (0:00:01.603) 0:00:12.549 ********** 2025-05-31 21:06:26.627337 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-05-31 21:06:26.627348 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-05-31 21:06:26.627359 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-05-31 21:06:26.627369 | orchestrator | 2025-05-31 21:06:26.627380 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2025-05-31 21:06:26.627391 | orchestrator | Saturday 31 May 2025 21:04:51 +0000 (0:00:01.837) 0:00:14.387 ********** 2025-05-31 21:06:26.627401 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-05-31 21:06:26.627413 | orchestrator | changed: [testbed-node-1] => 
(item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-05-31 21:06:26.627423 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-05-31 21:06:26.627434 | orchestrator | 2025-05-31 21:06:26.627445 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2025-05-31 21:06:26.627469 | orchestrator | Saturday 31 May 2025 21:04:53 +0000 (0:00:02.174) 0:00:16.561 ********** 2025-05-31 21:06:26.627492 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-05-31 21:06:26.627503 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-05-31 21:06:26.627514 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-05-31 21:06:26.627525 | orchestrator | 2025-05-31 21:06:26.627537 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2025-05-31 21:06:26.627556 | orchestrator | Saturday 31 May 2025 21:04:55 +0000 (0:00:01.800) 0:00:18.362 ********** 2025-05-31 21:06:26.627575 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:06:26.627593 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:06:26.627613 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:06:26.627626 | orchestrator | 2025-05-31 21:06:26.627637 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2025-05-31 21:06:26.627648 | orchestrator | Saturday 31 May 2025 21:04:55 +0000 (0:00:00.280) 0:00:18.643 ********** 2025-05-31 21:06:26.627658 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:06:26.627669 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:06:26.627680 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:06:26.627690 | orchestrator | 2025-05-31 21:06:26.627701 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-31 21:06:26.627718 | orchestrator | Saturday 31 May 2025 21:04:55 +0000 (0:00:00.306) 0:00:18.949 ********** 2025-05-31 21:06:26.627729 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 21:06:26.627740 | orchestrator | 2025-05-31 21:06:26.627751 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2025-05-31 21:06:26.627762 | orchestrator | Saturday 31 May 2025 21:04:56 +0000 (0:00:00.796) 0:00:19.745 ********** 2025-05-31 21:06:26.627775 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-31 21:06:26.627815 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-31 21:06:26.627829 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-31 21:06:26.627850 | orchestrator | 2025-05-31 21:06:26.627895 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2025-05-31 21:06:26.627906 | orchestrator | Saturday 31 May 2025 21:04:58 +0000 (0:00:01.613) 0:00:21.359 ********** 2025-05-31 21:06:26.627934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-31 21:06:26.627948 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:06:26.627962 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-31 21:06:26.628018 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': 
{'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-31 21:06:26.628040 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:06:26.628060 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:06:26.628071 | orchestrator | 2025-05-31 21:06:26.628082 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2025-05-31 21:06:26.628093 | orchestrator | Saturday 31 May 2025 21:04:58 +0000 (0:00:00.667) 0:00:22.026 ********** 2025-05-31 21:06:26.628113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-31 21:06:26.628134 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:06:26.628151 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-31 21:06:26.628164 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:06:26.628184 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 
'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-31 21:06:26.628202 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:06:26.628213 | orchestrator | 2025-05-31 21:06:26.628224 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2025-05-31 21:06:26.628235 | orchestrator | Saturday 31 May 2025 21:04:59 +0000 (0:00:01.014) 0:00:23.041 ********** 2025-05-31 21:06:26.628251 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-31 21:06:26.628286 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-31 21:06:26.628300 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 
'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-31 21:06:26.628318 | orchestrator | 2025-05-31 21:06:26.628332 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-31 21:06:26.628351 | orchestrator | Saturday 31 May 2025 21:05:00 +0000 (0:00:01.143) 0:00:24.184 ********** 2025-05-31 21:06:26.628370 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:06:26.628387 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:06:26.628405 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:06:26.628422 | orchestrator | 2025-05-31 21:06:26.628442 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-31 21:06:26.628460 | orchestrator | Saturday 31 May 2025 21:05:01 +0000 (0:00:00.323) 0:00:24.507 ********** 2025-05-31 21:06:26.628478 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 21:06:26.628496 | orchestrator | 2025-05-31 21:06:26.628515 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2025-05-31 21:06:26.628534 | orchestrator | Saturday 31 May 2025 21:05:01 +0000 (0:00:00.774) 0:00:25.282 ********** 2025-05-31 21:06:26.628553 | orchestrator | changed: [testbed-node-0] 2025-05-31 21:06:26.628572 | orchestrator | 2025-05-31 21:06:26.628601 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2025-05-31 21:06:26.628621 | orchestrator | Saturday 31 May 2025 21:05:04 +0000 (0:00:02.107) 0:00:27.390 ********** 2025-05-31 21:06:26.628639 | orchestrator | changed: [testbed-node-0] 2025-05-31 21:06:26.628657 | orchestrator | 2025-05-31 21:06:26.628668 | 
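The two tasks just above create Horizon's database and database user before the bootstrap container runs. As a rough sketch of what they accomplish, assuming a MariaDB endpoint reachable from the deploy host and the PyMySQL driver (kolla-ansible actually drives this through its own toolbox modules; host and passwords here are placeholders):

    # Hypothetical equivalent of "Creating Horizon database" and
    # "Creating Horizon database user and setting permissions".
    import pymysql

    conn = pymysql.connect(host="192.168.16.9",  # placeholder DB VIP
                           user="root", password="REPLACE_ME",
                           autocommit=True)
    with conn.cursor() as cur:
        cur.execute("CREATE DATABASE IF NOT EXISTS horizon")
        # %% escapes the literal % host wildcard for the driver
        cur.execute("CREATE USER IF NOT EXISTS 'horizon'@'%%' "
                    "IDENTIFIED BY %s", ("REPLACE_ME",))
        cur.execute("GRANT ALL PRIVILEGES ON horizon.* TO 'horizon'@'%'")
    conn.close()

Both statements are idempotent, which is why these tasks can be re-run safely on an existing deployment.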
orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2025-05-31 21:06:26.628679 | orchestrator | Saturday 31 May 2025 21:05:06 +0000 (0:00:02.037) 0:00:29.427 ********** 2025-05-31 21:06:26.628690 | orchestrator | changed: [testbed-node-0] 2025-05-31 21:06:26.628701 | orchestrator | 2025-05-31 21:06:26.628711 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-05-31 21:06:26.628722 | orchestrator | Saturday 31 May 2025 21:05:20 +0000 (0:00:14.026) 0:00:43.454 ********** 2025-05-31 21:06:26.628733 | orchestrator | 2025-05-31 21:06:26.628744 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-05-31 21:06:26.628754 | orchestrator | Saturday 31 May 2025 21:05:20 +0000 (0:00:00.063) 0:00:43.518 ********** 2025-05-31 21:06:26.628765 | orchestrator | 2025-05-31 21:06:26.628776 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-05-31 21:06:26.628787 | orchestrator | Saturday 31 May 2025 21:05:20 +0000 (0:00:00.068) 0:00:43.586 ********** 2025-05-31 21:06:26.628803 | orchestrator | 2025-05-31 21:06:26.628822 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2025-05-31 21:06:26.628881 | orchestrator | Saturday 31 May 2025 21:05:20 +0000 (0:00:00.067) 0:00:43.654 ********** 2025-05-31 21:06:26.628900 | orchestrator | changed: [testbed-node-0] 2025-05-31 21:06:26.628911 | orchestrator | changed: [testbed-node-1] 2025-05-31 21:06:26.628922 | orchestrator | changed: [testbed-node-2] 2025-05-31 21:06:26.628936 | orchestrator | 2025-05-31 21:06:26.628954 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-31 21:06:26.628974 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2025-05-31 21:06:26.629006 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-05-31 21:06:26.629020 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-05-31 21:06:26.629031 | orchestrator | 2025-05-31 21:06:26.629042 | orchestrator | 2025-05-31 21:06:26.629052 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-31 21:06:26.629064 | orchestrator | Saturday 31 May 2025 21:06:25 +0000 (0:01:04.700) 0:01:48.354 ********** 2025-05-31 21:06:26.629075 | orchestrator | =============================================================================== 2025-05-31 21:06:26.629085 | orchestrator | horizon : Restart horizon container ------------------------------------ 64.70s 2025-05-31 21:06:26.629096 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 14.03s 2025-05-31 21:06:26.629106 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.17s 2025-05-31 21:06:26.629117 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.11s 2025-05-31 21:06:26.629127 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.04s 2025-05-31 21:06:26.629138 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.84s 2025-05-31 21:06:26.629149 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.80s 2025-05-31 
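The restarted horizon containers carry the healthcheck defined in the deploy items above: healthcheck_curl against http://<node-ip>:80 every 30 s, with 3 retries and a 30 s timeout. A minimal sketch of what such an HTTP probe amounts to (the real kolla healthcheck_curl script may behave differently, e.g. around redirects or TLS):

    # HTTP healthcheck probe in the spirit of kolla's healthcheck_curl:
    # exit 0 when the endpoint answers with a non-error status, 1 otherwise.
    import sys
    import urllib.error
    import urllib.request

    def probe(url: str, timeout: float = 30.0) -> bool:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return 200 <= resp.status < 400
        except (urllib.error.URLError, OSError):
            return False

    if __name__ == "__main__":
        # URL taken from the testbed-node-0 item in the log
        sys.exit(0 if probe("http://192.168.16.10:80") else 1)

Docker evaluates the exit code: after 3 consecutive failures the container is flagged unhealthy.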
21:06:26.629159 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.61s 2025-05-31 21:06:26.629170 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.60s 2025-05-31 21:06:26.629180 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.14s 2025-05-31 21:06:26.629191 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.01s 2025-05-31 21:06:26.629201 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 0.95s 2025-05-31 21:06:26.629212 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.80s 2025-05-31 21:06:26.629223 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.77s 2025-05-31 21:06:26.629233 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.67s 2025-05-31 21:06:26.629244 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.62s 2025-05-31 21:06:26.629254 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.56s 2025-05-31 21:06:26.629265 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.50s 2025-05-31 21:06:26.629276 | orchestrator | horizon : Update policy file name --------------------------------------- 0.49s 2025-05-31 21:06:26.629286 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.42s 2025-05-31 21:06:26.629297 | orchestrator | 2025-05-31 21:06:26 | INFO  | Task 564414c9-48ee-472a-8be8-8232cfd43550 is in state STARTED 2025-05-31 21:06:26.629308 | orchestrator | 2025-05-31 21:06:26 | INFO  | Task 0dc408f5-5766-4286-9789-2291b25cddf6 is in state STARTED 2025-05-31 21:06:26.629319 | orchestrator | 2025-05-31 21:06:26 | INFO  | Wait 1 second(s) until the next check 2025-05-31 21:06:29.676040 | orchestrator | 2025-05-31 21:06:29 | INFO  | Task 564414c9-48ee-472a-8be8-8232cfd43550 is in state STARTED 2025-05-31 21:06:29.678272 | orchestrator | 2025-05-31 21:06:29 | INFO  | Task 0dc408f5-5766-4286-9789-2291b25cddf6 is in state STARTED 2025-05-31 21:06:29.678309 | orchestrator | 2025-05-31 21:06:29 | INFO  | Wait 1 second(s) until the next check 2025-05-31 21:06:32.726157 | orchestrator | 2025-05-31 21:06:32 | INFO  | Task 564414c9-48ee-472a-8be8-8232cfd43550 is in state STARTED 2025-05-31 21:06:32.727672 | orchestrator | 2025-05-31 21:06:32 | INFO  | Task 0dc408f5-5766-4286-9789-2291b25cddf6 is in state STARTED 2025-05-31 21:06:32.727787 | orchestrator | 2025-05-31 21:06:32 | INFO  | Wait 1 second(s) until the next check 2025-05-31 21:06:35.771111 | orchestrator | 2025-05-31 21:06:35 | INFO  | Task 564414c9-48ee-472a-8be8-8232cfd43550 is in state STARTED 2025-05-31 21:06:35.772388 | orchestrator | 2025-05-31 21:06:35 | INFO  | Task 0dc408f5-5766-4286-9789-2291b25cddf6 is in state STARTED 2025-05-31 21:06:35.772456 | orchestrator | 2025-05-31 21:06:35 | INFO  | Wait 1 second(s) until the next check 2025-05-31 21:06:38.826737 | orchestrator | 2025-05-31 21:06:38 | INFO  | Task 564414c9-48ee-472a-8be8-8232cfd43550 is in state STARTED 2025-05-31 21:06:38.828063 | orchestrator | 2025-05-31 21:06:38 | INFO  | Task 0dc408f5-5766-4286-9789-2291b25cddf6 is in state STARTED 2025-05-31 21:06:38.828112 | orchestrator | 2025-05-31 21:06:38 | INFO  | Wait 1 second(s) until the next check 2025-05-31 21:06:41.878112 
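The INFO lines that follow come from a watcher that repeatedly queries the state of each submitted task and sleeps between rounds. A generic sketch of that poll-and-wait pattern, with get_state as a stand-in for whatever state lookup the OSISM tooling actually performs:

    # Generic poll loop over asynchronous task states, mirroring the
    # "is in state STARTED ... Wait 1 second(s)" cadence above.
    import time

    def wait_for_tasks(task_ids, get_state, interval=1.0):
        """Block until every watched task reports SUCCESS."""
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = get_state(task_id)  # hypothetical lookup
                print(f"Task {task_id} is in state {state}")
                if state == "SUCCESS":
                    pending.discard(task_id)
                elif state == "FAILURE":
                    raise RuntimeError(f"Task {task_id} failed")
            if pending:
                print(f"Wait {interval:.0f} second(s) until the next check")
                time.sleep(interval)

Note the rounds in the log land roughly 3 s apart despite the 1 s sleep; the state queries themselves take time.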
| orchestrator | 2025-05-31 21:06:41 | INFO  | Task 564414c9-48ee-472a-8be8-8232cfd43550 is in state STARTED 2025-05-31 21:06:41.880522 | orchestrator | 2025-05-31 21:06:41 | INFO  | Task 0dc408f5-5766-4286-9789-2291b25cddf6 is in state STARTED 2025-05-31 21:06:41.880557 | orchestrator | 2025-05-31 21:06:41 | INFO  | Wait 1 second(s) until the next check 2025-05-31 21:06:44.934706 | orchestrator | 2025-05-31 21:06:44 | INFO  | Task 564414c9-48ee-472a-8be8-8232cfd43550 is in state STARTED 2025-05-31 21:06:44.935588 | orchestrator | 2025-05-31 21:06:44 | INFO  | Task 0dc408f5-5766-4286-9789-2291b25cddf6 is in state STARTED 2025-05-31 21:06:44.935622 | orchestrator | 2025-05-31 21:06:44 | INFO  | Wait 1 second(s) until the next check 2025-05-31 21:06:47.995681 | orchestrator | 2025-05-31 21:06:47 | INFO  | Task 564414c9-48ee-472a-8be8-8232cfd43550 is in state STARTED 2025-05-31 21:06:47.997502 | orchestrator | 2025-05-31 21:06:47 | INFO  | Task 0dc408f5-5766-4286-9789-2291b25cddf6 is in state STARTED 2025-05-31 21:06:47.997550 | orchestrator | 2025-05-31 21:06:47 | INFO  | Wait 1 second(s) until the next check 2025-05-31 21:06:51.043430 | orchestrator | 2025-05-31 21:06:51 | INFO  | Task 564414c9-48ee-472a-8be8-8232cfd43550 is in state STARTED 2025-05-31 21:06:51.043540 | orchestrator | 2025-05-31 21:06:51 | INFO  | Task 0dc408f5-5766-4286-9789-2291b25cddf6 is in state STARTED 2025-05-31 21:06:51.043556 | orchestrator | 2025-05-31 21:06:51 | INFO  | Wait 1 second(s) until the next check 2025-05-31 21:06:54.096084 | orchestrator | 2025-05-31 21:06:54 | INFO  | Task 8b1035ba-f6d8-4145-a2c3-c61acd0089bf is in state STARTED 2025-05-31 21:06:54.097747 | orchestrator | 2025-05-31 21:06:54 | INFO  | Task 70dbb8d5-dd20-4f4c-9490-24ba85b18d14 is in state STARTED 2025-05-31 21:06:54.100503 | orchestrator | 2025-05-31 21:06:54 | INFO  | Task 63184a78-8d07-4380-b1ad-fd2c2380ef94 is in state STARTED 2025-05-31 21:06:54.101837 | orchestrator | 2025-05-31 21:06:54 | INFO  | Task 564414c9-48ee-472a-8be8-8232cfd43550 is in state SUCCESS 2025-05-31 21:06:54.103611 | orchestrator | 2025-05-31 21:06:54 | INFO  | Task 0dc408f5-5766-4286-9789-2291b25cddf6 is in state STARTED 2025-05-31 21:06:54.103637 | orchestrator | 2025-05-31 21:06:54 | INFO  | Wait 1 second(s) until the next check 2025-05-31 21:06:57.179702 | orchestrator | 2025-05-31 21:06:57 | INFO  | Task 8b1035ba-f6d8-4145-a2c3-c61acd0089bf is in state STARTED 2025-05-31 21:06:57.179795 | orchestrator | 2025-05-31 21:06:57 | INFO  | Task 70dbb8d5-dd20-4f4c-9490-24ba85b18d14 is in state STARTED 2025-05-31 21:06:57.179810 | orchestrator | 2025-05-31 21:06:57 | INFO  | Task 63184a78-8d07-4380-b1ad-fd2c2380ef94 is in state STARTED 2025-05-31 21:06:57.180793 | orchestrator | 2025-05-31 21:06:57 | INFO  | Task 0dc408f5-5766-4286-9789-2291b25cddf6 is in state STARTED 2025-05-31 21:06:57.180849 | orchestrator | 2025-05-31 21:06:57 | INFO  | Wait 1 second(s) until the next check 2025-05-31 21:07:00.216836 | orchestrator | 2025-05-31 21:07:00 | INFO  | Task fdb100a2-c263-4307-b911-c35c8d55a0f1 is in state STARTED 2025-05-31 21:07:00.217000 | orchestrator | 2025-05-31 21:07:00 | INFO  | Task 8b1035ba-f6d8-4145-a2c3-c61acd0089bf is in state STARTED 2025-05-31 21:07:00.217014 | orchestrator | 2025-05-31 21:07:00 | INFO  | Task 70dbb8d5-dd20-4f4c-9490-24ba85b18d14 is in state SUCCESS 2025-05-31 21:07:00.217408 | orchestrator | 2025-05-31 21:07:00 | INFO  | Task 63184a78-8d07-4380-b1ad-fd2c2380ef94 is in state STARTED 2025-05-31 21:07:00.217506 | orchestrator | 
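The watched set itself changes here: at 21:06:54 task 564414c9 reaches SUCCESS while three new tasks appear, and at 21:07:00 task 70dbb8d5 follows while fdb100a2 and 2b540658 join. When the set is dynamic like this, logging only transitions keeps the output compact; a small sketch of that idea (illustrative, not the OSISM watcher):

    # Report only state *changes* between two polling rounds.
    def diff_states(prev: dict, curr: dict) -> list:
        events = [f"Task {t} -> {s}" for t, s in curr.items()
                  if prev.get(t) != s]
        events += [f"Task {t} left the watch set"
                   for t in prev.keys() - curr.keys()]
        return events

    prev = {"70dbb8d5": "STARTED", "8b1035ba": "STARTED"}
    curr = {"70dbb8d5": "SUCCESS", "8b1035ba": "STARTED",
            "fdb100a2": "STARTED"}
    print("\n".join(diff_states(prev, curr)))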
2025-05-31 21:07:00 | INFO  | Task 2b540658-29be-4d3e-af99-fcd6eb6c289c is in state STARTED 2025-05-31 21:07:00.218332 | orchestrator | 2025-05-31 21:07:00 | INFO  | Task 0dc408f5-5766-4286-9789-2291b25cddf6 is in state STARTED 2025-05-31 21:07:00.218372 | orchestrator | 2025-05-31 21:07:00 | INFO  | Wait 1 second(s) until the next check 2025-05-31 21:07:03.263171 | orchestrator | 2025-05-31 21:07:03 | INFO  | Task fdb100a2-c263-4307-b911-c35c8d55a0f1 is in state STARTED 2025-05-31 21:07:03.264206 | orchestrator | 2025-05-31 21:07:03 | INFO  | Task 8b1035ba-f6d8-4145-a2c3-c61acd0089bf is in state STARTED 2025-05-31 21:07:03.265107 | orchestrator | 2025-05-31 21:07:03 | INFO  | Task 63184a78-8d07-4380-b1ad-fd2c2380ef94 is in state STARTED 2025-05-31 21:07:03.266015 | orchestrator | 2025-05-31 21:07:03 | INFO  | Task 2b540658-29be-4d3e-af99-fcd6eb6c289c is in state STARTED 2025-05-31 21:07:03.266958 | orchestrator | 2025-05-31 21:07:03 | INFO  | Task 0dc408f5-5766-4286-9789-2291b25cddf6 is in state STARTED 2025-05-31 21:07:03.267845 | orchestrator | 2025-05-31 21:07:03 | INFO  | Wait 1 second(s) until the next check 2025-05-31 21:07:06.332613 | orchestrator | 2025-05-31 21:07:06 | INFO  | Task fdb100a2-c263-4307-b911-c35c8d55a0f1 is in state STARTED 2025-05-31 21:07:06.332720 | orchestrator | 2025-05-31 21:07:06 | INFO  | Task 8b1035ba-f6d8-4145-a2c3-c61acd0089bf is in state STARTED 2025-05-31 21:07:06.334787 | orchestrator | 2025-05-31 21:07:06 | INFO  | Task 63184a78-8d07-4380-b1ad-fd2c2380ef94 is in state STARTED 2025-05-31 21:07:06.335310 | orchestrator | 2025-05-31 21:07:06 | INFO  | Task 2b540658-29be-4d3e-af99-fcd6eb6c289c is in state STARTED 2025-05-31 21:07:06.337559 | orchestrator | 2025-05-31 21:07:06 | INFO  | Task 0dc408f5-5766-4286-9789-2291b25cddf6 is in state STARTED 2025-05-31 21:07:06.337584 | orchestrator | 2025-05-31 21:07:06 | INFO  | Wait 1 second(s) until the next check 2025-05-31 21:07:09.377952 | orchestrator | 2025-05-31 21:07:09 | INFO  | Task fdb100a2-c263-4307-b911-c35c8d55a0f1 is in state STARTED 2025-05-31 21:07:09.378277 | orchestrator | 2025-05-31 21:07:09 | INFO  | Task 8b1035ba-f6d8-4145-a2c3-c61acd0089bf is in state STARTED 2025-05-31 21:07:09.379216 | orchestrator | 2025-05-31 21:07:09 | INFO  | Task 63184a78-8d07-4380-b1ad-fd2c2380ef94 is in state STARTED 2025-05-31 21:07:09.380080 | orchestrator | 2025-05-31 21:07:09 | INFO  | Task 2b540658-29be-4d3e-af99-fcd6eb6c289c is in state STARTED 2025-05-31 21:07:09.380595 | orchestrator | 2025-05-31 21:07:09 | INFO  | Task 0dc408f5-5766-4286-9789-2291b25cddf6 is in state STARTED 2025-05-31 21:07:09.380629 | orchestrator | 2025-05-31 21:07:09 | INFO  | Wait 1 second(s) until the next check 2025-05-31 21:07:12.413141 | orchestrator | 2025-05-31 21:07:12 | INFO  | Task fdb100a2-c263-4307-b911-c35c8d55a0f1 is in state STARTED 2025-05-31 21:07:12.413714 | orchestrator | 2025-05-31 21:07:12 | INFO  | Task 8b1035ba-f6d8-4145-a2c3-c61acd0089bf is in state STARTED 2025-05-31 21:07:12.415491 | orchestrator | 2025-05-31 21:07:12 | INFO  | Task 63184a78-8d07-4380-b1ad-fd2c2380ef94 is in state STARTED 2025-05-31 21:07:12.416723 | orchestrator | 2025-05-31 21:07:12 | INFO  | Task 2b540658-29be-4d3e-af99-fcd6eb6c289c is in state STARTED 2025-05-31 21:07:12.420151 | orchestrator | 2025-05-31 21:07:12 | INFO  | Task 0dc408f5-5766-4286-9789-2291b25cddf6 is in state STARTED 2025-05-31 21:07:12.420203 | orchestrator | 2025-05-31 21:07:12 | INFO  | Wait 1 second(s) until the next check 2025-05-31 21:07:15.474572 | orchestrator | 
2025-05-31 21:07:15 | INFO  | Task fdb100a2-c263-4307-b911-c35c8d55a0f1 is in state STARTED 2025-05-31 21:07:15.475360 | orchestrator | 2025-05-31 21:07:15 | INFO  | Task 8b1035ba-f6d8-4145-a2c3-c61acd0089bf is in state STARTED 2025-05-31 21:07:15.478304 | orchestrator | 2025-05-31 21:07:15 | INFO  | Task 63184a78-8d07-4380-b1ad-fd2c2380ef94 is in state STARTED 2025-05-31 21:07:15.479687 | orchestrator | 2025-05-31 21:07:15 | INFO  | Task 2b540658-29be-4d3e-af99-fcd6eb6c289c is in state STARTED 2025-05-31 21:07:15.480732 | orchestrator | 2025-05-31 21:07:15 | INFO  | Task 0dc408f5-5766-4286-9789-2291b25cddf6 is in state STARTED 2025-05-31 21:07:15.480802 | orchestrator | 2025-05-31 21:07:15 | INFO  | Wait 1 second(s) until the next check 2025-05-31 21:07:18.516425 | orchestrator | 2025-05-31 21:07:18 | INFO  | Task fdb100a2-c263-4307-b911-c35c8d55a0f1 is in state STARTED 2025-05-31 21:07:18.516946 | orchestrator | 2025-05-31 21:07:18 | INFO  | Task 8b1035ba-f6d8-4145-a2c3-c61acd0089bf is in state STARTED 2025-05-31 21:07:18.518378 | orchestrator | 2025-05-31 21:07:18 | INFO  | Task 63184a78-8d07-4380-b1ad-fd2c2380ef94 is in state STARTED 2025-05-31 21:07:18.520396 | orchestrator | 2025-05-31 21:07:18 | INFO  | Task 2b540658-29be-4d3e-af99-fcd6eb6c289c is in state STARTED 2025-05-31 21:07:18.523422 | orchestrator | 2025-05-31 21:07:18 | INFO  | Task 0dc408f5-5766-4286-9789-2291b25cddf6 is in state STARTED 2025-05-31 21:07:18.523468 | orchestrator | 2025-05-31 21:07:18 | INFO  | Wait 1 second(s) until the next check 2025-05-31 21:07:21.559717 | orchestrator | 2025-05-31 21:07:21 | INFO  | Task fdb100a2-c263-4307-b911-c35c8d55a0f1 is in state STARTED 2025-05-31 21:07:21.563965 | orchestrator | 2025-05-31 21:07:21 | INFO  | Task 8b1035ba-f6d8-4145-a2c3-c61acd0089bf is in state STARTED 2025-05-31 21:07:21.564565 | orchestrator | 2025-05-31 21:07:21 | INFO  | Task 63184a78-8d07-4380-b1ad-fd2c2380ef94 is in state STARTED 2025-05-31 21:07:21.568229 | orchestrator | 2025-05-31 21:07:21 | INFO  | Task 2b540658-29be-4d3e-af99-fcd6eb6c289c is in state STARTED 2025-05-31 21:07:21.568278 | orchestrator | 2025-05-31 21:07:21 | INFO  | Task 0dc408f5-5766-4286-9789-2291b25cddf6 is in state SUCCESS 2025-05-31 21:07:21.569388 | orchestrator | 2025-05-31 21:07:21.569429 | orchestrator | 2025-05-31 21:07:21.569442 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2025-05-31 21:07:21.569454 | orchestrator | 2025-05-31 21:07:21.569465 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2025-05-31 21:07:21.569476 | orchestrator | Saturday 31 May 2025 21:06:03 +0000 (0:00:00.215) 0:00:00.215 ********** 2025-05-31 21:07:21.569488 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2025-05-31 21:07:21.569500 | orchestrator | 2025-05-31 21:07:21.569540 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2025-05-31 21:07:21.569553 | orchestrator | Saturday 31 May 2025 21:06:03 +0000 (0:00:00.195) 0:00:00.411 ********** 2025-05-31 21:07:21.569589 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2025-05-31 21:07:21.569601 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2025-05-31 21:07:21.569666 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2025-05-31 21:07:21.569680 | orchestrator | 2025-05-31 
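The cephclient role begins by ensuring its directory tree exists; the tasks that follow then render templated configuration (ceph.conf.j2) and the keyring into it. A compact sketch of that create-then-render pattern using jinja2 directly, with placeholder template variables (the role itself goes through Ansible's file and template modules):

    # Mirrors "Create required directories" and "Copy configuration files".
    from pathlib import Path
    from jinja2 import Template

    for d in ("/opt/cephclient/configuration", "/opt/cephclient/data"):
        Path(d).mkdir(parents=True, exist_ok=True)  # needs root for /opt

    src = "[global]\nfsid = {{ fsid }}\nmon host = {{ mon_host }}\n"
    conf = Template(src).render(fsid="PLACEHOLDER-FSID",      # placeholder
                                mon_host="192.168.16.10")     # placeholder
    Path("/opt/cephclient/configuration/ceph.conf").write_text(conf)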
21:07:21.569691 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2025-05-31 21:07:21.569702 | orchestrator | Saturday 31 May 2025 21:06:05 +0000 (0:00:01.179) 0:00:01.591 ********** 2025-05-31 21:07:21.569713 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2025-05-31 21:07:21.569724 | orchestrator | 2025-05-31 21:07:21.569763 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2025-05-31 21:07:21.569775 | orchestrator | Saturday 31 May 2025 21:06:06 +0000 (0:00:01.126) 0:00:02.717 ********** 2025-05-31 21:07:21.569797 | orchestrator | changed: [testbed-manager] 2025-05-31 21:07:21.569808 | orchestrator | 2025-05-31 21:07:21.569819 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2025-05-31 21:07:21.569921 | orchestrator | Saturday 31 May 2025 21:06:07 +0000 (0:00:00.948) 0:00:03.666 ********** 2025-05-31 21:07:21.569937 | orchestrator | changed: [testbed-manager] 2025-05-31 21:07:21.569948 | orchestrator | 2025-05-31 21:07:21.569959 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2025-05-31 21:07:21.569972 | orchestrator | Saturday 31 May 2025 21:06:07 +0000 (0:00:00.824) 0:00:04.491 ********** 2025-05-31 21:07:21.569984 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 2025-05-31 21:07:21.569996 | orchestrator | ok: [testbed-manager] 2025-05-31 21:07:21.570009 | orchestrator | 2025-05-31 21:07:21.570084 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2025-05-31 21:07:21.570097 | orchestrator | Saturday 31 May 2025 21:06:43 +0000 (0:00:35.355) 0:00:39.846 ********** 2025-05-31 21:07:21.570110 | orchestrator | changed: [testbed-manager] => (item=ceph) 2025-05-31 21:07:21.570122 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2025-05-31 21:07:21.570135 | orchestrator | changed: [testbed-manager] => (item=rados) 2025-05-31 21:07:21.570147 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2025-05-31 21:07:21.570159 | orchestrator | changed: [testbed-manager] => (item=rbd) 2025-05-31 21:07:21.570171 | orchestrator | 2025-05-31 21:07:21.570184 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2025-05-31 21:07:21.570196 | orchestrator | Saturday 31 May 2025 21:06:47 +0000 (0:00:03.963) 0:00:43.810 ********** 2025-05-31 21:07:21.570209 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2025-05-31 21:07:21.570221 | orchestrator | 2025-05-31 21:07:21.570233 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2025-05-31 21:07:21.570245 | orchestrator | Saturday 31 May 2025 21:06:47 +0000 (0:00:00.458) 0:00:44.268 ********** 2025-05-31 21:07:21.570257 | orchestrator | skipping: [testbed-manager] 2025-05-31 21:07:21.570269 | orchestrator | 2025-05-31 21:07:21.570280 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2025-05-31 21:07:21.570291 | orchestrator | Saturday 31 May 2025 21:06:47 +0000 (0:00:00.131) 0:00:44.400 ********** 2025-05-31 21:07:21.570301 | orchestrator | skipping: [testbed-manager] 2025-05-31 21:07:21.570312 | orchestrator | 2025-05-31 21:07:21.570322 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient 
service] ******* 2025-05-31 21:07:21.570333 | orchestrator | Saturday 31 May 2025 21:06:48 +0000 (0:00:00.292) 0:00:44.692 ********** 2025-05-31 21:07:21.570344 | orchestrator | changed: [testbed-manager] 2025-05-31 21:07:21.570355 | orchestrator | 2025-05-31 21:07:21.570366 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2025-05-31 21:07:21.570376 | orchestrator | Saturday 31 May 2025 21:06:49 +0000 (0:00:01.435) 0:00:46.129 ********** 2025-05-31 21:07:21.570387 | orchestrator | changed: [testbed-manager] 2025-05-31 21:07:21.570398 | orchestrator | 2025-05-31 21:07:21.570408 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2025-05-31 21:07:21.570419 | orchestrator | Saturday 31 May 2025 21:06:50 +0000 (0:00:00.850) 0:00:46.979 ********** 2025-05-31 21:07:21.570441 | orchestrator | changed: [testbed-manager] 2025-05-31 21:07:21.570452 | orchestrator | 2025-05-31 21:07:21.570463 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2025-05-31 21:07:21.570474 | orchestrator | Saturday 31 May 2025 21:06:51 +0000 (0:00:00.533) 0:00:47.513 ********** 2025-05-31 21:07:21.570485 | orchestrator | ok: [testbed-manager] => (item=ceph) 2025-05-31 21:07:21.570496 | orchestrator | ok: [testbed-manager] => (item=rados) 2025-05-31 21:07:21.570506 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2025-05-31 21:07:21.570517 | orchestrator | ok: [testbed-manager] => (item=rbd) 2025-05-31 21:07:21.570527 | orchestrator | 2025-05-31 21:07:21.570538 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-31 21:07:21.570556 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-31 21:07:21.570569 | orchestrator | 2025-05-31 21:07:21.570579 | orchestrator | 2025-05-31 21:07:21.570612 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-31 21:07:21.570624 | orchestrator | Saturday 31 May 2025 21:06:52 +0000 (0:00:01.389) 0:00:48.902 ********** 2025-05-31 21:07:21.570635 | orchestrator | =============================================================================== 2025-05-31 21:07:21.570646 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 35.36s 2025-05-31 21:07:21.570656 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 3.96s 2025-05-31 21:07:21.570667 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.44s 2025-05-31 21:07:21.570678 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.39s 2025-05-31 21:07:21.570688 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.18s 2025-05-31 21:07:21.570699 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.13s 2025-05-31 21:07:21.570710 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.95s 2025-05-31 21:07:21.570720 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.85s 2025-05-31 21:07:21.570731 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.82s 2025-05-31 21:07:21.570742 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.53s 2025-05-31 
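"Manage cephclient service" above needed one retry (10 retries left) before succeeding, and the handlers afterwards restart the container and wait for a healthy state. A sketch of that retry-until-healthy idiom against Docker's healthcheck status, assuming the container is named cephclient and defines a healthcheck:

    # Poll `docker inspect` until the container reports healthy.
    import subprocess
    import time

    def wait_until_healthy(name: str, retries: int = 10, delay: float = 5.0):
        fmt = "{{.State.Health.Status}}"
        for attempt in range(1, retries + 1):
            res = subprocess.run(["docker", "inspect", "--format", fmt, name],
                                 capture_output=True, text=True)
            if res.returncode == 0 and res.stdout.strip() == "healthy":
                return
            print(f"FAILED - RETRYING ({retries - attempt} retries left)")
            time.sleep(delay)
        raise RuntimeError(f"{name} did not become healthy")

    wait_until_healthy("cephclient")

The 35 s spent in that task is consistent with a few such polling rounds while the image was pulled and the service started.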
21:07:21.570752 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.46s 2025-05-31 21:07:21.570763 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.29s 2025-05-31 21:07:21.570774 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.20s 2025-05-31 21:07:21.570784 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.13s 2025-05-31 21:07:21.570795 | orchestrator | 2025-05-31 21:07:21.570806 | orchestrator | 2025-05-31 21:07:21.570816 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-31 21:07:21.570827 | orchestrator | 2025-05-31 21:07:21.570838 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-31 21:07:21.570849 | orchestrator | Saturday 31 May 2025 21:06:56 +0000 (0:00:00.168) 0:00:00.168 ********** 2025-05-31 21:07:21.570886 | orchestrator | ok: [testbed-node-0] 2025-05-31 21:07:21.570898 | orchestrator | ok: [testbed-node-1] 2025-05-31 21:07:21.570909 | orchestrator | ok: [testbed-node-2] 2025-05-31 21:07:21.570920 | orchestrator | 2025-05-31 21:07:21.570930 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-31 21:07:21.570941 | orchestrator | Saturday 31 May 2025 21:06:56 +0000 (0:00:00.281) 0:00:00.449 ********** 2025-05-31 21:07:21.570952 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-05-31 21:07:21.570963 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-05-31 21:07:21.570973 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-05-31 21:07:21.570984 | orchestrator | 2025-05-31 21:07:21.570994 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2025-05-31 21:07:21.571013 | orchestrator | 2025-05-31 21:07:21.571024 | orchestrator | TASK [Waiting for Keystone public port to be UP] ******************************* 2025-05-31 21:07:21.571035 | orchestrator | Saturday 31 May 2025 21:06:57 +0000 (0:00:00.770) 0:00:01.220 ********** 2025-05-31 21:07:21.571045 | orchestrator | ok: [testbed-node-0] 2025-05-31 21:07:21.571056 | orchestrator | ok: [testbed-node-1] 2025-05-31 21:07:21.571067 | orchestrator | ok: [testbed-node-2] 2025-05-31 21:07:21.571077 | orchestrator | 2025-05-31 21:07:21.571088 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-31 21:07:21.571099 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-31 21:07:21.571110 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-31 21:07:21.571121 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-31 21:07:21.571132 | orchestrator | 2025-05-31 21:07:21.571143 | orchestrator | 2025-05-31 21:07:21.571153 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-31 21:07:21.571164 | orchestrator | Saturday 31 May 2025 21:06:58 +0000 (0:00:00.715) 0:00:01.935 ********** 2025-05-31 21:07:21.571175 | orchestrator | =============================================================================== 2025-05-31 21:07:21.571185 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.77s 2025-05-31 
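The short play above only groups hosts by configuration and then blocks until Keystone's public port answers; Ansible's wait_for module reduces to a timed TCP connect. A minimal Python rendition, using the endpoint details visible in the keystone items below (port 5000 behind api.testbed.osism.xyz):

    # TCP-level "wait_for": retry connecting until the port accepts
    # connections or the deadline passes.
    import socket
    import time

    def wait_for_port(host: str, port: int, timeout: float = 300.0):
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            try:
                with socket.create_connection((host, port), timeout=5):
                    return
            except OSError:
                time.sleep(1)
        raise TimeoutError(f"{host}:{port} did not come up")

    wait_for_port("api.testbed.osism.xyz", 5000)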
21:07:21.571196 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.72s 2025-05-31 21:07:21.571207 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.28s 2025-05-31 21:07:21.571217 | orchestrator | 2025-05-31 21:07:21.571228 | orchestrator | 2025-05-31 21:07:21.571239 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-31 21:07:21.571249 | orchestrator | 2025-05-31 21:07:21.571260 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-31 21:07:21.571271 | orchestrator | Saturday 31 May 2025 21:04:36 +0000 (0:00:00.187) 0:00:00.187 ********** 2025-05-31 21:07:21.571282 | orchestrator | ok: [testbed-node-0] 2025-05-31 21:07:21.571292 | orchestrator | ok: [testbed-node-1] 2025-05-31 21:07:21.571303 | orchestrator | ok: [testbed-node-2] 2025-05-31 21:07:21.571313 | orchestrator | 2025-05-31 21:07:21.571324 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-31 21:07:21.571335 | orchestrator | Saturday 31 May 2025 21:04:37 +0000 (0:00:00.246) 0:00:00.434 ********** 2025-05-31 21:07:21.571345 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-05-31 21:07:21.571361 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-05-31 21:07:21.571372 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-05-31 21:07:21.571383 | orchestrator | 2025-05-31 21:07:21.571393 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2025-05-31 21:07:21.571404 | orchestrator | 2025-05-31 21:07:21.571431 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-05-31 21:07:21.571442 | orchestrator | Saturday 31 May 2025 21:04:37 +0000 (0:00:00.330) 0:00:00.764 ********** 2025-05-31 21:07:21.571453 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 21:07:21.571464 | orchestrator | 2025-05-31 21:07:21.571475 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2025-05-31 21:07:21.571486 | orchestrator | Saturday 31 May 2025 21:04:37 +0000 (0:00:00.445) 0:00:01.210 ********** 2025-05-31 21:07:21.571503 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-31 21:07:21.571525 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': 
{'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-31 21:07:21.571539 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-31 21:07:21.571556 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-31 21:07:21.571578 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-31 21:07:21.571590 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-31 21:07:21.571608 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-31 21:07:21.571620 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-31 21:07:21.571631 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-31 21:07:21.571643 | orchestrator | 2025-05-31 21:07:21.571654 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2025-05-31 21:07:21.571665 | orchestrator | Saturday 31 May 2025 21:04:39 +0000 (0:00:01.598) 0:00:02.808 ********** 2025-05-31 21:07:21.571676 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml) 2025-05-31 21:07:21.571687 | orchestrator | 2025-05-31 21:07:21.571698 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2025-05-31 21:07:21.571709 | orchestrator | Saturday 31 May 2025 21:04:40 +0000 (0:00:00.753) 0:00:03.562 ********** 2025-05-31 21:07:21.571719 | orchestrator | ok: [testbed-node-0] 2025-05-31 21:07:21.571730 | orchestrator | ok: [testbed-node-1] 2025-05-31 21:07:21.571741 | orchestrator | ok: [testbed-node-2] 2025-05-31 21:07:21.571751 | orchestrator | 2025-05-31 21:07:21.571762 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2025-05-31 21:07:21.571773 | orchestrator | Saturday 31 May 2025 21:04:40 +0000 (0:00:00.416) 0:00:03.978 ********** 2025-05-31 
21:07:21.571783 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-31 21:07:21.571794 | orchestrator | 2025-05-31 21:07:21.571805 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-05-31 21:07:21.571820 | orchestrator | Saturday 31 May 2025 21:04:41 +0000 (0:00:00.630) 0:00:04.608 ********** 2025-05-31 21:07:21.571832 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 21:07:21.571850 | orchestrator | 2025-05-31 21:07:21.571885 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2025-05-31 21:07:21.571897 | orchestrator | Saturday 31 May 2025 21:04:41 +0000 (0:00:00.439) 0:00:05.048 ********** 2025-05-31 21:07:21.571909 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-31 21:07:21.571921 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-31 21:07:21.571934 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': 
{'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-31 21:07:21.571946 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-31 21:07:21.571969 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-31 21:07:21.571989 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-31 21:07:21.572000 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-31 21:07:21.572011 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-31 21:07:21.572023 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-31 21:07:21.572034 | orchestrator | 2025-05-31 21:07:21.572045 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2025-05-31 21:07:21.572056 | orchestrator | Saturday 31 May 2025 21:04:45 +0000 (0:00:03.262) 0:00:08.310 ********** 2025-05-31 21:07:21.572072 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-05-31 21:07:21.572104 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-31 21:07:21.572116 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-31 21:07:21.572127 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:07:21.572139 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 
'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-05-31 21:07:21.572151 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-31 21:07:21.572163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-31 21:07:21.572175 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:07:21.572198 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-05-31 21:07:21.572217 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-31 21:07:21.572228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-31 21:07:21.572240 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:07:21.572251 | orchestrator | 2025-05-31 21:07:21.572262 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2025-05-31 21:07:21.572273 | orchestrator | Saturday 31 May 2025 21:04:45 +0000 (0:00:00.529) 0:00:08.840 ********** 2025-05-31 21:07:21.572285 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-05-31 21:07:21.572297 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-31 21:07:21.572319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-31 21:07:21.572331 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:07:21 | INFO  | Wait 1 second(s) until the next check 2025-05-31 21:07:21.572349 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-05-31 21:07:21.572374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-31 21:07:21.572385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-31 21:07:21.572396 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:07:21.572408 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http',
'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-05-31 21:07:21.572426 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-31 21:07:21.572450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-31 21:07:21.572462 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:07:21.572473 | orchestrator | 2025-05-31 21:07:21.572484 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2025-05-31 21:07:21.572495 | orchestrator | Saturday 31 May 2025 21:04:46 +0000 (0:00:00.780) 0:00:09.621 ********** 2025-05-31 21:07:21.572507 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-31 21:07:21.572520 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-31 21:07:21.572532 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-31 21:07:21.572561 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-31 21:07:21.572574 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-31 21:07:21.572585 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-31 21:07:21.572597 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-31 21:07:21.572608 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-31 21:07:21.572626 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-31 21:07:21.572637 | orchestrator | 2025-05-31 21:07:21.572648 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2025-05-31 21:07:21.572659 | orchestrator | Saturday 31 May 2025 21:04:49 +0000 (0:00:03.570) 0:00:13.191 ********** 2025-05-31 21:07:21.572681 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-31 21:07:21.572694 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-31 21:07:21.572706 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-31 21:07:21.572718 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-31 21:07:21.572736 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-31 21:07:21.572758 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-31 21:07:21.572770 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-31 21:07:21.572781 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-31 21:07:21.572792 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-31 21:07:21.572803 | orchestrator | 2025-05-31 21:07:21.572814 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2025-05-31 21:07:21.572826 | orchestrator | Saturday 31 May 2025 21:04:55 +0000 (0:00:05.144) 0:00:18.336 ********** 2025-05-31 21:07:21.572844 | orchestrator | changed: [testbed-node-0] 2025-05-31 21:07:21.572902 | orchestrator | changed: [testbed-node-1] 2025-05-31 21:07:21.572915 | orchestrator | changed: [testbed-node-2] 2025-05-31 21:07:21.572926 | orchestrator | 2025-05-31 21:07:21.572938 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2025-05-31 21:07:21.572949 | orchestrator | Saturday 31 May 2025 21:04:56 +0000 (0:00:01.427) 0:00:19.763 ********** 2025-05-31 21:07:21.572960 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:07:21.572972 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:07:21.572983 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:07:21.572994 | orchestrator | 2025-05-31 21:07:21.573005 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2025-05-31 21:07:21.573016 | orchestrator | Saturday 31 May 2025 21:04:57 +0000 (0:00:00.691) 0:00:20.454 ********** 2025-05-31 21:07:21.573027 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:07:21.573038 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:07:21.573049 | orchestrator | 
skipping: [testbed-node-2] 2025-05-31 21:07:21.573060 | orchestrator | 2025-05-31 21:07:21.573071 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2025-05-31 21:07:21.573083 | orchestrator | Saturday 31 May 2025 21:04:57 +0000 (0:00:00.469) 0:00:20.924 ********** 2025-05-31 21:07:21.573094 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:07:21.573105 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:07:21.573116 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:07:21.573126 | orchestrator | 2025-05-31 21:07:21.573138 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2025-05-31 21:07:21.573149 | orchestrator | Saturday 31 May 2025 21:04:57 +0000 (0:00:00.286) 0:00:21.211 ********** 2025-05-31 21:07:21.573174 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-31 21:07:21.573187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-31 21:07:21.573200 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 
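
[Annotation] The item dicts echoed by these loop tasks are kolla-ansible's per-service container definitions; the 'healthcheck' block corresponds to Docker's native healthcheck options (values in seconds, with 'CMD-SHELL' meaning the test runs through a shell). As a minimal illustrative sketch of that correspondence only — the service dict is copied from the log above, but the helper healthcheck_flags and the flag rendering are assumptions for illustration, not kolla-ansible's actual code:

    # Illustrative sketch: how a 'healthcheck' dict like the ones logged above
    # would map onto `docker run` healthcheck flags. Not kolla-ansible's real code.
    service = {
        'container_name': 'keystone',
        'image': 'registry.osism.tech/kolla/keystone:2024.2',
        'healthcheck': {
            'interval': '30',      # seconds between checks
            'retries': '3',        # consecutive failures before 'unhealthy'
            'start_period': '5',   # grace period after container start
            'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'],
            'timeout': '30',       # per-check timeout
        },
    }

    def healthcheck_flags(hc):
        # 'CMD-SHELL' means the second list element is executed via the shell.
        return [
            f"--health-interval={hc['interval']}s",
            f"--health-retries={hc['retries']}",
            f"--health-start-period={hc['start_period']}s",
            f"--health-timeout={hc['timeout']}s",
            f"--health-cmd={hc['test'][1]}",
        ]

    print(' '.join(healthcheck_flags(service['healthcheck'])))

This also explains why the test URL differs per host in the log (192.168.16.10/.11/.12): each node's healthcheck_curl targets that node's own API address rather than the HAProxy VIP, so a node can be marked unhealthy independently of the load balancer.
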
2025-05-31 21:07:21.573226 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-31 21:07:21.573239 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-31 21:07:21.573256 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-31 21:07:21.573275 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-31 21:07:21.573287 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-31 21:07:21.573299 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-31 21:07:21.573317 | orchestrator | 2025-05-31 21:07:21.573427 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-05-31 21:07:21.573438 | orchestrator | Saturday 31 May 2025 21:05:00 +0000 (0:00:02.403) 0:00:23.614 ********** 2025-05-31 21:07:21.573448 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:07:21.573458 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:07:21.573468 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:07:21.573478 | orchestrator | 2025-05-31 21:07:21.573488 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2025-05-31 21:07:21.573498 | orchestrator | Saturday 31 May 2025 21:05:00 +0000 (0:00:00.280) 0:00:23.895 ********** 2025-05-31 21:07:21.573508 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-05-31 21:07:21.573518 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-05-31 21:07:21.573528 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-05-31 21:07:21.573538 | orchestrator | 2025-05-31 21:07:21.573548 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2025-05-31 21:07:21.573558 | orchestrator | Saturday 31 May 2025 21:05:02 +0000 (0:00:01.946) 0:00:25.841 ********** 2025-05-31 21:07:21.573568 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-31 21:07:21.573578 | orchestrator | 2025-05-31 21:07:21.573588 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2025-05-31 21:07:21.573598 | orchestrator | Saturday 31 May 2025 21:05:03 +0000 (0:00:00.880) 0:00:26.721 ********** 2025-05-31 21:07:21.573608 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:07:21.573618 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:07:21.573627 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:07:21.573637 | orchestrator | 2025-05-31 21:07:21.573647 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2025-05-31 21:07:21.573657 | orchestrator | Saturday 31 May 2025 21:05:03 +0000 (0:00:00.517) 0:00:27.239 ********** 2025-05-31 21:07:21.573667 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-05-31 21:07:21.573677 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-31 21:07:21.573687 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-05-31 21:07:21.573697 | orchestrator | 2025-05-31 21:07:21.573706 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2025-05-31 21:07:21.573716 | orchestrator | Saturday 31 May 2025 21:05:04 +0000 
(0:00:00.994) 0:00:28.233 ********** 2025-05-31 21:07:21.573751 | orchestrator | ok: [testbed-node-0] 2025-05-31 21:07:21.573763 | orchestrator | ok: [testbed-node-1] 2025-05-31 21:07:21.573773 | orchestrator | ok: [testbed-node-2] 2025-05-31 21:07:21.573782 | orchestrator | 2025-05-31 21:07:21.573805 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2025-05-31 21:07:21.573825 | orchestrator | Saturday 31 May 2025 21:05:05 +0000 (0:00:00.313) 0:00:28.547 ********** 2025-05-31 21:07:21.573835 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-05-31 21:07:21.573845 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-05-31 21:07:21.573872 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-05-31 21:07:21.573889 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-05-31 21:07:21.573913 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-05-31 21:07:21.573923 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-05-31 21:07:21.573933 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-05-31 21:07:21.573942 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-05-31 21:07:21.573952 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-05-31 21:07:21.573961 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-05-31 21:07:21.573971 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-05-31 21:07:21.573980 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-05-31 21:07:21.573990 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-05-31 21:07:21.573999 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-05-31 21:07:21.574009 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-05-31 21:07:21.574044 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-05-31 21:07:21.574058 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-05-31 21:07:21.574074 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-05-31 21:07:21.574112 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-05-31 21:07:21.574129 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-05-31 21:07:21.574145 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-05-31 21:07:21.574162 | orchestrator | 2025-05-31 21:07:21.574172 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2025-05-31 21:07:21.574181 | orchestrator | Saturday 31 May 2025 
21:05:13 +0000 (0:00:08.579) 0:00:37.127 ********** 2025-05-31 21:07:21.574191 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-05-31 21:07:21.574200 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-05-31 21:07:21.574209 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-05-31 21:07:21.574219 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-05-31 21:07:21.574228 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-05-31 21:07:21.574238 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-05-31 21:07:21.574247 | orchestrator | 2025-05-31 21:07:21.574257 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2025-05-31 21:07:21.574266 | orchestrator | Saturday 31 May 2025 21:05:16 +0000 (0:00:02.553) 0:00:39.681 ********** 2025-05-31 21:07:21.574277 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-31 21:07:21.574306 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-05-31 21:07:21.574408 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
2025-05-31 21:07:21.574257 | orchestrator | TASK [keystone : Check keystone containers] ************************************
2025-05-31 21:07:21.574266 | orchestrator | Saturday 31 May 2025 21:05:16 +0000 (0:00:02.553) 0:00:39.681 **********
2025-05-31 21:07:21.574277 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-05-31 21:07:21.574306 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-05-31 21:07:21.574408 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-05-31 21:07:21.574432 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-05-31 21:07:21.574443 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-05-31 21:07:21.574454 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-05-31 21:07:21.574471 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-05-31 21:07:21.574493 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-05-31 21:07:21.574504 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-05-31 21:07:21.574514 | orchestrator |
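Each container entry above carries its probe under 'healthcheck': kolla runs the listed test (for keystone, healthcheck_curl against the node's internal API address on port 5000) every 'interval' seconds and allows 'retries' failures before the container is marked unhealthy. A rough Python equivalent of such a probe, assuming healthcheck_curl simply treats a completed HTTP request as success (the real helper is a shell script inside the image):

    # Approximation of the healthcheck_curl probe configured above.
    import sys
    import requests

    def healthcheck_curl(url: str, timeout: float = 30.0) -> int:
        try:
            requests.get(url, timeout=timeout)
            return 0  # reachable -> healthy
        except requests.RequestException:
            return 1  # connection error or timeout -> unhealthy

    if __name__ == "__main__":
        sys.exit(healthcheck_curl("http://192.168.16.10:5000"))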
2025-05-31 21:07:21.574524 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-05-31 21:07:21.574534 | orchestrator | Saturday 31 May 2025 21:05:18 +0000 (0:00:02.268) 0:00:41.949 **********
2025-05-31 21:07:21.574544 | orchestrator | skipping: [testbed-node-0]
2025-05-31 21:07:21.574554 | orchestrator | skipping: [testbed-node-1]
2025-05-31 21:07:21.574563 | orchestrator | skipping: [testbed-node-2]
2025-05-31 21:07:21.574573 | orchestrator |
2025-05-31 21:07:21.574582 | orchestrator | TASK [keystone : Creating keystone database] ***********************************
2025-05-31 21:07:21.574592 | orchestrator | Saturday 31 May 2025 21:05:18 +0000 (0:00:00.269) 0:00:42.219 **********
2025-05-31 21:07:21.574601 | orchestrator | changed: [testbed-node-0]
2025-05-31 21:07:21.574611 | orchestrator |
2025-05-31 21:07:21.574620 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ******
2025-05-31 21:07:21.574630 | orchestrator | Saturday 31 May 2025 21:05:20 +0000 (0:00:02.057) 0:00:44.276 **********
2025-05-31 21:07:21.574640 | orchestrator | changed: [testbed-node-0]
2025-05-31 21:07:21.574649 | orchestrator |
2025-05-31 21:07:21.574659 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] **********
2025-05-31 21:07:21.574668 | orchestrator | Saturday 31 May 2025 21:05:23 +0000 (0:00:02.602) 0:00:46.879 **********
2025-05-31 21:07:21.574678 | orchestrator | ok: [testbed-node-0]
2025-05-31 21:07:21.574687 | orchestrator | ok: [testbed-node-2]
2025-05-31 21:07:21.574697 | orchestrator | ok: [testbed-node-1]
2025-05-31 21:07:21.574707 | orchestrator |
2025-05-31 21:07:21.574716 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] *****************
2025-05-31 21:07:21.574726 | orchestrator | Saturday 31 May 2025 21:05:24 +0000 (0:00:00.818) 0:00:47.697 **********
2025-05-31 21:07:21.574735 | orchestrator | ok: [testbed-node-0]
2025-05-31 21:07:21.574745 | orchestrator | ok: [testbed-node-1]
2025-05-31 21:07:21.574760 | orchestrator | ok: [testbed-node-2]
2025-05-31 21:07:21.574769 | orchestrator |
2025-05-31 21:07:21.574779 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] ***
2025-05-31 21:07:21.574789 | orchestrator | Saturday 31 May 2025 21:05:24 +0000 (0:00:00.329) 0:00:48.027 **********
2025-05-31 21:07:21.574798 | orchestrator | skipping: [testbed-node-0]
2025-05-31 21:07:21.574808 | orchestrator | skipping: [testbed-node-1]
2025-05-31 21:07:21.574818 | orchestrator | skipping: [testbed-node-2]
2025-05-31 21:07:21.574827 | orchestrator |
2025-05-31 21:07:21.574836 | orchestrator | TASK [keystone : Running Keystone bootstrap container] *************************
2025-05-31 21:07:21.574846 | orchestrator | Saturday 31 May 2025 21:05:25 +0000 (0:00:00.329) 0:00:48.357 **********
2025-05-31 21:07:21.574873 | orchestrator | changed: [testbed-node-0]
2025-05-31 21:07:21.574883 | orchestrator |
2025-05-31 21:07:21.574892 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ******************
2025-05-31 21:07:21.574902 | orchestrator | Saturday 31 May 2025 21:05:38 +0000 (0:00:12.946) 0:01:01.304 **********
2025-05-31 21:07:21.574912 | orchestrator | changed: [testbed-node-0]
2025-05-31 21:07:21.574921 | orchestrator |
2025-05-31 21:07:21.574931 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2025-05-31 21:07:21.574940 | orchestrator | Saturday 31 May 2025 21:05:47 +0000 (0:00:09.115) 0:01:10.419 **********
2025-05-31 21:07:21.574950 | orchestrator |
2025-05-31 21:07:21.574959 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2025-05-31 21:07:21.574969 | orchestrator | Saturday 31 May 2025 21:05:47 +0000 (0:00:00.251) 0:01:10.670 **********
2025-05-31 21:07:21.574978 | orchestrator |
2025-05-31 21:07:21.574988 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2025-05-31 21:07:21.574997 | orchestrator | Saturday 31 May 2025 21:05:47 +0000 (0:00:00.072) 0:01:10.743 **********
2025-05-31 21:07:21.575007 | orchestrator |
2025-05-31 21:07:21.575016 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ********************
2025-05-31 21:07:21.575026 | orchestrator | Saturday 31 May 2025 21:05:47 +0000 (0:00:00.064) 0:01:10.807 **********
2025-05-31 21:07:21.575035 | orchestrator | changed: [testbed-node-0]
2025-05-31 21:07:21.575045 | orchestrator | changed: [testbed-node-1]
2025-05-31 21:07:21.575054 | orchestrator | changed: [testbed-node-2]
2025-05-31 21:07:21.575064 | orchestrator |
2025-05-31 21:07:21.575074 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] *****************
2025-05-31 21:07:21.575084 | orchestrator | Saturday 31 May 2025 21:06:15 +0000 (0:00:27.739) 0:01:38.546 **********
2025-05-31 21:07:21.575093 | orchestrator | changed: [testbed-node-1]
2025-05-31 21:07:21.575103 | orchestrator | changed: [testbed-node-0]
2025-05-31 21:07:21.575117 | orchestrator | changed: [testbed-node-2]
2025-05-31 21:07:21.575126 | orchestrator |
2025-05-31 21:07:21.575136 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************
2025-05-31 21:07:21.575151 | orchestrator | Saturday 31 May 2025 21:06:25 +0000 (0:00:10.356) 0:01:48.903 **********
2025-05-31 21:07:21.575161 | orchestrator | changed: [testbed-node-0]
2025-05-31 21:07:21.575171 | orchestrator | changed: [testbed-node-1]
2025-05-31 21:07:21.575180 | orchestrator | changed: [testbed-node-2]
2025-05-31 21:07:21.575190 | orchestrator |
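The Keystone bootstrap container above wraps keystone-manage bootstrap, which seeds the initial admin identity and the keystone endpoint itself. A hedged sketch of the underlying call, with the URLs taken from the endpoints registered later in this log and the password redacted (the exact flags used by this job are not visible in the output):

    # Illustrative wrapper around keystone-manage bootstrap.
    import subprocess

    subprocess.run(
        [
            "keystone-manage", "bootstrap",
            "--bootstrap-password", "REDACTED",
            "--bootstrap-region-id", "RegionOne",
            "--bootstrap-admin-url", "https://api-int.testbed.osism.xyz:5000",
            "--bootstrap-internal-url", "https://api-int.testbed.osism.xyz:5000",
            "--bootstrap-public-url", "https://api.testbed.osism.xyz:5000",
        ],
        check=True,
    )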
2025-05-31 21:07:21.575199 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-05-31 21:07:21.575209 | orchestrator | Saturday 31 May 2025 21:06:37 +0000 (0:00:11.719) 0:02:00.623 **********
2025-05-31 21:07:21.575219 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-31 21:07:21.575229 | orchestrator |
2025-05-31 21:07:21.575238 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] ***********************
2025-05-31 21:07:21.575248 | orchestrator | Saturday 31 May 2025 21:06:38 +0000 (0:00:00.730) 0:02:01.353 **********
2025-05-31 21:07:21.575258 | orchestrator | ok: [testbed-node-1]
2025-05-31 21:07:21.575267 | orchestrator | ok: [testbed-node-0]
2025-05-31 21:07:21.575277 | orchestrator | ok: [testbed-node-2]
2025-05-31 21:07:21.575292 | orchestrator |
2025-05-31 21:07:21.575302 | orchestrator | TASK [keystone : Run key distribution] *****************************************
2025-05-31 21:07:21.575312 | orchestrator | Saturday 31 May 2025 21:06:38 +0000 (0:00:00.732) 0:02:02.086 **********
2025-05-31 21:07:21.575321 | orchestrator | changed: [testbed-node-0]
2025-05-31 21:07:21.575331 | orchestrator |
2025-05-31 21:07:21.575340 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] ****
2025-05-31 21:07:21.575350 | orchestrator | Saturday 31 May 2025 21:06:40 +0000 (0:00:01.700) 0:02:03.787 **********
2025-05-31 21:07:21.575359 | orchestrator | changed: [testbed-node-0] => (item=RegionOne)
2025-05-31 21:07:21.575369 | orchestrator |
2025-05-31 21:07:21.575379 | orchestrator | TASK [service-ks-register : keystone | Creating services] **********************
2025-05-31 21:07:21.575388 | orchestrator | Saturday 31 May 2025 21:06:49 +0000 (0:00:09.107) 0:02:12.895 **********
2025-05-31 21:07:21.575398 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity))
2025-05-31 21:07:21.575407 | orchestrator |
2025-05-31 21:07:21.575417 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] *********************
2025-05-31 21:07:21.575426 | orchestrator | Saturday 31 May 2025 21:07:10 +0000 (0:00:20.789) 0:02:33.684 **********
2025-05-31 21:07:21.575436 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal)
2025-05-31 21:07:21.575445 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public)
2025-05-31 21:07:21.575455 | orchestrator |
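The service-ks-register tasks above register the identity service and its two endpoints in the catalog. Expressed with openstacksdk instead of the role's Ansible modules, the same registration looks roughly like this (the cloud name and connection details are placeholders):

    # Sketch of the catalog registration performed by service-ks-register.
    import openstack

    conn = openstack.connect(cloud="testbed")  # hypothetical clouds.yaml entry
    service = conn.identity.create_service(name="keystone", type="identity")
    for interface, url in [
        ("internal", "https://api-int.testbed.osism.xyz:5000"),
        ("public", "https://api.testbed.osism.xyz:5000"),
    ]:
        conn.identity.create_endpoint(
            service_id=service.id,
            interface=interface,
            url=url,
            region_id="RegionOne",
        )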
2025-05-31 21:07:21.575464 | orchestrator | TASK [service-ks-register : keystone | Creating projects] **********************
2025-05-31 21:07:21.575474 | orchestrator | Saturday 31 May 2025 21:07:16 +0000 (0:00:05.732) 0:02:39.417 **********
2025-05-31 21:07:21.575483 | orchestrator | skipping: [testbed-node-0]
2025-05-31 21:07:21.575493 | orchestrator |
2025-05-31 21:07:21.575503 | orchestrator | TASK [service-ks-register : keystone | Creating users] *************************
2025-05-31 21:07:21.575512 | orchestrator | Saturday 31 May 2025 21:07:16 +0000 (0:00:00.490) 0:02:39.908 **********
2025-05-31 21:07:21.575522 | orchestrator | skipping: [testbed-node-0]
2025-05-31 21:07:21.575531 | orchestrator |
2025-05-31 21:07:21.575541 | orchestrator | TASK [service-ks-register : keystone | Creating roles] *************************
2025-05-31 21:07:21.575551 | orchestrator | Saturday 31 May 2025 21:07:16 +0000 (0:00:00.142) 0:02:40.050 **********
2025-05-31 21:07:21.575560 | orchestrator | skipping: [testbed-node-0]
2025-05-31 21:07:21.575570 | orchestrator |
2025-05-31 21:07:21.575580 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ********************
2025-05-31 21:07:21.575589 | orchestrator | Saturday 31 May 2025 21:07:16 +0000 (0:00:00.127) 0:02:40.177 **********
2025-05-31 21:07:21.575599 | orchestrator | skipping: [testbed-node-0]
2025-05-31 21:07:21.575608 | orchestrator |
2025-05-31 21:07:21.575618 | orchestrator | TASK [keystone : Creating default user role] ***********************************
2025-05-31 21:07:21.575628 | orchestrator | Saturday 31 May 2025 21:07:17 +0000 (0:00:00.331) 0:02:40.509 **********
2025-05-31 21:07:21.575637 | orchestrator | ok: [testbed-node-0]
2025-05-31 21:07:21.575647 | orchestrator |
2025-05-31 21:07:21.575656 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-05-31 21:07:21.575666 | orchestrator | Saturday 31 May 2025 21:07:19 +0000 (0:00:02.739) 0:02:43.248 **********
2025-05-31 21:07:21.575675 | orchestrator | skipping: [testbed-node-0]
2025-05-31 21:07:21.575685 | orchestrator | skipping: [testbed-node-1]
2025-05-31 21:07:21.575694 | orchestrator | skipping: [testbed-node-2]
2025-05-31 21:07:21.575704 | orchestrator |
2025-05-31 21:07:21.575713 | orchestrator | PLAY RECAP *********************************************************************
2025-05-31 21:07:21.575723 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0
2025-05-31 21:07:21.575734 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2025-05-31 21:07:21.575743 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2025-05-31 21:07:21.575758 | orchestrator |
2025-05-31 21:07:21.575768 | orchestrator |
2025-05-31 21:07:21.575778 | orchestrator | TASKS RECAP ********************************************************************
2025-05-31 21:07:21.575787 | orchestrator | Saturday 31 May 2025 21:07:20 +0000 (0:00:00.640) 0:02:43.889 **********
2025-05-31 21:07:21.575797 | orchestrator | ===============================================================================
2025-05-31 21:07:21.575806 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 27.74s
2025-05-31 21:07:21.575821 | orchestrator | service-ks-register : keystone | Creating services --------------------- 20.79s
2025-05-31 21:07:21.575830 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 12.95s
2025-05-31 21:07:21.575845 | orchestrator | keystone : Restart keystone container ---------------------------------- 11.72s
2025-05-31 21:07:21.575867 | orchestrator | keystone : Restart keystone-fernet container --------------------------- 10.36s
2025-05-31 21:07:21.575878 | orchestrator | keystone : Running Keystone fernet bootstrap container ------------------ 9.12s
2025-05-31 21:07:21.575887 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint ---- 9.11s
2025-05-31 21:07:21.575897 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 8.58s
2025-05-31 21:07:21.575906 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 5.74s
2025-05-31 21:07:21.575916 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.14s
2025-05-31 21:07:21.575925 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.57s
2025-05-31 21:07:21.575934 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.26s
2025-05-31 21:07:21.575944 | orchestrator | keystone : Creating default user role ----------------------------------- 2.74s
2025-05-31 21:07:21.575953 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.60s
2025-05-31 21:07:21.575962 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.55s
2025-05-31 21:07:21.575972 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.40s
2025-05-31 21:07:21.575981 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.27s
2025-05-31 21:07:21.575990 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.06s
2025-05-31 21:07:21.576000 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.95s
2025-05-31 21:07:21.576009 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.70s
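From here the job falls back into the manager's wait loop: it polls the state of each queued task roughly every three seconds until everything reports SUCCESS. A minimal sketch of that loop, with get_task_state() standing in for the real task-state lookup:

    # Simplified version of the task wait loop visible below.
    import time

    def wait_for_tasks(task_ids, get_task_state, interval=1):
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):  # sorted() copies, so discard() is safe
                state = get_task_state(task_id)
                print(f"Task {task_id} is in state {state}")
                if state == "SUCCESS":
                    pending.discard(task_id)
            if pending:
                print(f"Wait {interval} second(s) until the next check")
                time.sleep(interval)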
2025-05-31 21:07:24.601376 | orchestrator | 2025-05-31 21:07:24 | INFO  | Task fdb100a2-c263-4307-b911-c35c8d55a0f1 is in state STARTED
2025-05-31 21:07:24.601468 | orchestrator | 2025-05-31 21:07:24 | INFO  | Task fd270b86-5dd6-42f0-806a-8c7856d3ebeb is in state STARTED
2025-05-31 21:07:24.601483 | orchestrator | 2025-05-31 21:07:24 | INFO  | Task 8b1035ba-f6d8-4145-a2c3-c61acd0089bf is in state STARTED
2025-05-31 21:07:24.602008 | orchestrator | 2025-05-31 21:07:24 | INFO  | Task 63184a78-8d07-4380-b1ad-fd2c2380ef94 is in state STARTED
2025-05-31 21:07:24.602758 | orchestrator | 2025-05-31 21:07:24 | INFO  | Task 2b540658-29be-4d3e-af99-fcd6eb6c289c is in state STARTED
2025-05-31 21:07:24.602782 | orchestrator | 2025-05-31 21:07:24 | INFO  | Wait 1 second(s) until the next check
2025-05-31 21:07:36.707989 | orchestrator | 2025-05-31 21:07:36 | INFO  | Task fdb100a2-c263-4307-b911-c35c8d55a0f1 is in state STARTED
2025-05-31 21:07:36.708106 | orchestrator | 2025-05-31 21:07:36 | INFO  | Task fd270b86-5dd6-42f0-806a-8c7856d3ebeb is in state STARTED
2025-05-31 21:07:36.711218 | orchestrator | 2025-05-31 21:07:36 | INFO  | Task 8b1035ba-f6d8-4145-a2c3-c61acd0089bf is in state STARTED
2025-05-31 21:07:36.713262 | orchestrator | 2025-05-31 21:07:36 | INFO  | Task 63184a78-8d07-4380-b1ad-fd2c2380ef94 is in state STARTED
2025-05-31 21:07:36.715660 | orchestrator | 2025-05-31 21:07:36 | INFO  | Task 2b540658-29be-4d3e-af99-fcd6eb6c289c is in state SUCCESS
2025-05-31 21:07:36.715687 | orchestrator | 2025-05-31 21:07:36 | INFO  | Wait 1 second(s) until the next check
2025-05-31 21:07:39.750425 | orchestrator | 2025-05-31 21:07:39 | INFO  | Task fdb100a2-c263-4307-b911-c35c8d55a0f1 is in state STARTED
2025-05-31 21:07:39.753411 | orchestrator | 2025-05-31 21:07:39 | INFO  | Task fd270b86-5dd6-42f0-806a-8c7856d3ebeb is in state STARTED
2025-05-31 21:07:39.755520 | orchestrator | 2025-05-31 21:07:39 | INFO  | Task 8b1035ba-f6d8-4145-a2c3-c61acd0089bf is in state STARTED
2025-05-31 21:07:39.758913 | orchestrator | 2025-05-31 21:07:39 | INFO  | Task 838d4119-69f0-47d6-82bf-363acc7214f6 is in state STARTED
2025-05-31 21:07:39.760957 | orchestrator | 2025-05-31 21:07:39 | INFO  | Task 63184a78-8d07-4380-b1ad-fd2c2380ef94 is in state STARTED
2025-05-31 21:07:39.761028 | orchestrator | 2025-05-31 21:07:39 | INFO  | Wait 1 second(s) until the next check
2025-05-31 21:08:16.175227 | orchestrator | 2025-05-31 21:08:16 | INFO  | Task fdb100a2-c263-4307-b911-c35c8d55a0f1 is in state STARTED
2025-05-31 21:08:16.175649 | orchestrator | 2025-05-31 21:08:16 | INFO  | Task fd270b86-5dd6-42f0-806a-8c7856d3ebeb is in state STARTED
2025-05-31 21:08:16.178305 | orchestrator | 2025-05-31 21:08:16 | INFO  | Task 8b1035ba-f6d8-4145-a2c3-c61acd0089bf is in state STARTED
2025-05-31 21:08:16.178650 | orchestrator | 2025-05-31 21:08:16 | INFO  | Task 838d4119-69f0-47d6-82bf-363acc7214f6 is in state STARTED
2025-05-31 21:08:16.179162 | orchestrator | 2025-05-31 21:08:16 | INFO  | Task 63184a78-8d07-4380-b1ad-fd2c2380ef94 is in state SUCCESS
2025-05-31 21:08:16.179782 | orchestrator |
2025-05-31 21:08:16.179811 | orchestrator |
2025-05-31 21:08:16.179824 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-31 21:08:16.179837 | orchestrator |
2025-05-31 21:08:16.179848 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-31 21:08:16.179887 | orchestrator | Saturday 31 May 2025 21:07:04 +0000 (0:00:00.307) 0:00:00.307 **********
2025-05-31 21:08:16.179908 | orchestrator | ok: [testbed-node-0]
2025-05-31 21:08:16.179924 | orchestrator | ok: [testbed-node-1]
2025-05-31 21:08:16.179935 | orchestrator | ok: [testbed-node-2]
2025-05-31 21:08:16.179945 | orchestrator | ok: [testbed-manager]
2025-05-31 21:08:16.179956 | orchestrator | ok: [testbed-node-3]
2025-05-31 21:08:16.179967 | orchestrator | ok: [testbed-node-4]
2025-05-31 21:08:16.179977 | orchestrator | ok: [testbed-node-5]
2025-05-31 21:08:16.179988 | orchestrator |
2025-05-31 21:08:16.179999 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-31 21:08:16.180010 | orchestrator | Saturday 31 May 2025 21:07:05 +0000 (0:00:01.420) 0:00:01.727 **********
2025-05-31 21:08:16.180020 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True)
2025-05-31 21:08:16.180032 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True)
2025-05-31 21:08:16.180042 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True)
2025-05-31 21:08:16.180054 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True)
2025-05-31 21:08:16.180084 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True)
2025-05-31 21:08:16.180096 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True)
2025-05-31 21:08:16.180107 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True)
2025-05-31 21:08:16.180118 | orchestrator |
2025-05-31 21:08:16.180129 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2025-05-31 21:08:16.180140 | orchestrator |
2025-05-31 21:08:16.180151 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************
2025-05-31 21:08:16.180162 | orchestrator | Saturday 31 May 2025 21:07:07 +0000 (0:00:01.532) 0:00:03.261 **********
2025-05-31 21:08:16.180173 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-31 21:08:16.180208 | orchestrator |
2025-05-31 21:08:16.180220 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] **********************
2025-05-31 21:08:16.180231 | orchestrator | Saturday 31 May 2025 21:07:08 +0000 (0:00:01.392) 0:00:04.653 **********
2025-05-31 21:08:16.180242 | orchestrator | changed: [testbed-node-0] => (item=swift (object-store))
2025-05-31 21:08:16.180253 | orchestrator |
2025-05-31 21:08:16.180264 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] *********************
2025-05-31 21:08:16.180274 | orchestrator | Saturday 31 May 2025 21:07:12 +0000 (0:00:03.745) 0:00:08.399 **********
2025-05-31 21:08:16.180285 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal)
2025-05-31 21:08:16.180298 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public)
2025-05-31 21:08:16.180309 | orchestrator |
2025-05-31 21:08:16.180319 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] **********************
2025-05-31 21:08:16.180330 | orchestrator | Saturday 31 May 2025 21:07:18 +0000 (0:00:05.933) 0:00:14.332 **********
2025-05-31 21:08:16.180340 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-05-31 21:08:16.180351 | orchestrator |
2025-05-31 21:08:16.180362 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] *************************
2025-05-31 21:08:16.180384 | orchestrator | Saturday 31 May 2025 21:07:21 +0000 (0:00:02.585) 0:00:16.917 **********
2025-05-31 21:08:16.180397 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-05-31 21:08:16.180409 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service)
2025-05-31 21:08:16.180421 | orchestrator |
2025-05-31 21:08:16.180435 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] *************************
2025-05-31 21:08:16.180448 | orchestrator | Saturday 31 May 2025 21:07:24 +0000 (0:00:03.518) 0:00:20.436 **********
2025-05-31 21:08:16.180461 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-05-31 21:08:16.180474 | orchestrator | changed: [testbed-node-0] => (item=ResellerAdmin)
2025-05-31 21:08:16.180486 | orchestrator |
2025-05-31 21:08:16.180499 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ********************
2025-05-31 21:08:16.180510 | orchestrator | Saturday 31 May 2025 21:07:31 +0000 (0:00:06.454) 0:00:26.890 **********
2025-05-31 21:08:16.180520 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service -> admin)
2025-05-31 21:08:16.180530 | orchestrator |
2025-05-31 21:08:16.180541 | orchestrator | PLAY RECAP *********************************************************************
2025-05-31 21:08:16.180552 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-31 21:08:16.180563 | orchestrator | testbed-node-0 : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-31 21:08:16.180574 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-31 21:08:16.180584 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-31 21:08:16.180595 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-31 21:08:16.180619 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-31 21:08:16.180630 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-31 21:08:16.180641 | orchestrator |
2025-05-31 21:08:16.180653 | orchestrator |
2025-05-31 21:08:16.180663 | orchestrator | TASKS RECAP ********************************************************************
2025-05-31 21:08:16.180681 | orchestrator | Saturday 31 May 2025 21:07:36 +0000 (0:00:05.038) 0:00:31.929 **********
2025-05-31 21:08:16.180692 | orchestrator | ===============================================================================
2025-05-31 21:08:16.180703 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.45s
2025-05-31 21:08:16.180714 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 5.93s
2025-05-31 21:08:16.180724 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 5.04s
2025-05-31 21:08:16.180735 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.75s
2025-05-31 21:08:16.180746 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.52s
2025-05-31 21:08:16.180756 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 2.59s
2025-05-31 21:08:16.180767 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.53s
2025-05-31 21:08:16.180778 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.42s
2025-05-31 21:08:16.180788 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.39s
2025-05-31 21:08:16.180799 | orchestrator |
2025-05-31 21:08:16.180810 | orchestrator |
2025-05-31 21:08:16.180820 | orchestrator | PLAY [Bootstrap ceph dashboard] ************************************************
2025-05-31 21:08:16.180831 | orchestrator |
2025-05-31 21:08:16.180842 | orchestrator | TASK [Disable the ceph dashboard] **********************************************
2025-05-31 21:08:16.180853 | orchestrator | Saturday 31 May 2025 21:06:56 +0000 (0:00:00.261) 0:00:00.261 **********
2025-05-31 21:08:16.180881 | orchestrator | changed: [testbed-manager]
2025-05-31 21:08:16.180892 | orchestrator |
2025-05-31 21:08:16.180902 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ******************************************
2025-05-31 21:08:16.180913 | orchestrator | Saturday 31 May 2025 21:06:59 +0000 (0:00:02.593) 0:00:02.855 **********
2025-05-31 21:08:16.180924 | orchestrator | changed: [testbed-manager]
2025-05-31 21:08:16.180934 | orchestrator |
2025-05-31 21:08:16.180945 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] ***********************************
2025-05-31 21:08:16.180956 | orchestrator | Saturday 31 May 2025 21:07:00 +0000 (0:00:01.061) 0:00:03.917 **********
2025-05-31 21:08:16.180966 | orchestrator | changed: [testbed-manager]
2025-05-31 21:08:16.180977 | orchestrator |
2025-05-31 21:08:16.180988 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ********************************
2025-05-31 21:08:16.180999 | orchestrator | Saturday 31 May 2025 21:07:01 +0000 (0:00:01.170) 0:00:05.087 **********
2025-05-31 21:08:16.181009 | orchestrator | changed: [testbed-manager]
2025-05-31 21:08:16.181020 | orchestrator |
2025-05-31 21:08:16.181031 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] ****************************
2025-05-31 21:08:16.181042 | orchestrator | Saturday 31 May 2025 21:07:02 +0000 (0:00:01.088) 0:00:06.176 **********
2025-05-31 21:08:16.181052 | orchestrator | changed: [testbed-manager]
2025-05-31 21:08:16.181063 | orchestrator |
2025-05-31 21:08:16.181074 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] **********************
2025-05-31 21:08:16.181085 | orchestrator | Saturday 31 May 2025 21:07:04 +0000 (0:00:01.169) 0:00:07.346 **********
2025-05-31 21:08:16.181095 | orchestrator | changed: [testbed-manager]
2025-05-31 21:08:16.181106 | orchestrator |
2025-05-31 21:08:16.181117 | orchestrator | TASK [Enable the ceph dashboard] ***********************************************
2025-05-31 21:08:16.181133 | orchestrator | Saturday 31 May 2025 21:07:05 +0000 (0:00:01.022) 0:00:08.368 **********
2025-05-31 21:08:16.181144 | orchestrator | changed: [testbed-manager]
2025-05-31 21:08:16.181155 | orchestrator |
2025-05-31 21:08:16.181166 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] *************************
2025-05-31 21:08:16.181177 | orchestrator | Saturday 31 May 2025 21:07:07 +0000 (0:00:02.114) 0:00:10.483 **********
2025-05-31 21:08:16.181187 | orchestrator | changed: [testbed-manager]
2025-05-31 21:08:16.181198 | orchestrator |
2025-05-31 21:08:16.181209 | orchestrator | TASK [Create admin user] *******************************************************
2025-05-31 21:08:16.181220 | orchestrator | Saturday 31 May 2025 21:07:08 +0000 (0:00:00.951) 0:00:11.434 **********
2025-05-31 21:08:16.181237 | orchestrator | changed: [testbed-manager]
2025-05-31 21:08:16.181248 | orchestrator |
2025-05-31 21:08:16.181259 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] ***********************
2025-05-31 21:08:16.181270 | orchestrator | Saturday 31 May 2025 21:07:50 +0000 (0:00:42.482) 0:00:53.917 **********
2025-05-31 21:08:16.181280 | orchestrator | skipping: [testbed-manager]
2025-05-31 21:08:16.181291 | orchestrator |
2025-05-31 21:08:16.181302 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2025-05-31 21:08:16.181312 | orchestrator |
2025-05-31 21:08:16.181323 | orchestrator | TASK [Restart ceph manager service] ********************************************
2025-05-31 21:08:16.181334 | orchestrator | Saturday 31 May 2025 21:07:50 +0000 (0:00:00.159) 0:00:54.076 **********
2025-05-31 21:08:16.181344 | orchestrator | changed: [testbed-node-0]
2025-05-31 21:08:16.181355 | orchestrator |
2025-05-31 21:08:16.181365 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2025-05-31 21:08:16.181376 | orchestrator |
2025-05-31 21:08:16.181387 | orchestrator | TASK [Restart ceph manager service] ********************************************
2025-05-31 21:08:16.181397 | orchestrator | Saturday 31 May 2025 21:07:52 +0000 (0:00:01.400) 0:00:55.477 **********
2025-05-31 21:08:16.181408 | orchestrator | changed: [testbed-node-1]
2025-05-31 21:08:16.181419 | orchestrator |
2025-05-31 21:08:16.181429 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2025-05-31 21:08:16.181440 | orchestrator |
2025-05-31 21:08:16.181451 | orchestrator | TASK [Restart ceph manager service] ********************************************
2025-05-31 21:08:16.181461 | orchestrator | Saturday 31 May 2025 21:08:03 +0000 (0:00:11.189) 0:01:06.666 **********
2025-05-31 21:08:16.181472 | orchestrator | changed: [testbed-node-2]
2025-05-31 21:08:16.181483 | orchestrator |
2025-05-31 21:08:16.181500 | orchestrator | PLAY RECAP *********************************************************************
2025-05-31 21:08:16.181511 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-31 21:08:16.181522 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-31 21:08:16.181533 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-31 21:08:16.181544 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-31 21:08:16.181555 | orchestrator |
2025-05-31 21:08:16.181565 | orchestrator |
2025-05-31 21:08:16.181576 | orchestrator |
2025-05-31 21:08:16.181587 | orchestrator | TASKS RECAP ********************************************************************
2025-05-31 21:08:16.181598 | orchestrator | Saturday 31 May 2025 21:08:14 +0000 (0:00:11.089) 0:01:17.756 **********
2025-05-31 21:08:16.181608 | orchestrator | ===============================================================================
2025-05-31 21:08:16.181619 | orchestrator | Create admin user ------------------------------------------------------ 42.48s
2025-05-31 21:08:16.181629 | orchestrator | Restart ceph manager service ------------------------------------------- 23.68s
2025-05-31 21:08:16.181640 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 2.59s
2025-05-31 21:08:16.181650 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.11s
2025-05-31 21:08:16.181661 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.17s
2025-05-31 21:08:16.181671 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.17s
2025-05-31 21:08:16.181682 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.09s
2025-05-31 21:08:16.181692 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.06s
2025-05-31 21:08:16.181703 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.02s
2025-05-31 21:08:16.181719 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 0.95s
2025-05-31 21:08:16.181730 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.16s
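The dashboard bootstrap play above maps almost one-to-one onto plain ceph CLI calls: toggle the mgr module, set the mgr/dashboard/* options, and create the admin account from a password file. A hedged sketch of that sequence (flag spellings can vary between Ceph releases, and the password path stands in for the play's temporary file):

    # Illustrative CLI sequence behind the dashboard bootstrap play.
    import subprocess

    def ceph(*args):
        subprocess.run(["ceph", *args], check=True)

    ceph("mgr", "module", "disable", "dashboard")
    ceph("config", "set", "mgr", "mgr/dashboard/ssl", "false")
    ceph("config", "set", "mgr", "mgr/dashboard/server_port", "7000")
    ceph("config", "set", "mgr", "mgr/dashboard/server_addr", "0.0.0.0")
    ceph("config", "set", "mgr", "mgr/dashboard/standby_behaviour", "error")
    ceph("config", "set", "mgr", "mgr/dashboard/standby_error_status_code", "404")
    ceph("mgr", "module", "enable", "dashboard")
    # The admin password is read from the temporary file written above.
    ceph("dashboard", "ac-user-create", "admin", "-i", "/tmp/ceph_dashboard_password", "administrator")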
2025-05-31 21:08:16.181740 | orchestrator | 2025-05-31 21:08:16 | INFO  | Wait 1 second(s) until the next check
2025-05-31 21:08:19.215020 | orchestrator | 2025-05-31 21:08:19 | INFO  | Task fdb100a2-c263-4307-b911-c35c8d55a0f1 is in state STARTED
2025-05-31 21:08:19.215249 | orchestrator | 2025-05-31 21:08:19 | INFO  | Task fd270b86-5dd6-42f0-806a-8c7856d3ebeb is in state STARTED
2025-05-31 21:08:19.218920 | orchestrator | 2025-05-31 21:08:19 | INFO  | Task 8b1035ba-f6d8-4145-a2c3-c61acd0089bf is in state STARTED
2025-05-31 21:08:19.218969 | orchestrator | 2025-05-31 21:08:19 | INFO  | Task 838d4119-69f0-47d6-82bf-363acc7214f6 is in state STARTED
2025-05-31 21:08:19.218982 | orchestrator | 2025-05-31 21:08:19 | INFO  | Wait 1 second(s) until the next check
2025-05-31 21:09:26.197305 | orchestrator | 2025-05-31 21:09:26 | INFO  | Task fdb100a2-c263-4307-b911-c35c8d55a0f1 is in state STARTED
2025-05-31 21:09:26.201454 | orchestrator | 2025-05-31 21:09:26 | INFO  | Task fd270b86-5dd6-42f0-806a-8c7856d3ebeb is in state STARTED
2025-05-31 21:09:26.203550 | orchestrator | 2025-05-31 21:09:26 | INFO  | Task 8b1035ba-f6d8-4145-a2c3-c61acd0089bf is in state STARTED
2025-05-31 21:09:26.205342 | orchestrator | 2025-05-31 21:09:26 | INFO  | Task 838d4119-69f0-47d6-82bf-363acc7214f6 is in state STARTED
2025-05-31 21:09:26.205376 | orchestrator | 2025-05-31 21:09:26 | INFO  | Wait 1
second(s) until the next check 2025-05-31 21:09:29.240176 | orchestrator | 2025-05-31 21:09:29 | INFO  | Task fdb100a2-c263-4307-b911-c35c8d55a0f1 is in state STARTED 2025-05-31 21:09:29.240328 | orchestrator | 2025-05-31 21:09:29 | INFO  | Task fd270b86-5dd6-42f0-806a-8c7856d3ebeb is in state STARTED 2025-05-31 21:09:29.243004 | orchestrator | 2025-05-31 21:09:29 | INFO  | Task 8b1035ba-f6d8-4145-a2c3-c61acd0089bf is in state STARTED 2025-05-31 21:09:29.243917 | orchestrator | 2025-05-31 21:09:29 | INFO  | Task 838d4119-69f0-47d6-82bf-363acc7214f6 is in state STARTED 2025-05-31 21:09:29.244024 | orchestrator | 2025-05-31 21:09:29 | INFO  | Wait 1 second(s) until the next check 2025-05-31 21:09:32.288032 | orchestrator | 2025-05-31 21:09:32 | INFO  | Task fdb100a2-c263-4307-b911-c35c8d55a0f1 is in state STARTED 2025-05-31 21:09:32.290647 | orchestrator | 2025-05-31 21:09:32 | INFO  | Task fd270b86-5dd6-42f0-806a-8c7856d3ebeb is in state STARTED 2025-05-31 21:09:32.293011 | orchestrator | 2025-05-31 21:09:32 | INFO  | Task 8b1035ba-f6d8-4145-a2c3-c61acd0089bf is in state STARTED 2025-05-31 21:09:32.294392 | orchestrator | 2025-05-31 21:09:32 | INFO  | Task 838d4119-69f0-47d6-82bf-363acc7214f6 is in state STARTED 2025-05-31 21:09:32.294435 | orchestrator | 2025-05-31 21:09:32 | INFO  | Wait 1 second(s) until the next check 2025-05-31 21:09:35.338718 | orchestrator | 2025-05-31 21:09:35 | INFO  | Task fdb100a2-c263-4307-b911-c35c8d55a0f1 is in state STARTED 2025-05-31 21:09:35.341444 | orchestrator | 2025-05-31 21:09:35 | INFO  | Task fd270b86-5dd6-42f0-806a-8c7856d3ebeb is in state STARTED 2025-05-31 21:09:35.342055 | orchestrator | 2025-05-31 21:09:35 | INFO  | Task 8b1035ba-f6d8-4145-a2c3-c61acd0089bf is in state STARTED 2025-05-31 21:09:35.343141 | orchestrator | 2025-05-31 21:09:35 | INFO  | Task 838d4119-69f0-47d6-82bf-363acc7214f6 is in state STARTED 2025-05-31 21:09:35.343305 | orchestrator | 2025-05-31 21:09:35 | INFO  | Wait 1 second(s) until the next check 2025-05-31 21:09:38.390644 | orchestrator | 2025-05-31 21:09:38 | INFO  | Task fdb100a2-c263-4307-b911-c35c8d55a0f1 is in state STARTED 2025-05-31 21:09:38.390721 | orchestrator | 2025-05-31 21:09:38 | INFO  | Task fd270b86-5dd6-42f0-806a-8c7856d3ebeb is in state STARTED 2025-05-31 21:09:38.390728 | orchestrator | 2025-05-31 21:09:38 | INFO  | Task 8b1035ba-f6d8-4145-a2c3-c61acd0089bf is in state STARTED 2025-05-31 21:09:38.391262 | orchestrator | 2025-05-31 21:09:38 | INFO  | Task 838d4119-69f0-47d6-82bf-363acc7214f6 is in state STARTED 2025-05-31 21:09:38.391274 | orchestrator | 2025-05-31 21:09:38 | INFO  | Wait 1 second(s) until the next check 2025-05-31 21:09:41.431610 | orchestrator | 2025-05-31 21:09:41 | INFO  | Task fdb100a2-c263-4307-b911-c35c8d55a0f1 is in state STARTED 2025-05-31 21:09:41.433513 | orchestrator | 2025-05-31 21:09:41 | INFO  | Task fd270b86-5dd6-42f0-806a-8c7856d3ebeb is in state STARTED 2025-05-31 21:09:41.435465 | orchestrator | 2025-05-31 21:09:41 | INFO  | Task 8b1035ba-f6d8-4145-a2c3-c61acd0089bf is in state STARTED 2025-05-31 21:09:41.437008 | orchestrator | 2025-05-31 21:09:41 | INFO  | Task 838d4119-69f0-47d6-82bf-363acc7214f6 is in state STARTED 2025-05-31 21:09:41.437041 | orchestrator | 2025-05-31 21:09:41 | INFO  | Wait 1 second(s) until the next check 2025-05-31 21:09:44.483294 | orchestrator | 2025-05-31 21:09:44 | INFO  | Task fdb100a2-c263-4307-b911-c35c8d55a0f1 is in state STARTED 2025-05-31 21:09:44.483417 | orchestrator | 2025-05-31 21:09:44 | INFO  | Task 
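Note on the wait loop above: the osism client polls the state of each queued task and sleeps between rounds; the rounds land about 3 seconds apart even though the message announces 1 second, presumably because the status queries themselves take time. A minimal Python sketch of the pattern (illustrative only; get_task_state() is a stub standing in for a Celery AsyncResult lookup, not the actual OSISM code):

    import time

    def get_task_state(task_id):
        # stand-in for celery.result.AsyncResult(task_id).state
        return "SUCCESS"

    def wait_for_tasks(task_ids, interval=1):
        """Poll until every task has left the STARTED state."""
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):   # sorted() copies, so discarding below is safe
                state = get_task_state(task_id)
                print(f"Task {task_id} is in state {state}")
                if state in ("SUCCESS", "FAILURE"):
                    pending.discard(task_id)
            if pending:
                print(f"Wait {interval} second(s) until the next check")
                time.sleep(interval)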
fd270b86-5dd6-42f0-806a-8c7856d3ebeb is in state STARTED 2025-05-31 21:09:44.483972 | orchestrator | 2025-05-31 21:09:44 | INFO  | Task 8b1035ba-f6d8-4145-a2c3-c61acd0089bf is in state STARTED 2025-05-31 21:09:44.485833 | orchestrator | 2025-05-31 21:09:44 | INFO  | Task 838d4119-69f0-47d6-82bf-363acc7214f6 is in state STARTED 2025-05-31 21:09:44.485962 | orchestrator | 2025-05-31 21:09:44 | INFO  | Wait 1 second(s) until the next check 2025-05-31 21:09:47.551974 | orchestrator | 2025-05-31 21:09:47.552151 | orchestrator | 2025-05-31 21:09:47 | INFO  | Task fdb100a2-c263-4307-b911-c35c8d55a0f1 is in state SUCCESS 2025-05-31 21:09:47.553756 | orchestrator | 2025-05-31 21:09:47.553801 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-31 21:09:47.553816 | orchestrator | 2025-05-31 21:09:47.553828 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-31 21:09:47.553885 | orchestrator | Saturday 31 May 2025 21:07:04 +0000 (0:00:00.294) 0:00:00.294 ********** 2025-05-31 21:09:47.553903 | orchestrator | ok: [testbed-node-0] 2025-05-31 21:09:47.553923 | orchestrator | ok: [testbed-node-1] 2025-05-31 21:09:47.553941 | orchestrator | ok: [testbed-node-2] 2025-05-31 21:09:47.553960 | orchestrator | 2025-05-31 21:09:47.553977 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-31 21:09:47.553998 | orchestrator | Saturday 31 May 2025 21:07:04 +0000 (0:00:00.385) 0:00:00.680 ********** 2025-05-31 21:09:47.554084 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2025-05-31 21:09:47.554101 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2025-05-31 21:09:47.554113 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2025-05-31 21:09:47.554124 | orchestrator | 2025-05-31 21:09:47.554136 | orchestrator | PLAY [Apply role glance] ******************************************************* 2025-05-31 21:09:47.554147 | orchestrator | 2025-05-31 21:09:47.554157 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-05-31 21:09:47.554169 | orchestrator | Saturday 31 May 2025 21:07:05 +0000 (0:00:00.514) 0:00:01.194 ********** 2025-05-31 21:09:47.554180 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 21:09:47.554192 | orchestrator | 2025-05-31 21:09:47.554203 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2025-05-31 21:09:47.554214 | orchestrator | Saturday 31 May 2025 21:07:06 +0000 (0:00:00.977) 0:00:02.172 ********** 2025-05-31 21:09:47.554225 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2025-05-31 21:09:47.554235 | orchestrator | 2025-05-31 21:09:47.554246 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2025-05-31 21:09:47.554257 | orchestrator | Saturday 31 May 2025 21:07:10 +0000 (0:00:04.063) 0:00:06.235 ********** 2025-05-31 21:09:47.554268 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2025-05-31 21:09:47.554279 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2025-05-31 21:09:47.554290 | orchestrator | 2025-05-31 21:09:47.554301 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 
2025-05-31 21:09:47.554301 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************
2025-05-31 21:09:47.554313 | orchestrator | Saturday 31 May 2025 21:07:16 +0000 (0:00:05.925) 0:00:12.161 **********
2025-05-31 21:09:47.554324 | orchestrator | changed: [testbed-node-0] => (item=service)
2025-05-31 21:09:47.554336 | orchestrator |
2025-05-31 21:09:47.554349 | orchestrator | TASK [service-ks-register : glance | Creating users] ***************************
2025-05-31 21:09:47.554362 | orchestrator | Saturday 31 May 2025 21:07:19 +0000 (0:00:03.249) 0:00:15.410 **********
2025-05-31 21:09:47.554375 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-05-31 21:09:47.554413 | orchestrator | changed: [testbed-node-0] => (item=glance -> service)
2025-05-31 21:09:47.554426 | orchestrator |
2025-05-31 21:09:47.554440 | orchestrator | TASK [service-ks-register : glance | Creating roles] ***************************
2025-05-31 21:09:47.554453 | orchestrator | Saturday 31 May 2025 21:07:22 +0000 (0:00:03.358) 0:00:18.769 **********
2025-05-31 21:09:47.554466 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-05-31 21:09:47.554478 | orchestrator |
2025-05-31 21:09:47.554491 | orchestrator | TASK [service-ks-register : glance | Granting user roles] **********************
2025-05-31 21:09:47.554503 | orchestrator | Saturday 31 May 2025 21:07:26 +0000 (0:00:03.268) 0:00:22.038 **********
2025-05-31 21:09:47.554516 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin)
2025-05-31 21:09:47.554528 | orchestrator |
2025-05-31 21:09:47.554540 | orchestrator | TASK [glance : Ensuring config directories exist] ******************************
2025-05-31 21:09:47.554553 | orchestrator | Saturday 31 May 2025 21:07:30 +0000 (0:00:05.570) 0:00:26.763 **********
2025-05-31 21:09:47.554601 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-05-31 21:09:47.554621 | orchestrator | changed: [testbed-node-1] => (item=glance-api; same container definition as the testbed-node-2 item above, with 192.168.16.11 in no_proxy and in the healthcheck URL)
2025-05-31 21:09:47.554645 | orchestrator | changed: [testbed-node-0] => (item=glance-api; same container definition, with 192.168.16.10)
2025-05-31 21:09:47.554659 | orchestrator |
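The haproxy block inside the container definition above is what kolla-ansible's loadbalancer templating turns into frontend/backend stanzas; custom_member_list replaces the auto-generated server lines. An illustrative rendering in Python (not the actual kolla template):

    def render_backend(name, members, extras):
        """Build an HAProxy backend section from kolla-style options."""
        lines = [f"backend {name}_back", "    mode http"]
        lines += [f"    {opt}" for opt in extras]    # e.g. 'timeout server 6h'
        lines += [f"    {m}" for m in members if m]  # skip the trailing '' entry
        return "\n".join(lines)

    members = [
        "server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5",
        "server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5",
        "server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5",
        "",
    ]
    print(render_backend("glance_api", members, ["timeout server 6h"]))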
2025-05-31 21:09:47.554672 | orchestrator | TASK [glance : include_tasks] **************************************************
2025-05-31 21:09:47.554684 | orchestrator | Saturday 31 May 2025 21:07:36 +0000 (0:00:00.492) 0:00:32.334 **********
2025-05-31 21:09:47.554695 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-31 21:09:47.554707 | orchestrator |
2025-05-31 21:09:47.554725 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] **************
2025-05-31 21:09:47.554736 | orchestrator | Saturday 31 May 2025 21:07:36 +0000 (0:00:00.492) 0:00:32.827 **********
2025-05-31 21:09:47.554760 | orchestrator | changed: [testbed-node-0]
2025-05-31 21:09:47.554779 | orchestrator | changed: [testbed-node-1]
2025-05-31 21:09:47.554798 | orchestrator | changed: [testbed-node-2]
2025-05-31 21:09:47.554817 | orchestrator |
2025-05-31 21:09:47.554835 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] *********************
2025-05-31 21:09:47.554847 | orchestrator | Saturday 31 May 2025 21:07:40 +0000 (0:00:03.305) 0:00:36.132 **********
2025-05-31 21:09:47.554905 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-05-31 21:09:47.554918 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-05-31 21:09:47.554929 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-05-31 21:09:47.554939 | orchestrator |
2025-05-31 21:09:47.554950 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] *********************************
2025-05-31 21:09:47.554960 | orchestrator | Saturday 31 May 2025 21:07:41 +0000 (0:00:01.308) 0:00:37.441 **********
2025-05-31 21:09:47.554971 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-05-31 21:09:47.554982 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-05-31 21:09:47.555001 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True})
2025-05-31 21:09:47.555011 | orchestrator |
2025-05-31 21:09:47.555022 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] *****
2025-05-31 21:09:47.555033 | orchestrator | Saturday 31 May 2025 21:07:42 +0000 (0:00:00.959) 0:00:38.401 **********
2025-05-31 21:09:47.555044 | orchestrator | ok: [testbed-node-0]
2025-05-31 21:09:47.555054 | orchestrator | ok: [testbed-node-1]
2025-05-31 21:09:47.555065 | orchestrator | ok: [testbed-node-2]
2025-05-31 21:09:47.555075 | orchestrator |
2025-05-31 21:09:47.555091 | orchestrator | TASK [glance : Check if policies shall be overwritten] *************************
2025-05-31 21:09:47.555108 | orchestrator | Saturday 31 May 2025 21:07:43 +0000 (0:00:00.639) 0:00:39.040 **********
2025-05-31 21:09:47.555127 | orchestrator | skipping: [testbed-node-0]
2025-05-31 21:09:47.555146 | orchestrator |
2025-05-31 21:09:47.555165 | orchestrator | TASK [glance : Set glance policy file] *****************************************
2025-05-31 21:09:47.555184 | orchestrator | Saturday 31 May 2025 21:07:43 +0000 (0:00:00.106) 0:00:39.147 **********
2025-05-31 21:09:47.555202 | orchestrator | skipping: [testbed-node-0]
2025-05-31 21:09:47.555221 | orchestrator | skipping: [testbed-node-1]
2025-05-31 21:09:47.555239 | orchestrator | skipping: [testbed-node-2]
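The two ceph copy tasks place a cluster config and a keyring into a ceph subdirectory of the glance-api config directory, which the volumes list above bind-mounts into the container. A rough Python equivalent, assuming a kolla-style layout (the exact paths and the key material here are placeholders, not taken from this deployment):

    from pathlib import Path

    ceph_dir = Path("/etc/kolla/glance-api/ceph")   # assumed target directory
    ceph_dir.mkdir(parents=True, exist_ok=True)     # "Ensuring ... ceph config subdir exists"
    (ceph_dir / "ceph.conf").write_text(
        "[global]\nfsid = <cluster-fsid>\nmon_host = <mon-addresses>\n"
    )
    (ceph_dir / "ceph.client.glance.keyring").write_text(
        "[client.glance]\n    key = <base64-key>\n"
    )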
2025-05-31 21:09:47.555252 | orchestrator |
2025-05-31 21:09:47.555263 | orchestrator | TASK [glance : include_tasks] **************************************************
2025-05-31 21:09:47.555274 | orchestrator | Saturday 31 May 2025 21:07:43 +0000 (0:00:00.242) 0:00:39.389 **********
2025-05-31 21:09:47.555285 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-31 21:09:47.555296 | orchestrator |
2025-05-31 21:09:47.555307 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] *********
2025-05-31 21:09:47.555326 | orchestrator | Saturday 31 May 2025 21:07:43 +0000 (0:00:00.462) 0:00:39.852 **********
2025-05-31 21:09:47.555357 | orchestrator | changed: [testbed-node-0] => (item=glance-api; full container definition as printed under "Ensuring config directories exist", with 192.168.16.10)
2025-05-31 21:09:47.555387 | orchestrator | changed: [testbed-node-1] => (item=glance-api; same definition, with 192.168.16.11)
2025-05-31 21:09:47.555422 | orchestrator | changed: [testbed-node-2] => (item=glance-api; same definition, with 192.168.16.12)
2025-05-31 21:09:47.555444 | orchestrator |
2025-05-31 21:09:47.555463 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] ***
2025-05-31 21:09:47.555482 | orchestrator | Saturday 31 May 2025 21:07:48 +0000 (0:00:04.681) 0:00:44.534 **********
2025-05-31 21:09:47.555523 | orchestrator | skipping: [testbed-node-2] => (item=glance-api; same definition)
2025-05-31 21:09:47.555548 | orchestrator | skipping: [testbed-node-2]
2025-05-31 21:09:47.555560 | orchestrator | skipping: [testbed-node-0] => (item=glance-api; same definition)
2025-05-31 21:09:47.555572 | orchestrator | skipping: [testbed-node-0]
2025-05-31 21:09:47.555597 | orchestrator | skipping: [testbed-node-1] => (item=glance-api; same definition)
2025-05-31 21:09:47.555616 | orchestrator | skipping: [testbed-node-1]
2025-05-31 21:09:47.555627 | orchestrator |
2025-05-31 21:09:47.555638 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ******
2025-05-31 21:09:47.555648 | orchestrator | Saturday 31 May 2025 21:07:51 +0000 (0:00:02.661) 0:00:47.195 **********
2025-05-31 21:09:47.555660 | orchestrator | skipping: [testbed-node-0] => (item=glance-api; same definition)
2025-05-31 21:09:47.555671 | orchestrator | skipping: [testbed-node-0]
2025-05-31 21:09:47.555702 | orchestrator | skipping: [testbed-node-1] => (item=glance-api; same definition)
2025-05-31 21:09:47.555734 | orchestrator | skipping: [testbed-node-1]
2025-05-31 21:09:47.555755 | orchestrator | skipping: [testbed-node-2] => (item=glance-api; same definition)
2025-05-31 21:09:47.555775 | orchestrator | skipping: [testbed-node-2]
2025-05-31 21:09:47.555795 | orchestrator |
2025-05-31 21:09:47.555813 | orchestrator | TASK [glance : Creating TLS backend PEM File] **********************************
2025-05-31 21:09:47.555831 | orchestrator | Saturday 31 May 2025 21:07:54 +0000 (0:00:03.249) 0:00:50.445 **********
2025-05-31 21:09:47.555850 | orchestrator | skipping: [testbed-node-1]
2025-05-31 21:09:47.555897 | orchestrator | skipping: [testbed-node-0]
2025-05-31 21:09:47.555916 | orchestrator | skipping: [testbed-node-2]
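The TLS certificate, key and PEM tasks are skipped on all three nodes, which indicates backend TLS is not enabled in this deployment. When the PEM task does run, it essentially concatenates the service certificate and private key into one bundle; a minimal sketch (the paths are hypothetical kolla-style locations):

    from pathlib import Path

    def build_pem(cert_path, key_path, out_path):
        """Concatenate certificate and private key into a single PEM bundle."""
        pem = Path(cert_path).read_text() + Path(key_path).read_text()
        Path(out_path).write_text(pem)

    # hypothetical paths, for illustration only
    build_pem("/etc/kolla/certificates/glance-cert.pem",
              "/etc/kolla/certificates/glance-key.pem",
              "/etc/kolla/glance-api/glance-api.pem")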
2025-05-31 21:09:47.555933 | orchestrator |
2025-05-31 21:09:47.555948 | orchestrator | TASK [glance : Copying over config.json files for services] ********************
2025-05-31 21:09:47.555962 | orchestrator | Saturday 31 May 2025 21:07:57 +0000 (0:00:02.955) 0:00:53.400 **********
2025-05-31 21:09:47.555993 | orchestrator | changed: [testbed-node-1] => (item=glance-api; full container definition as above, with 192.168.16.11)
2025-05-31 21:09:47.556041 | orchestrator | changed: [testbed-node-0] => (item=glance-api; same definition, with 192.168.16.10)
2025-05-31 21:09:47.556056 | orchestrator | changed: [testbed-node-2] => (item=glance-api; same definition, with 192.168.16.12)
2025-05-31 21:09:47.556068 | orchestrator |
2025-05-31 21:09:47.556138 | orchestrator | TASK [glance : Copying over glance-api.conf] ***********************************
2025-05-31 21:09:47.556152 | orchestrator | Saturday 31 May 2025 21:08:00 +0000 (0:00:03.269) 0:00:56.670 **********
2025-05-31 21:09:47.556163 | orchestrator | changed: [testbed-node-0]
2025-05-31 21:09:47.556174 | orchestrator | changed: [testbed-node-2]
2025-05-31 21:09:47.556184 | orchestrator | changed: [testbed-node-1]
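glance-api.conf is rendered from the kolla templates plus the OSISM overlays; with the external-Ceph setup above, it ends up pointing the default store at RBD. A trimmed, illustrative sketch of that section written with configparser (option values are typical for a multistore RBD setup, not read from the real file):

    import configparser

    cfg = configparser.ConfigParser()
    cfg["DEFAULT"] = {"enabled_backends": "rbd:rbd"}
    cfg["glance_store"] = {"default_backend": "rbd"}
    cfg["rbd"] = {
        "rbd_store_pool": "images",                  # assumption: typical pool name
        "rbd_store_user": "glance",
        "rbd_store_ceph_conf": "/etc/ceph/ceph.conf",
    }
    with open("glance-api.conf.sample", "w") as fh:
        cfg.write(fh)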
2025-05-31 21:09:47.556195 | orchestrator |
2025-05-31 21:09:47.556205 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ******************
2025-05-31 21:09:47.556216 | orchestrator | Saturday 31 May 2025 21:08:07 +0000 (0:00:07.031) 0:01:03.701 **********
2025-05-31 21:09:47.556227 | orchestrator | skipping: [testbed-node-2]
2025-05-31 21:09:47.556237 | orchestrator | skipping: [testbed-node-0]
2025-05-31 21:09:47.556254 | orchestrator | skipping: [testbed-node-1]
2025-05-31 21:09:47.556274 | orchestrator |
2025-05-31 21:09:47.556291 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ******************
2025-05-31 21:09:47.556317 | orchestrator | Saturday 31 May 2025 21:08:12 +0000 (0:00:04.599) 0:01:08.300 **********
2025-05-31 21:09:47.556335 | orchestrator | skipping: [testbed-node-1]
2025-05-31 21:09:47.556393 | orchestrator | skipping: [testbed-node-2]
2025-05-31 21:09:47.556421 | orchestrator | skipping: [testbed-node-0]
2025-05-31 21:09:47.556433 | orchestrator |
2025-05-31 21:09:47.556443 | orchestrator | TASK [glance : Copying over glance-image-import.conf] **************************
2025-05-31 21:09:47.556454 | orchestrator | Saturday 31 May 2025 21:08:15 +0000 (0:00:03.431) 0:01:11.732 **********
2025-05-31 21:09:47.556465 | orchestrator | skipping: [testbed-node-2]
2025-05-31 21:09:47.556475 | orchestrator | skipping: [testbed-node-0]
2025-05-31 21:09:47.556486 | orchestrator | skipping: [testbed-node-1]
2025-05-31 21:09:47.556496 | orchestrator |
2025-05-31 21:09:47.556507 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] *******************
2025-05-31 21:09:47.556518 | orchestrator | Saturday 31 May 2025 21:08:19 +0000 (0:00:04.093) 0:01:15.826 **********
2025-05-31 21:09:47.556529 | orchestrator | skipping: [testbed-node-0]
2025-05-31 21:09:47.556539 | orchestrator | skipping: [testbed-node-1]
2025-05-31 21:09:47.556549 | orchestrator | skipping: [testbed-node-2]
2025-05-31 21:09:47.556560 | orchestrator |
2025-05-31 21:09:47.556570 | orchestrator | TASK [glance : Copying over existing policy file] ******************************
2025-05-31 21:09:47.556581 | orchestrator | Saturday 31 May 2025 21:08:23 +0000 (0:00:03.371) 0:01:19.197 **********
2025-05-31 21:09:47.556592 | orchestrator | skipping: [testbed-node-0]
2025-05-31 21:09:47.556602 | orchestrator | skipping: [testbed-node-1]
2025-05-31 21:09:47.556612 | orchestrator | skipping: [testbed-node-2]
2025-05-31 21:09:47.556623 | orchestrator |
2025-05-31 21:09:47.556633 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] ****************************
2025-05-31 21:09:47.556644 | orchestrator | Saturday 31 May 2025 21:08:23 +0000 (0:00:00.447) 0:01:19.644 **********
2025-05-31 21:09:47.556655 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2025-05-31 21:09:47.556666 | orchestrator | skipping: [testbed-node-0]
2025-05-31 21:09:47.556676 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2025-05-31 21:09:47.556687 | orchestrator | skipping: [testbed-node-1]
2025-05-31 21:09:47.556698 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2025-05-31 21:09:47.556709 | orchestrator | skipping: [testbed-node-2]
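Each container definition seen earlier carries a healthcheck that runs healthcheck_curl against the node's own API address every 30 seconds. The probe boils down to an HTTP GET that must succeed within the timeout; a minimal stand-in obeying the Docker healthcheck exit-code contract (0 healthy, 1 unhealthy):

    import sys
    import urllib.request

    def healthcheck(url, timeout=30):
        """Return 0 if the endpoint answers in time, 1 otherwise."""
        try:
            urllib.request.urlopen(url, timeout=timeout)
        except Exception:
            return 1
        return 0

    if __name__ == "__main__":
        sys.exit(healthcheck("http://192.168.16.10:9292"))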
2025-05-31 21:09:47.556719 | orchestrator |
2025-05-31 21:09:47.556729 | orchestrator | TASK [glance : Check glance containers] ****************************************
2025-05-31 21:09:47.556740 | orchestrator | Saturday 31 May 2025 21:08:27 +0000 (0:00:03.901) 0:01:23.546 **********
2025-05-31 21:09:47.556752 | orchestrator | changed: [testbed-node-0] => (item=glance-api; full container definition as above, with 192.168.16.10)
2025-05-31 21:09:47.556791 | orchestrator | changed: [testbed-node-1] => (item=glance-api; same definition, with 192.168.16.11)
2025-05-31 21:09:47.556805 | orchestrator | changed: [testbed-node-2] => (item=glance-api; same definition, with 192.168.16.12)
2025-05-31 21:09:47.556825 | orchestrator |
2025-05-31 21:09:47.556836 | orchestrator | TASK [glance : include_tasks] **************************************************
2025-05-31 21:09:47.556846 | orchestrator | Saturday 31 May 2025 21:08:33 +0000 (0:00:06.154) 0:01:29.701 **********
2025-05-31 21:09:47.557002 | orchestrator | skipping: [testbed-node-0]
2025-05-31 21:09:47.557045 | orchestrator | skipping: [testbed-node-1]
2025-05-31 21:09:47.557057 | orchestrator | skipping: [testbed-node-2]
2025-05-31 21:09:47.557067 | orchestrator |
2025-05-31 21:09:47.557078 | orchestrator | TASK [glance : Creating Glance database] ***************************************
2025-05-31 21:09:47.557087 | orchestrator | Saturday 31 May 2025 21:08:34 +0000 (0:00:00.531) 0:01:30.232 **********
2025-05-31 21:09:47.557097 | orchestrator | changed: [testbed-node-0]
2025-05-31 21:09:47.557106 | orchestrator |
2025-05-31 21:09:47.557116 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] **********
2025-05-31 21:09:47.557125 | orchestrator | Saturday 31 May 2025 21:08:36 +0000 (0:00:02.098) 0:01:32.331 **********
2025-05-31 21:09:47.557134 | orchestrator | changed: [testbed-node-0]
2025-05-31 21:09:47.557144 | orchestrator |
2025-05-31 21:09:47.557153 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] ****************
2025-05-31 21:09:47.557162 | orchestrator | Saturday 31 May 2025 21:08:38 +0000 (0:00:02.160) 0:01:34.460 **********
2025-05-31 21:09:47.557172 | orchestrator | changed: [testbed-node-0]
2025-05-31 21:09:47.557181 | orchestrator |
2025-05-31 21:09:47.557190 | orchestrator | TASK [glance : Running Glance bootstrap container] *****************************
2025-05-31 21:09:47.557200 | orchestrator | Saturday 31 May 2025 21:08:40 +0000 (0:00:02.160) 0:01:36.621 **********
2025-05-31 21:09:47.557209 | orchestrator | changed: [testbed-node-0]
2025-05-31 21:09:47.557218 | orchestrator |
2025-05-31 21:09:47.557228 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] ***************
2025-05-31 21:09:47.557237 | orchestrator | Saturday 31 May 2025 21:09:09 +0000 (0:00:28.792) 0:02:05.413 **********
2025-05-31 21:09:47.557247 | orchestrator | changed: [testbed-node-0]
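The database steps above reduce to creating the glance schema, granting the service user access, and toggling log_bin_trust_function_creators around the bootstrap container (which runs the glance-manage db sync migrations) so stored functions can be created while binary logging is active. Sketched with PyMySQL, with host and credentials as placeholders:

    import pymysql

    conn = pymysql.connect(host="192.168.16.9", user="root", password="<secret>")
    with conn.cursor() as cur:
        cur.execute("CREATE DATABASE IF NOT EXISTS glance")
        cur.execute("CREATE USER IF NOT EXISTS 'glance'@'%' IDENTIFIED BY '<secret>'")
        cur.execute("GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%'")
        cur.execute("SET GLOBAL log_bin_trust_function_creators = 1")
    # ... the bootstrap container would run the schema migrations here ...
    with conn.cursor() as cur:
        cur.execute("SET GLOBAL log_bin_trust_function_creators = 0")
    conn.close()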
2025-05-31 21:09:47.557257 | orchestrator |
2025-05-31 21:09:47.557278 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-05-31 21:09:47.557288 | orchestrator | Saturday 31 May 2025 21:09:12 +0000 (0:00:03.301) 0:02:08.715 **********
2025-05-31 21:09:47.557298 | orchestrator |
2025-05-31 21:09:47.557314 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-05-31 21:09:47.557324 | orchestrator | Saturday 31 May 2025 21:09:12 +0000 (0:00:00.179) 0:02:08.895 **********
2025-05-31 21:09:47.557334 | orchestrator |
2025-05-31 21:09:47.557343 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-05-31 21:09:47.557352 | orchestrator | Saturday 31 May 2025 21:09:13 +0000 (0:00:00.123) 0:02:09.018 **********
2025-05-31 21:09:47.557362 | orchestrator |
2025-05-31 21:09:47.557371 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************
2025-05-31 21:09:47.557381 | orchestrator | Saturday 31 May 2025 21:09:13 +0000 (0:00:00.091) 0:02:09.110 **********
2025-05-31 21:09:47.557390 | orchestrator | changed: [testbed-node-0]
2025-05-31 21:09:47.557400 | orchestrator | changed: [testbed-node-1]
2025-05-31 21:09:47.557409 | orchestrator | changed: [testbed-node-2]
2025-05-31 21:09:47.557418 | orchestrator |
2025-05-31 21:09:47.557428 | orchestrator | PLAY RECAP *********************************************************************
2025-05-31 21:09:47.557448 | orchestrator | testbed-node-0 : ok=26  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-05-31 21:09:47.557458 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-05-31 21:09:47.557468 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-05-31 21:09:47.557477 | orchestrator |
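The PLAY RECAP block has a stable line format, which makes it convenient to post-process job logs like this one; a small illustrative parser:

    import re

    RECAP = re.compile(r"^(?P<host>\S+)\s*:\s*(?P<stats>(?:\w+=\d+\s*)+)$")

    def parse_recap(line):
        """Turn 'host : ok=26 changed=19 ...' into (host, {stat: int})."""
        m = RECAP.match(line.strip())
        if not m:
            return None
        stats = dict(kv.split("=") for kv in m.group("stats").split())
        return m.group("host"), {k: int(v) for k, v in stats.items()}

    print(parse_recap("testbed-node-0 : ok=26 changed=19 unreachable=0 "
                      "failed=0 skipped=12 rescued=0 ignored=0"))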
2025-05-31 21:09:47.557888 | orchestrator | 2025-05-31 21:09:47 | INFO  | Task fd270b86-5dd6-42f0-806a-8c7856d3ebeb is in state STARTED 2025-05-31 21:09:47.557903 | orchestrator | 2025-05-31 21:09:47 | INFO  | Task 8b1035ba-f6d8-4145-a2c3-c61acd0089bf is in state STARTED 2025-05-31 21:09:47.557912 | orchestrator | 2025-05-31 21:09:47 | INFO  | Task 838d4119-69f0-47d6-82bf-363acc7214f6 is in state STARTED 2025-05-31 21:09:47.557922 | orchestrator | 2025-05-31 21:09:47 | INFO  | Task 73eed275-4f68-4766-824f-c892735403da is in state STARTED 2025-05-31 21:09:47.557931 | orchestrator | 2025-05-31 21:09:47 | INFO  | Wait 1 second(s) until the next check 2025-05-31 21:09:50.606222 | orchestrator | 2025-05-31 21:09:50 | INFO  | Task fd270b86-5dd6-42f0-806a-8c7856d3ebeb is in state STARTED 2025-05-31 21:09:50.611144 | orchestrator | 2025-05-31 21:09:50 | INFO  | Task f2248baf-8912-4c2c-8aa6-44e3e92aedb2 is in state STARTED 2025-05-31 21:09:50.619270 | orchestrator | 2025-05-31 21:09:50 | INFO  | Task 8b1035ba-f6d8-4145-a2c3-c61acd0089bf is in state SUCCESS
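The INFO lines above come from the OSISM orchestrator waiting on several deployment tasks at once: each UUID is polled, anything still STARTED keeps the loop alive, and the loop sleeps one second between rounds until every task reaches a terminal state such as SUCCESS (the names match Celery-style task states). A rough Python rendering of that wait loop; get_task_state is a hypothetical callable standing in for however the manager exposes task state, not a real OSISM API:

    import time

    def wait_for_tasks(task_ids, get_task_state, interval=1):
        """Poll task states until none are left pending."""
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):  # sorted() copies, so discard below is safe
                state = get_task_state(task_id)
                print(f"Task {task_id} is in state {state}")
                if state in ("SUCCESS", "FAILURE"):
                    pending.discard(task_id)
            if pending:
                print(f"Wait {interval} second(s) until the next check")
                time.sleep(interval)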
2025-05-31 21:09:50.623490 | orchestrator | 2025-05-31 21:09:50.623580 | orchestrator | 2025-05-31 21:09:50.623595 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-31 21:09:50.623622 | orchestrator | 2025-05-31 21:09:50.623633 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-31 21:09:50.623659 | orchestrator | Saturday 31 May 2025 21:06:56 +0000 (0:00:00.273) 0:00:00.273 ********** 2025-05-31 21:09:50.623670 | orchestrator | ok: [testbed-manager] 2025-05-31 21:09:50.623680 | orchestrator | ok: [testbed-node-0] 2025-05-31 21:09:50.623690 | orchestrator | ok: [testbed-node-1] 2025-05-31 21:09:50.623699 | orchestrator | ok: [testbed-node-2] 2025-05-31 21:09:50.623709 | orchestrator | ok: [testbed-node-3] 2025-05-31 21:09:50.623718 | orchestrator | ok: [testbed-node-4] 2025-05-31 21:09:50.623728 | orchestrator | ok: [testbed-node-5] 2025-05-31 21:09:50.623737 | orchestrator | 2025-05-31 21:09:50.623747 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-31 21:09:50.623757 | orchestrator | Saturday 31 May 2025 21:06:57 +0000 (0:00:00.988) 0:00:01.262 ********** 2025-05-31 21:09:50.623767 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2025-05-31 21:09:50.623776 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2025-05-31 21:09:50.623786 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2025-05-31 21:09:50.623795 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2025-05-31 21:09:50.623805 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2025-05-31 21:09:50.623814 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2025-05-31 21:09:50.623824 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2025-05-31 21:09:50.623833 | orchestrator | 2025-05-31 21:09:50.623843 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2025-05-31 21:09:50.623852 | orchestrator | 2025-05-31 21:09:50.623896 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-05-31 21:09:50.623907 | orchestrator | Saturday 31 May 2025 21:06:58 +0000 (0:00:00.737) 0:00:01.999 ********** 2025-05-31 21:09:50.623927 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
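Each (item={'key': ..., 'value': {...}}) pair printed by the tasks that follow is a Kolla-Ansible service definition: the container name, the inventory group that should run it, the image, and the bind mounts, which the role later hands to its own container module. The node-exporter entry, for instance, shares the host PID namespace ('pid_mode': 'host') and mounts the whole host filesystem read-only at /host with rslave propagation ('/:/host:ro,rslave'). As an illustration only (Kolla-Ansible uses its kolla_container module rather than the Docker SDK), the same container start expressed with the Docker SDK for Python:

    import docker
    from docker.types import Mount

    client = docker.from_env()
    client.containers.run(
        "registry.osism.tech/kolla/prometheus-node-exporter:2024.2",
        name="prometheus_node_exporter",
        detach=True,
        pid_mode="host",  # mirrors 'pid_mode': 'host' in the service dict
        mounts=[
            # mirrors '/:/host:ro,rslave' from the 'volumes' list
            Mount(target="/host", source="/", type="bind",
                  read_only=True, propagation="rslave"),
        ],
    )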
2025-05-31 21:09:50.623939 | orchestrator | 2025-05-31 21:09:50.623949 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2025-05-31 21:09:50.623959 | orchestrator | Saturday 31 May 2025 21:07:00 +0000 (0:00:01.539) 0:00:03.539 ********** 2025-05-31 21:09:50.623973 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-31 21:09:50.623987 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-31 21:09:50.624020 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-31 21:09:50.624031 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-31 21:09:50.624063 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-31 21:09:50.624077 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 21:09:50.624092 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-31 21:09:50.624104 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 21:09:50.624115 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-31 21:09:50.624127 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31
21:09:50.624147 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-31 21:09:50.624158 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 21:09:50.624184 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-31 21:09:50.624200 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 21:09:50.624212 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-31 21:09:50.624224 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-31 21:09:50.624236 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 21:09:50.624254 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-31 21:09:50.624265 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 21:09:50.624287 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-31 21:09:50.624301 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-31 21:09:50.624313 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-31 21:09:50.624325 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-31 21:09:50.624342 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-31 21:09:50.624368 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-31 21:09:50.624386 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 21:09:50.624403 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-31 21:09:50.624439 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 21:09:50.624456 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 
'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 21:09:50.624474 | orchestrator | 2025-05-31 21:09:50.624490 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-05-31 21:09:50.624508 | orchestrator | Saturday 31 May 2025 21:07:03 +0000 (0:00:03.295) 0:00:06.835 ********** 2025-05-31 21:09:50.624525 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-31 21:09:50.624542 | orchestrator | 2025-05-31 21:09:50.624556 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2025-05-31 21:09:50.624573 | orchestrator | Saturday 31 May 2025 21:07:04 +0000 (0:00:01.493) 0:00:08.328 ********** 2025-05-31 21:09:50.624591 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-31 21:09:50.624620 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-31 21:09:50.624635 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-31 21:09:50.624652 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-31 21:09:50.624684 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-31 21:09:50.624701 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-31 21:09:50.624717 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-31 21:09:50.624733 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-31 21:09:50.624760 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 21:09:50.624778 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-31 21:09:50.624796 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 21:09:50.624814 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 21:09:50.624840 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-31 21:09:50.624929 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-31 21:09:50.624952 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-31 21:09:50.624969 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-31 21:09:50.625006 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 21:09:50.625024 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 21:09:50.625042 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 21:09:50.625068 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 21:09:50.625092 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-31 21:09:50.625109 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-31 21:09:50.625125 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-31 21:09:50.625150 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-31 21:09:50.625168 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-31 21:09:50.625185 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-31 21:09:50.625202 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 21:09:50.626408 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 21:09:50.626447 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 21:09:50.626458 | orchestrator | 2025-05-31 21:09:50.626468 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2025-05-31 21:09:50.626484 | orchestrator | Saturday 31 May 2025 21:07:11 +0000 (0:00:06.655) 0:00:14.983 ********** 2025-05-31 21:09:50.626518 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-31 21:09:50.626536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 21:09:50.626554 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 21:09:50.626573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-31 21:09:50.626591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 21:09:50.626618 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-31 21:09:50.626636 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-31 21:09:50.626646 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-31 21:09:50.626664 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-31 21:09:50.626676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-31 21:09:50.626736 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 21:09:50.626748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 21:09:50.626764 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 21:09:50.626779 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-31 21:09:50.626796 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 21:09:50.626806 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:09:50.626839 | orchestrator | skipping: [testbed-manager] 2025-05-31 21:09:50.626849 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:09:50.626883 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-31 21:09:50.626894 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-31 21:09:50.626905 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-31 21:09:50.626914 | orchestrator | skipping: [testbed-node-3] 2025-05-31 21:09:50.626926 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-31 21:09:50.626943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 21:09:50.626975 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 21:09:50.627005 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-31 21:09:50.627024 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 21:09:50.627043 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:09:50.627055 | orchestrator | skipping: [testbed-node-4] => 
(item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-31 21:09:50.627067 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-31 21:09:50.627079 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-31 21:09:50.627166 | orchestrator | skipping: [testbed-node-4] 2025-05-31 21:09:50.627178 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-31 21:09:50.627194 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-31 21:09:50.627238 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-31 21:09:50.627255 | orchestrator | skipping: [testbed-node-5] 2025-05-31 21:09:50.627271 | orchestrator | 2025-05-31 21:09:50.627286 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS 
key] *** 2025-05-31 21:09:50.627301 | orchestrator | Saturday 31 May 2025 21:07:13 +0000 (0:00:01.681) 0:00:16.664 ********** 2025-05-31 21:09:50.627318 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-31 21:09:50.627334 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-31 21:09:50.627351 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-31 21:09:50.627368 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-31 21:09:50.627388 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 21:09:50.627419 | orchestrator | skipping: [testbed-manager] 2025-05-31 21:09:50.627447 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-31 21:09:50.627459 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 21:09:50.627469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 21:09:50.627485 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-31 21:09:50.627502 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 21:09:50.627518 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-31 21:09:50.627535 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 
'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 21:09:50.627570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 21:09:50.627603 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-31 21:09:50.627621 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 21:09:50.627637 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:09:50.627653 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:09:50.627669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-31 21:09:50.627686 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 21:09:50.627704 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 21:09:50.627721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-31 21:09:50.627737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-31 21:09:50.627766 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:09:50.627818 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-31 21:09:50.627847 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-31 21:09:50.627892 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-31 21:09:50.627910 | orchestrator | skipping: [testbed-node-3] 2025-05-31 21:09:50.627929 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-31 21:09:50.627945 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-31 21:09:50.627961 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-31 21:09:50.627978 | orchestrator | skipping: [testbed-node-4] 2025-05-31 21:09:50.627995 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-31 21:09:50.628026 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-31 21:09:50.628493 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-31 21:09:50.628526 | orchestrator | skipping: [testbed-node-5] 2025-05-31 21:09:50.628542 | orchestrator | 2025-05-31 21:09:50.628558 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2025-05-31 21:09:50.628575 | orchestrator | Saturday 31 May 2025 21:07:15 +0000 (0:00:01.855) 0:00:18.520 ********** 2025-05-31 21:09:50.628591 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': 
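
[Annotation] Every item of the two service-cert-copy tasks above is skipped on every host, which is consistent with backend TLS not being enabled for this testbed run: kolla-ansible only copies per-service certificates and keys when the corresponding TLS switches are set. A minimal sketch of the relevant switches, with illustrative values that are NOT taken from this run:

    # etc/kolla/globals.yml -- hypothetical excerpt, shown only to explain the skips
    kolla_enable_tls_internal: "yes"      # TLS on the internal VIP
    kolla_enable_tls_backend: "yes"       # TLS between HAProxy and service backends
    kolla_copy_ca_into_containers: "yes"  # make the CA bundle available inside containers
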
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-31 21:09:50.628609 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-31 21:09:50.628628 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-31 21:09:50.628645 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-31 21:09:50.628675 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-31 21:09:50.628686 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-31 21:09:50.628722 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-31 21:09:50.628739 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 21:09:50.628750 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-31 21:09:50.628760 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 21:09:50.628770 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-31 21:09:50.628781 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-31 21:09:50.628801 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 21:09:50.628812 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-31 21:09:50.628832 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-31 21:09:50.628843 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 21:09:50.628873 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 21:09:50.628883 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-31 21:09:50.628893 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 21:09:50.628911 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 
'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-31 21:09:50.628923 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-31 21:09:50.628938 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-31 21:09:50.628953 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-31 21:09:50.628964 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-31 21:09:50.628974 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-31 21:09:50.628984 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 21:09:50.629000 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 21:09:50.629010 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 21:09:50.629020 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 21:09:50.629030 | orchestrator | 2025-05-31 21:09:50.629040 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2025-05-31 21:09:50.629050 | orchestrator | Saturday 31 May 2025 21:07:20 +0000 (0:00:05.540) 0:00:24.060 ********** 2025-05-31 21:09:50.629060 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-31 21:09:50.629072 | orchestrator | 2025-05-31 21:09:50.629084 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2025-05-31 21:09:50.629099 | orchestrator | Saturday 31 May 2025 21:07:21 +0000 (0:00:00.874) 0:00:24.935 ********** 2025-05-31 21:09:50.629117 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098051, 'dev': 104, 'nlink': 1, 'atime': 1748694465.0, 'mtime': 1748694465.0, 'ctime': 1748719757.991082, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
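
[Annotation] The "Copying over config.json files" task templates one config.json per enabled service into /etc/kolla/<service>/ on each host; the bind mount '/etc/kolla/<service>/:/var/lib/kolla/config_files/:ro' that appears in every item above is how the container's kolla entrypoint finds it at startup. The loop items are printed as Python dicts, which is hard to scan; rendered as YAML, the prometheus-node-exporter entry from the log reads (content transcribed from above, only the formatting is new):

    prometheus-node-exporter:
      container_name: prometheus_node_exporter
      group: prometheus-node-exporter
      enabled: true
      image: registry.osism.tech/kolla/prometheus-node-exporter:2024.2
      pid_mode: host
      volumes:
        - /etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro
        - /etc/localtime:/etc/localtime:ro
        - /etc/timezone:/etc/timezone:ro
        - kolla_logs:/var/log/kolla/
        - /:/host:ro,rslave
      dimensions: {}
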
'isgid': False})  2025-05-31 21:09:50.629130 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098051, 'dev': 104, 'nlink': 1, 'atime': 1748694465.0, 'mtime': 1748694465.0, 'ctime': 1748719757.991082, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-31 21:09:50.629142 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098051, 'dev': 104, 'nlink': 1, 'atime': 1748694465.0, 'mtime': 1748694465.0, 'ctime': 1748719757.991082, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-31 21:09:50.629160 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1098038, 'dev': 104, 'nlink': 1, 'atime': 1748694465.0, 'mtime': 1748694465.0, 'ctime': 1748719757.9870818, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-31 21:09:50.629172 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098051, 'dev': 104, 'nlink': 1, 'atime': 1748694465.0, 'mtime': 1748694465.0, 'ctime': 1748719757.991082, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-31 21:09:50.629184 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098051, 'dev': 104, 'nlink': 1, 'atime': 1748694465.0, 'mtime': 1748694465.0, 'ctime': 1748719757.991082, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-31 21:09:50.629200 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098051, 'dev': 104, 'nlink': 1, 'atime': 1748694465.0, 'mtime': 1748694465.0, 'ctime': 1748719757.991082, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-31 21:09:50.629220 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1098038, 'dev': 104, 'nlink': 1, 'atime': 1748694465.0, 'mtime': 1748694465.0, 'ctime': 1748719757.9870818, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-31 21:09:50.629232 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1098018, 'dev': 104, 'nlink': 1, 'atime': 1748694465.0, 'mtime': 1748694465.0, 'ctime': 1748719757.9830818, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-31 21:09:50.629243 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1098051, 'dev': 104, 'nlink': 1, 'atime': 1748694465.0, 'mtime': 1748694465.0, 'ctime': 1748719757.991082, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-31 21:09:50.629261 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1098038, 'dev': 104, 'nlink': 1, 'atime': 1748694465.0, 'mtime': 1748694465.0, 'ctime': 1748719757.9870818, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-31 21:09:50.629273 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1098038, 'dev': 104, 'nlink': 1, 'atime': 1748694465.0, 'mtime': 1748694465.0, 'ctime': 1748719757.9870818, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-31 21:09:50.629284 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1098018, 'dev': 104, 
'nlink': 1, 'atime': 1748694465.0, 'mtime': 1748694465.0, 'ctime': 1748719757.9830818, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-31 21:09:50.629301 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1098038, 'dev': 104, 'nlink': 1, 'atime': 1748694465.0, 'mtime': 1748694465.0, 'ctime': 1748719757.9870818, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-31 21:09:50.629318 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1098038, 'dev': 104, 'nlink': 1, 'atime': 1748694465.0, 'mtime': 1748694465.0, 'ctime': 1748719757.9870818, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-31 21:09:50.629330 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1098038, 'dev': 104, 'nlink': 1, 'atime': 1748694465.0, 'mtime': 1748694465.0, 'ctime': 1748719757.9870818, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-31 21:09:50.629341 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1098021, 'dev': 104, 'nlink': 1, 'atime': 1748694465.0, 'mtime': 1748694465.0, 'ctime': 1748719757.9830818, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-31 21:09:50.629358 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1098018, 'dev': 104, 'nlink': 1, 'atime': 1748694465.0, 'mtime': 1748694465.0, 'ctime': 1748719757.9830818, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-31 21:09:50.629369 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1098021, 'dev': 104, 'nlink': 1, 'atime': 1748694465.0, 'mtime': 1748694465.0, 'ctime': 1748719757.9830818, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-31 21:09:50.629381 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1098018, 'dev': 104, 'nlink': 1, 'atime': 1748694465.0, 'mtime': 1748694465.0, 'ctime': 1748719757.9830818, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-31 21:09:50.629393 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1098018, 'dev': 104, 'nlink': 1, 'atime': 1748694465.0, 'mtime': 1748694465.0, 'ctime': 1748719757.9830818, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-31 21:09:50.629421 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1098018, 'dev': 104, 'nlink': 1, 'atime': 1748694465.0, 'mtime': 1748694465.0, 'ctime': 1748719757.9830818, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-31 21:09:50.629438 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1098035, 'dev': 104, 'nlink': 1, 'atime': 1748694465.0, 'mtime': 1748694465.0, 'ctime': 1748719757.9860818, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-31 21:09:50.629463 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1098018, 'dev': 104, 'nlink': 1, 'atime': 1748694465.0, 'mtime': 1748694465.0, 'ctime': 1748719757.9830818, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-31 21:09:50.629478 | orchestrator | skipping: 
[testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1098021, 'dev': 104, 'nlink': 1, 'atime': 1748694465.0, 'mtime': 1748694465.0, 'ctime': 1748719757.9830818, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-31 21:09:50.629493 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1098035, 'dev': 104, 'nlink': 1, 'atime': 1748694465.0, 'mtime': 1748694465.0, 'ctime': 1748719757.9860818, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-31 21:09:50.629507 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1098021, 'dev': 104, 'nlink': 1, 'atime': 1748694465.0, 'mtime': 1748694465.0, 'ctime': 1748719757.9830818, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-31 21:09:50.629652 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1098021, 'dev': 104, 'nlink': 1, 'atime': 1748694465.0, 'mtime': 1748694465.0, 'ctime': 1748719757.9830818, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-31 21:09:50.629680 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1098021, 'dev': 104, 'nlink': 1, 'atime': 1748694465.0, 'mtime': 1748694465.0, 'ctime': 1748719757.9830818, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-31 21:09:50.629691 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1098035, 'dev': 104, 'nlink': 1, 'atime': 1748694465.0, 'mtime': 1748694465.0, 'ctime': 1748719757.9860818, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-31 21:09:50.629707 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1098026, 'dev': 104, 'nlink': 1, 'atime': 1748694465.0, 'mtime': 1748694465.0, 'ctime': 1748719757.9850817, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-31 21:09:50.629717 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1098026, 'dev': 104, 'nlink': 1, 'atime': 1748694465.0, 'mtime': 1748694465.0, 'ctime': 1748719757.9850817, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-31 21:09:50.629728 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1098021, 'dev': 104, 'nlink': 1, 'atime': 1748694465.0, 'mtime': 1748694465.0, 'ctime': 1748719757.9830818, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-31 21:09:50.629737 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1098035, 'dev': 104, 'nlink': 1, 'atime': 1748694465.0, 'mtime': 1748694465.0, 'ctime': 1748719757.9860818, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-31 21:09:50.629748 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1098035, 'dev': 104, 'nlink': 1, 'atime': 1748694465.0, 'mtime': 1748694465.0, 'ctime': 1748719757.9860818, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-31 21:09:50.629768 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1098035, 'dev': 104, 'nlink': 1, 'atime': 1748694465.0, 'mtime': 1748694465.0, 'ctime': 1748719757.9860818, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-31 21:09:50.629778 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/ceph.rules)
2025-05-31 21:09:50.629794 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/haproxy.rules)
2025-05-31 21:09:50.629804 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/ceph.rules)
2025-05-31 21:09:50.629814 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/haproxy.rules)
2025-05-31 21:09:50.629824 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/node.rules)
2025-05-31 21:09:50.629834 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/ceph.rules)
2025-05-31 21:09:50.629893 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/ceph.rules)
2025-05-31 21:09:50.629912 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/node.rules)
2025-05-31 21:09:50.629922 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/hardware.rules)
2025-05-31 21:09:50.629932 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/haproxy.rules)
2025-05-31 21:09:50.629942 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/prometheus-extra.rules)
2025-05-31 21:09:50.629952 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/haproxy.rules)
2025-05-31 21:09:50.629962 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/haproxy.rules)
2025-05-31 21:09:50.629996 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/haproxy.rules)
2025-05-31 21:09:50.630047 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/node.rules)
2025-05-31 21:09:50.630060 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/prometheus-extra.rules)
2025-05-31 21:09:50.630070 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/node.rules)
2025-05-31 21:09:50.630080 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/redfish.rules)
2025-05-31 21:09:50.630090 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/node.rules)
2025-05-31 21:09:50.630100 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/prometheus-extra.rules)
2025-05-31 21:09:50.630124 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/node.rules)
2025-05-31 21:09:50.630145 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/prometheus-extra.rules)
2025-05-31 21:09:50.630156 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/redfish.rules)
2025-05-31 21:09:50.630166 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/prometheus-extra.rules)
2025-05-31 21:09:50.630176 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/redfish.rules)
2025-05-31 21:09:50.630186 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/openstack.rules)
2025-05-31 21:09:50.630196 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/ceph.rules)
2025-05-31 21:09:50.630216 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/prometheus-extra.rules)
2025-05-31 21:09:50.630233 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/redfish.rules)
2025-05-31 21:09:50.630243 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/openstack.rules)
2025-05-31 21:09:50.630253 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/openstack.rules)
2025-05-31 21:09:50.630263 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/redfish.rules)
2025-05-31 21:09:50.630273 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/ceph.rec.rules)
2025-05-31 21:09:50.630283 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/redfish.rules)
2025-05-31 21:09:50.630307 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/ceph.rec.rules)
2025-05-31 21:09:50.630318 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/openstack.rules)
2025-05-31 21:09:50.630328 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/ceph.rec.rules)
2025-05-31 21:09:50.630375 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/fluentd-aggregator.rules)
2025-05-31 21:09:50.630386 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/openstack.rules)
2025-05-31 21:09:50.630397 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/openstack.rules)
2025-05-31 21:09:50.630407 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/haproxy.rules)
2025-05-31 21:09:50.630434 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/fluentd-aggregator.rules)
2025-05-31 21:09:50.630445 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/ceph.rec.rules)
2025-05-31 21:09:50.630456 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/alertmanager.rec.rules)
2025-05-31 21:09:50.630466 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/fluentd-aggregator.rules)
2025-05-31 21:09:50.630476 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/ceph.rec.rules)
2025-05-31 21:09:50.630486 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/fluentd-aggregator.rules)
2025-05-31 21:09:50.630496 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/ceph.rec.rules)
2025-05-31 21:09:50.630982 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/alertmanager.rec.rules)
2025-05-31 21:09:50.631002 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/alertmanager.rec.rules)
2025-05-31 21:09:50.631013 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/mysql.rules)
2025-05-31 21:09:50.631024 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/node.rules)
2025-05-31 21:09:50.631034 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/fluentd-aggregator.rules)
2025-05-31 21:09:50.631044 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/alertmanager.rec.rules)
2025-05-31 21:09:50.631062 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/mysql.rules)
2025-05-31 21:09:50.631086 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/rabbitmq.rules)
2025-05-31 21:09:50.631097 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/fluentd-aggregator.rules)
2025-05-31 21:09:50.631107 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/mysql.rules)
2025-05-31 21:09:50.631117 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/rabbitmq.rules)
2025-05-31 21:09:50.631127 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/elasticsearch.rules)
2025-05-31 21:09:50.631137 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/alertmanager.rec.rules)
2025-05-31 21:09:50.631153 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/mysql.rules)
2025-05-31 21:09:50.631173 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/prometheus.rules)
2025-05-31 21:09:50.631183 | orchestrator | skipping: [testbed-node-4]
2025-05-31 21:09:50.631194 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/alertmanager.rec.rules)
2025-05-31 21:09:50.631204 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/rabbitmq.rules)
2025-05-31 21:09:50.631214 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/mysql.rules)
2025-05-31 21:09:50.631224 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/prometheus-extra.rules)
2025-05-31 21:09:50.631234 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/elasticsearch.rules)
2025-05-31 21:09:50.631249 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/rabbitmq.rules)
2025-05-31 21:09:50.631268 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/rabbitmq.rules)
2025-05-31 21:09:50.631279 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/elasticsearch.rules)
2025-05-31 21:09:50.631289 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/mysql.rules)
2025-05-31 21:09:50.631299 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/prometheus.rules)
2025-05-31 21:09:50.631309 | orchestrator | skipping: [testbed-node-0]
2025-05-31 21:09:50.631319 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/prometheus.rules)
2025-05-31 21:09:50.631340 | orchestrator | skipping: [testbed-node-2]
2025-05-31 21:09:50.631350 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/elasticsearch.rules)
2025-05-31 21:09:50.631360 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/elasticsearch.rules)
2025-05-31 21:09:50.631380 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/rabbitmq.rules)
2025-05-31 21:09:50.631390 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/prometheus.rules)
2025-05-31 21:09:50.631400 | orchestrator | skipping: [testbed-node-5]
2025-05-31 21:09:50.631410 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/redfish.rules)
2025-05-31 21:09:50.631421 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/prometheus.rules)
2025-05-31 21:09:50.631430 | orchestrator | skipping: [testbed-node-1]
2025-05-31 21:09:50.631440 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/elasticsearch.rules)
2025-05-31 21:09:50.631457 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/prometheus.rules)
2025-05-31 21:09:50.631467 | orchestrator | skipping: [testbed-node-3]
2025-05-31 21:09:50.631477 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/openstack.rules)
2025-05-31 21:09:50.631497 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/ceph.rec.rules)
2025-05-31 21:09:50.631508 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/fluentd-aggregator.rules)
2025-05-31 21:09:50.631518 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/alertmanager.rec.rules)
2025-05-31 21:09:50.631528 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/mysql.rules)
2025-05-31 21:09:50.631543 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/rabbitmq.rules)
2025-05-31 21:09:50.631553 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/elasticsearch.rules)
2025-05-31 21:09:50.631563 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/prometheus.rules)
2025-05-31 21:09:50.631573 | orchestrator |
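The loop output above is the tail of a single task that distributes the alerting rule files shipped under /opt/configuration: only testbed-manager, which runs the Prometheus server container in this testbed, reports "changed", while all six nodes skip every item. A minimal sketch of that pattern (paths, destination, and group condition are assumptions, not the literal kolla-ansible task):

- name: Find Prometheus alerting rule files
  ansible.builtin.find:
    paths: /opt/configuration/environments/kolla/files/overlays/prometheus
    patterns: "*.rules"
  delegate_to: localhost
  register: prometheus_rule_files

- name: Copy alerting rule files to the Prometheus server host
  ansible.builtin.copy:
    src: "{{ item.path }}"
    dest: "/etc/kolla/prometheus-server/{{ item.path | basename }}"  # assumed destination
    mode: "0644"
  loop: "{{ prometheus_rule_files.files }}"
  loop_control:
    label: "{{ item.path }}"  # without a label, Ansible echoes the full stat dict per item
  when: inventory_hostname in groups['prometheus']  # assumed condition; the six nodes skip

Setting loop_control.label is what keeps such logs readable; the verbose per-item dictionaries condensed above are the default rendering of find results.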
2025-05-31 21:09:50.631583 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ********************
2025-05-31 21:09:50.631596 | orchestrator | Saturday 31 May 2025 21:07:43 +0000 (0:00:21.840) 0:00:46.775 **********
2025-05-31 21:09:50.631607 | orchestrator | ok: [testbed-manager -> localhost]
2025-05-31 21:09:50.631619 | orchestrator |
2025-05-31 21:09:50.631634 | orchestrator | TASK [prometheus : Find prometheus host config overrides] **********************
2025-05-31 21:09:50.631645 | orchestrator | Saturday 31 May 2025 21:07:43 +0000 (0:00:00.617) 0:00:47.393 **********
2025-05-31 21:09:50.631656 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-2/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-2/prometheus.yml.d' is not a directory
2025-05-31 21:09:50.631718 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-manager/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-manager/prometheus.yml.d' is not a directory
2025-05-31 21:09:50.631775 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-0/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-0/prometheus.yml.d' is not a directory
2025-05-31 21:09:50.631825 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-1/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-1/prometheus.yml.d' is not a directory
2025-05-31 21:09:50.631902 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-4/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-4/prometheus.yml.d' is not a directory
2025-05-31 21:09:50.632029 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-3/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-3/prometheus.yml.d' is not a directory
2025-05-31 21:09:50.632077 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-5/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-5/prometheus.yml.d' is not a directory
2025-05-31 21:09:50.632125 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-05-31 21:09:50.632134 | orchestrator | ok: [testbed-manager -> localhost]
2025-05-31 21:09:50.632144 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-05-31 21:09:50.632154 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-05-31 21:09:50.632163 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-05-31 21:09:50.632173 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-05-31 21:09:50.632182 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-05-31 21:09:50.632191 | orchestrator |
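The [WARNING] lines are emitted by the local find task probing an optional per-host override directory that does not exist in this testbed; the task still ends "ok" for every host with an empty result. Roughly, under the same assumptions as the sketch above:

- name: Find prometheus host config overrides
  ansible.builtin.find:
    paths: "/opt/configuration/environments/kolla/files/overlays/prometheus/{{ inventory_hostname }}/prometheus.yml.d"
    patterns: "*.yml"
  delegate_to: localhost
  register: prometheus_host_overrides
  # A missing prometheus.yml.d is harmless: find warns "Skipped ... path due to
  # this access issue", returns an empty file list, and the task reports "ok".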
2025-05-31 21:09:50.632201 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************
2025-05-31 21:09:50.632210 | orchestrator | Saturday 31 May 2025 21:07:46 +0000 (0:00:02.139) 0:00:49.532 **********
2025-05-31 21:09:50.632220 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-05-31 21:09:50.632230 | orchestrator | skipping: [testbed-node-2]
2025-05-31 21:09:50.632240 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-05-31 21:09:50.632249 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-05-31 21:09:50.632259 | orchestrator | skipping: [testbed-node-0]
2025-05-31 21:09:50.632268 | orchestrator | skipping: [testbed-node-1]
2025-05-31 21:09:50.632277 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-05-31 21:09:50.632287 | orchestrator | skipping: [testbed-node-4]
2025-05-31 21:09:50.632296 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-05-31 21:09:50.632306 | orchestrator | skipping: [testbed-node-3]
2025-05-31 21:09:50.632315 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-05-31 21:09:50.632325 | orchestrator | skipping: [testbed-node-5]
2025-05-31 21:09:50.632334 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-05-31 21:09:50.632343 | orchestrator |
2025-05-31 21:09:50.632353 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ********************
2025-05-31 21:09:50.632362 | orchestrator | Saturday 31 May 2025 21:08:00 +0000 (0:00:14.893) 0:01:04.425 **********
2025-05-31 21:09:50.632378 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-05-31 21:09:50.632396 | orchestrator | skipping: [testbed-node-0]
2025-05-31 21:09:50.632406 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-05-31 21:09:50.632421 | orchestrator | skipping: [testbed-node-1]
2025-05-31 21:09:50.632431 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-05-31 21:09:50.632440 | orchestrator | skipping: [testbed-node-2]
2025-05-31 21:09:50.632450 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-05-31 21:09:50.632460 | orchestrator | skipping: [testbed-node-3]
2025-05-31 21:09:50.632469 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-05-31 21:09:50.632479 | orchestrator | skipping: [testbed-node-4]
2025-05-31 21:09:50.632488 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-05-31 21:09:50.632498 | orchestrator | skipping: [testbed-node-5]
2025-05-31 21:09:50.632507 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-05-31 21:09:50.632517 | orchestrator |
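Both "Copying over ..." tasks print the role's bundled template path as the loop item, which is the signature of a first-found source list: a site-specific file from the configuration repository would win if it existed, otherwise the default Jinja2 template is rendered. A sketch of that shape (the override path, destination, and condition are assumptions):

- name: Copying over prometheus config file
  ansible.builtin.template:
    src: "{{ item }}"
    dest: /etc/kolla/prometheus-server/prometheus.yml  # assumed destination
    mode: "0660"
  with_first_found:
    - "/opt/configuration/environments/kolla/files/overlays/prometheus/{{ inventory_hostname }}/prometheus.yml"
    - /ansible/roles/prometheus/templates/prometheus.yml.j2
  when: inventory_hostname in groups['prometheus']  # assumed; only testbed-manager changes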
2025-05-31 21:09:50.632548 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] ***********
2025-05-31 21:09:50.632558 | orchestrator | Saturday 31 May 2025 21:08:04 +0000 (0:00:03.539) 0:01:07.964 **********
2025-05-31 21:09:50.632567 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-05-31 21:09:50.632577 | orchestrator | skipping: [testbed-node-0]
2025-05-31 21:09:50.632587 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-05-31 21:09:50.632597 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-05-31 21:09:50.632607 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-05-31 21:09:50.632616 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-05-31 21:09:50.632628 | orchestrator | skipping: [testbed-node-1]
2025-05-31 21:09:50.632640 | orchestrator | skipping: [testbed-node-3]
2025-05-31 21:09:50.632650 | orchestrator | skipping: [testbed-node-2]
2025-05-31 21:09:50.632661 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-05-31 21:09:50.632672 | orchestrator | skipping: [testbed-node-4]
2025-05-31 21:09:50.632683 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-05-31 21:09:50.632694 | orchestrator | skipping: [testbed-node-5]
2025-05-31 21:09:50.632705 | orchestrator |
2025-05-31 21:09:50.632716 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ******
2025-05-31 21:09:50.632727 | orchestrator | Saturday 31 May 2025 21:08:06 +0000 (0:00:02.041) 0:01:10.006 **********
2025-05-31 21:09:50.632739 | orchestrator | ok: [testbed-manager -> localhost]
2025-05-31 21:09:50.632750 | orchestrator |
2025-05-31 21:09:50.632761 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] ***
2025-05-31 21:09:50.632772 | orchestrator | Saturday 31 May 2025 21:08:07 +0000 (0:00:00.673) 0:01:10.679 **********
2025-05-31 21:09:50.632783 | orchestrator | skipping: [testbed-manager]
2025-05-31 21:09:50.632793 | orchestrator | skipping: [testbed-node-0]
2025-05-31 21:09:50.632804 | orchestrator | skipping: [testbed-node-1]
2025-05-31 21:09:50.632815 | orchestrator | skipping: [testbed-node-2]
2025-05-31 21:09:50.632825 | orchestrator | skipping: [testbed-node-3]
2025-05-31 21:09:50.632836 | orchestrator | skipping: [testbed-node-4]
2025-05-31 21:09:50.632846 | orchestrator | skipping: [testbed-node-5]
2025-05-31 21:09:50.632920 | orchestrator |
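The next task writes the my.cnf consumed by mysqld_exporter and changes only on testbed-node-0 through testbed-node-2, which suggests those are the hosts running MariaDB in this testbed. The shape of such a task, with assumed group name, template name, and destination:

- name: Copying over my.cnf for mysqld_exporter
  ansible.builtin.template:
    src: mysqld-exporter-my.cnf.j2  # hypothetical template name
    dest: /etc/kolla/prometheus-mysqld-exporter/my.cnf  # assumed destination
    mode: "0660"
  when: inventory_hostname in groups['mariadb']  # assumed; the exporter runs alongside the database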
21:09:50.632986 | orchestrator | skipping: [testbed-node-5] 2025-05-31 21:09:50.632998 | orchestrator | changed: [testbed-node-0] 2025-05-31 21:09:50.633007 | orchestrator | changed: [testbed-node-1] 2025-05-31 21:09:50.633016 | orchestrator | changed: [testbed-node-2] 2025-05-31 21:09:50.633026 | orchestrator | 2025-05-31 21:09:50.633035 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2025-05-31 21:09:50.633045 | orchestrator | Saturday 31 May 2025 21:08:10 +0000 (0:00:02.604) 0:01:13.834 ********** 2025-05-31 21:09:50.633054 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-05-31 21:09:50.633064 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-05-31 21:09:50.633074 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:09:50.633083 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-05-31 21:09:50.633093 | orchestrator | skipping: [testbed-manager] 2025-05-31 21:09:50.633102 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:09:50.633111 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-05-31 21:09:50.633121 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:09:50.633136 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-05-31 21:09:50.633147 | orchestrator | skipping: [testbed-node-3] 2025-05-31 21:09:50.633156 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-05-31 21:09:50.633170 | orchestrator | skipping: [testbed-node-4] 2025-05-31 21:09:50.633180 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-05-31 21:09:50.633189 | orchestrator | skipping: [testbed-node-5] 2025-05-31 21:09:50.633199 | orchestrator | 2025-05-31 21:09:50.633208 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2025-05-31 21:09:50.633218 | orchestrator | Saturday 31 May 2025 21:08:12 +0000 (0:00:01.703) 0:01:15.538 ********** 2025-05-31 21:09:50.633227 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-05-31 21:09:50.633237 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2025-05-31 21:09:50.633247 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:09:50.633256 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-05-31 21:09:50.633266 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:09:50.633276 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-05-31 21:09:50.633285 | orchestrator | skipping: [testbed-node-3] 2025-05-31 21:09:50.633295 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-05-31 21:09:50.633304 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:09:50.633314 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-05-31 21:09:50.633323 | orchestrator | skipping: [testbed-node-4] 2025-05-31 
21:09:50.633333 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-05-31 21:09:50.633342 | orchestrator | skipping: [testbed-node-5] 2025-05-31 21:09:50.633358 | orchestrator | 2025-05-31 21:09:50.633367 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2025-05-31 21:09:50.633377 | orchestrator | Saturday 31 May 2025 21:08:13 +0000 (0:00:01.835) 0:01:17.374 ********** 2025-05-31 21:09:50.633387 | orchestrator | [WARNING]: Skipped 2025-05-31 21:09:50.633397 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2025-05-31 21:09:50.633406 | orchestrator | due to this access issue: 2025-05-31 21:09:50.633416 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2025-05-31 21:09:50.633425 | orchestrator | not a directory 2025-05-31 21:09:50.633435 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-31 21:09:50.633444 | orchestrator | 2025-05-31 21:09:50.633453 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2025-05-31 21:09:50.633463 | orchestrator | Saturday 31 May 2025 21:08:14 +0000 (0:00:00.904) 0:01:18.278 ********** 2025-05-31 21:09:50.633472 | orchestrator | skipping: [testbed-manager] 2025-05-31 21:09:50.633482 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:09:50.633491 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:09:50.633501 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:09:50.633511 | orchestrator | skipping: [testbed-node-3] 2025-05-31 21:09:50.633520 | orchestrator | skipping: [testbed-node-4] 2025-05-31 21:09:50.633530 | orchestrator | skipping: [testbed-node-5] 2025-05-31 21:09:50.633539 | orchestrator | 2025-05-31 21:09:50.633548 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2025-05-31 21:09:50.633558 | orchestrator | Saturday 31 May 2025 21:08:15 +0000 (0:00:00.655) 0:01:18.934 ********** 2025-05-31 21:09:50.633567 | orchestrator | skipping: [testbed-manager] 2025-05-31 21:09:50.633577 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:09:50.633586 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:09:50.633596 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:09:50.633605 | orchestrator | skipping: [testbed-node-3] 2025-05-31 21:09:50.633614 | orchestrator | skipping: [testbed-node-4] 2025-05-31 21:09:50.633624 | orchestrator | skipping: [testbed-node-5] 2025-05-31 21:09:50.633633 | orchestrator | 2025-05-31 21:09:50.633643 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2025-05-31 21:09:50.633652 | orchestrator | Saturday 31 May 2025 21:08:16 +0000 (0:00:00.621) 0:01:19.556 ********** 2025-05-31 21:09:50.633663 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-31 21:09:50.633684 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-31 21:09:50.633696 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-31 21:09:50.633712 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-31 21:09:50.633722 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-31 21:09:50.633733 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-31 21:09:50.633743 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-31 21:09:50.633753 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 
'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-31 21:09:50.633763 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-31 21:09:50.633783 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 21:09:50.633794 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 21:09:50.633814 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 21:09:50.633825 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-31 21:09:50.633835 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-31 21:09:50.633845 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-31 21:09:50.633877 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-31 21:09:50.633899 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-31 21:09:50.633918 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 21:09:50.633929 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 21:09:50.633939 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-31 21:09:50.633949 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 21:09:50.633959 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-31 21:09:50.633969 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 21:09:50.633979 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-31 21:09:50.633999 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-31 21:09:50.634067 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', 
'/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-31 21:09:50.634081 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 21:09:50.634091 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 21:09:50.634101 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-31 21:09:50.634111 | orchestrator | 2025-05-31 21:09:50.634121 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2025-05-31 21:09:50.634131 | orchestrator | Saturday 31 May 2025 21:08:20 +0000 (0:00:04.492) 0:01:24.049 ********** 2025-05-31 21:09:50.634141 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-05-31 21:09:50.634151 | orchestrator | skipping: [testbed-manager] 2025-05-31 21:09:50.634160 | orchestrator | 2025-05-31 21:09:50.634170 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-05-31 21:09:50.634180 | orchestrator | Saturday 31 May 2025 21:08:21 +0000 (0:00:01.391) 0:01:25.440 ********** 2025-05-31 21:09:50.634189 | orchestrator | 2025-05-31 21:09:50.634199 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-05-31 21:09:50.634208 | orchestrator | Saturday 31 May 2025 21:08:22 +0000 (0:00:00.134) 0:01:25.575 ********** 2025-05-31 21:09:50.634218 | orchestrator | 2025-05-31 21:09:50.634227 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-05-31 21:09:50.634237 | orchestrator | Saturday 31 May 2025 21:08:22 +0000 (0:00:00.097) 0:01:25.672 ********** 2025-05-31 21:09:50.634246 | orchestrator | 2025-05-31 21:09:50.634255 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-05-31 21:09:50.634265 | orchestrator | Saturday 31 May 2025 21:08:22 +0000 (0:00:00.107) 0:01:25.779 ********** 2025-05-31 21:09:50.634274 | orchestrator | 2025-05-31 21:09:50.634284 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-05-31 21:09:50.634300 | orchestrator | 
Saturday 31 May 2025 21:08:22 +0000 (0:00:00.156) 0:01:25.936 ********** 2025-05-31 21:09:50.634310 | orchestrator | 2025-05-31 21:09:50.634319 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-05-31 21:09:50.634329 | orchestrator | Saturday 31 May 2025 21:08:22 +0000 (0:00:00.308) 0:01:26.245 ********** 2025-05-31 21:09:50.634338 | orchestrator | 2025-05-31 21:09:50.634348 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-05-31 21:09:50.634358 | orchestrator | Saturday 31 May 2025 21:08:22 +0000 (0:00:00.048) 0:01:26.293 ********** 2025-05-31 21:09:50.634367 | orchestrator | 2025-05-31 21:09:50.634377 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2025-05-31 21:09:50.634386 | orchestrator | Saturday 31 May 2025 21:08:22 +0000 (0:00:00.063) 0:01:26.356 ********** 2025-05-31 21:09:50.634396 | orchestrator | changed: [testbed-manager] 2025-05-31 21:09:50.634406 | orchestrator | 2025-05-31 21:09:50.634415 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2025-05-31 21:09:50.634430 | orchestrator | Saturday 31 May 2025 21:08:36 +0000 (0:00:13.480) 0:01:39.837 ********** 2025-05-31 21:09:50.634440 | orchestrator | changed: [testbed-manager] 2025-05-31 21:09:50.634449 | orchestrator | changed: [testbed-node-2] 2025-05-31 21:09:50.634459 | orchestrator | changed: [testbed-node-0] 2025-05-31 21:09:50.634473 | orchestrator | changed: [testbed-node-4] 2025-05-31 21:09:50.634483 | orchestrator | changed: [testbed-node-1] 2025-05-31 21:09:50.634493 | orchestrator | changed: [testbed-node-3] 2025-05-31 21:09:50.634502 | orchestrator | changed: [testbed-node-5] 2025-05-31 21:09:50.634511 | orchestrator | 2025-05-31 21:09:50.634521 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2025-05-31 21:09:50.634531 | orchestrator | Saturday 31 May 2025 21:08:46 +0000 (0:00:09.829) 0:01:49.667 ********** 2025-05-31 21:09:50.634540 | orchestrator | changed: [testbed-node-2] 2025-05-31 21:09:50.634550 | orchestrator | changed: [testbed-node-0] 2025-05-31 21:09:50.634559 | orchestrator | changed: [testbed-node-1] 2025-05-31 21:09:50.634569 | orchestrator | 2025-05-31 21:09:50.634578 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2025-05-31 21:09:50.634588 | orchestrator | Saturday 31 May 2025 21:08:56 +0000 (0:00:10.349) 0:02:00.016 ********** 2025-05-31 21:09:50.634597 | orchestrator | changed: [testbed-node-1] 2025-05-31 21:09:50.634607 | orchestrator | changed: [testbed-node-2] 2025-05-31 21:09:50.634616 | orchestrator | changed: [testbed-node-0] 2025-05-31 21:09:50.634626 | orchestrator | 2025-05-31 21:09:50.634635 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2025-05-31 21:09:50.634644 | orchestrator | Saturday 31 May 2025 21:09:06 +0000 (0:00:10.203) 0:02:10.219 ********** 2025-05-31 21:09:50.634654 | orchestrator | changed: [testbed-node-1] 2025-05-31 21:09:50.634664 | orchestrator | changed: [testbed-node-2] 2025-05-31 21:09:50.634673 | orchestrator | changed: [testbed-node-3] 2025-05-31 21:09:50.634682 | orchestrator | changed: [testbed-node-5] 2025-05-31 21:09:50.634692 | orchestrator | changed: [testbed-node-4] 2025-05-31 21:09:50.634701 | orchestrator | changed: [testbed-manager] 2025-05-31 21:09:50.634710 | orchestrator | changed: [testbed-node-0] 
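The RUNNING HANDLER entries above fire only on hosts where the corresponding container configuration changed, and each handler restarts one named container. kolla-ansible does this through its own container module; purely as an illustration, the effect is roughly what this Docker SDK snippet does (container names taken from the container_name fields logged above):

import docker

# Restart the prometheus containers that exist on this host, skipping
# any that are not deployed here (e.g. prometheus_server runs only on
# testbed-manager in this deployment).
client = docker.from_env()
for name in ("prometheus_server", "prometheus_node_exporter", "prometheus_cadvisor"):
    try:
        client.containers.get(name).restart()
        print(f"restarted {name}")
    except docker.errors.NotFound:
        print(f"{name} not present on this host")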
2025-05-31 21:09:50.634720 | orchestrator | 2025-05-31 21:09:50.634730 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2025-05-31 21:09:50.634739 | orchestrator | Saturday 31 May 2025 21:09:20 +0000 (0:00:13.531) 0:02:23.751 ********** 2025-05-31 21:09:50.634749 | orchestrator | changed: [testbed-manager] 2025-05-31 21:09:50.634758 | orchestrator | 2025-05-31 21:09:50.634768 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2025-05-31 21:09:50.634777 | orchestrator | Saturday 31 May 2025 21:09:26 +0000 (0:00:06.405) 0:02:30.157 ********** 2025-05-31 21:09:50.634787 | orchestrator | changed: [testbed-node-1] 2025-05-31 21:09:50.634796 | orchestrator | changed: [testbed-node-2] 2025-05-31 21:09:50.634806 | orchestrator | changed: [testbed-node-0] 2025-05-31 21:09:50.634815 | orchestrator | 2025-05-31 21:09:50.634825 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2025-05-31 21:09:50.634842 | orchestrator | Saturday 31 May 2025 21:09:38 +0000 (0:00:11.352) 0:02:41.509 ********** 2025-05-31 21:09:50.634852 | orchestrator | changed: [testbed-manager] 2025-05-31 21:09:50.634878 | orchestrator | 2025-05-31 21:09:50.634888 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2025-05-31 21:09:50.634898 | orchestrator | Saturday 31 May 2025 21:09:42 +0000 (0:00:04.809) 0:02:46.318 ********** 2025-05-31 21:09:50.634907 | orchestrator | changed: [testbed-node-3] 2025-05-31 21:09:50.634917 | orchestrator | changed: [testbed-node-4] 2025-05-31 21:09:50.634926 | orchestrator | changed: [testbed-node-5] 2025-05-31 21:09:50.634936 | orchestrator | 2025-05-31 21:09:50.634945 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-31 21:09:50.634955 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-05-31 21:09:50.634966 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-05-31 21:09:50.634977 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-05-31 21:09:50.634987 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-05-31 21:09:50.634996 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-05-31 21:09:50.635006 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-05-31 21:09:50.635015 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-05-31 21:09:50.635025 | orchestrator | 2025-05-31 21:09:50.635034 | orchestrator | 2025-05-31 21:09:50.635044 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-31 21:09:50.635054 | orchestrator | Saturday 31 May 2025 21:09:48 +0000 (0:00:05.427) 0:02:51.746 ********** 2025-05-31 21:09:50.635063 | orchestrator | =============================================================================== 2025-05-31 21:09:50.635073 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 21.84s 2025-05-31 21:09:50.635082 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 14.89s 2025-05-31 21:09:50.635092 | 
orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 13.53s 2025-05-31 21:09:50.635102 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 13.48s 2025-05-31 21:09:50.635111 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 11.35s 2025-05-31 21:09:50.635126 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 10.35s 2025-05-31 21:09:50.635135 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 10.20s 2025-05-31 21:09:50.635150 | orchestrator | prometheus : Restart prometheus-node-exporter container ----------------- 9.83s 2025-05-31 21:09:50.635160 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 6.66s 2025-05-31 21:09:50.635170 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 6.41s 2025-05-31 21:09:50.635179 | orchestrator | prometheus : Copying over config.json files ----------------------------- 5.54s 2025-05-31 21:09:50.635189 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container -------------- 5.43s 2025-05-31 21:09:50.635198 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 4.81s 2025-05-31 21:09:50.635208 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.49s 2025-05-31 21:09:50.635218 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 3.54s 2025-05-31 21:09:50.635234 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.30s 2025-05-31 21:09:50.635244 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.60s 2025-05-31 21:09:50.635253 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 2.14s 2025-05-31 21:09:50.635262 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 2.04s 2025-05-31 21:09:50.635272 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 1.86s 2025-05-31 21:09:50.635281 | orchestrator | 2025-05-31 21:09:50 | INFO  | Task 838d4119-69f0-47d6-82bf-363acc7214f6 is in state STARTED 2025-05-31 21:09:50.635291 | orchestrator | 2025-05-31 21:09:50 | INFO  | Task 73eed275-4f68-4766-824f-c892735403da is in state STARTED 2025-05-31 21:09:50.635301 | orchestrator | 2025-05-31 21:09:50 | INFO  | Wait 1 second(s) until the next check 2025-05-31 21:09:53.679626 | orchestrator | 2025-05-31 21:09:53 | INFO  | Task fd270b86-5dd6-42f0-806a-8c7856d3ebeb is in state STARTED 2025-05-31 21:09:53.679734 | orchestrator | 2025-05-31 21:09:53 | INFO  | Task f2248baf-8912-4c2c-8aa6-44e3e92aedb2 is in state STARTED 2025-05-31 21:09:53.681059 | orchestrator | 2025-05-31 21:09:53 | INFO  | Task 838d4119-69f0-47d6-82bf-363acc7214f6 is in state STARTED 2025-05-31 21:09:53.682306 | orchestrator | 2025-05-31 21:09:53 | INFO  | Task 73eed275-4f68-4766-824f-c892735403da is in state STARTED 2025-05-31 21:09:53.682366 | orchestrator | 2025-05-31 21:09:53 | INFO  | Wait 1 second(s) until the next check 2025-05-31 21:09:56.724013 | orchestrator | 2025-05-31 21:09:56 | INFO  | Task fd270b86-5dd6-42f0-806a-8c7856d3ebeb is in state STARTED 2025-05-31 21:09:56.724942 | orchestrator | 2025-05-31 21:09:56 | INFO  | Task f2248baf-8912-4c2c-8aa6-44e3e92aedb2 is in state STARTED 2025-05-31 21:09:56.728137 | 
orchestrator | 2025-05-31 21:09:56 | INFO  | Task 838d4119-69f0-47d6-82bf-363acc7214f6 is in state STARTED 2025-05-31 21:09:56.730430 | orchestrator | 2025-05-31 21:09:56 | INFO  | Task 73eed275-4f68-4766-824f-c892735403da is in state STARTED 2025-05-31 21:09:56.730518 | orchestrator | 2025-05-31 21:09:56 | INFO  | Wait 1 second(s) until the next check
[identical polling cycles from 21:09:59 through 21:11:12 condensed: tasks fd270b86-5dd6-42f0-806a-8c7856d3ebeb, f2248baf-8912-4c2c-8aa6-44e3e92aedb2, 838d4119-69f0-47d6-82bf-363acc7214f6 and 73eed275-4f68-4766-824f-c892735403da were polled every ~3 seconds and remained in state STARTED; task bffac9c9-278e-47c1-bee7-d4a855a74701 entered the poll set at 21:10:30 and reached state SUCCESS at 21:10:48, after which it was no longer polled]
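The condensed poll output above comes from the osism client waiting for remote task completion. A schematic Python sketch of such a wait loop, under the assumption that get_task_state() stands in for the real Celery task-state lookup (it is hypothetical, not an actual osism API):

import time

def wait_for_tasks(task_ids, get_task_state, interval=1.0):
    # Poll every pending task, print its state, and drop it once it
    # reports SUCCESS; sleep between rounds, as in the log above.
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)  # e.g. "STARTED" or "SUCCESS"
            print(f"Task {task_id} is in state {state}")
            if state == "SUCCESS":
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)

2025-05-31 21:11:15.773612 | orchestrator | 2025-05-31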
21:11:15 | INFO  | Task fd270b86-5dd6-42f0-806a-8c7856d3ebeb is in state SUCCESS 2025-05-31 21:11:15.777366 | orchestrator | 2025-05-31 21:11:15.777439 | orchestrator | None 2025-05-31 21:11:15.777462 | orchestrator | 2025-05-31 21:11:15.777483 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-31 21:11:15.777503 | orchestrator | 2025-05-31 21:11:15.777522 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-31 21:11:15.777539 | orchestrator | Saturday 31 May 2025 21:07:29 +0000 (0:00:00.470) 0:00:00.470 ********** 2025-05-31 21:11:15.777551 | orchestrator | ok: [testbed-node-0] 2025-05-31 21:11:15.777563 | orchestrator | ok: [testbed-node-1] 2025-05-31 21:11:15.777574 | orchestrator | ok: [testbed-node-2] 2025-05-31 21:11:15.777585 | orchestrator | ok: [testbed-node-3] 2025-05-31 21:11:15.777595 | orchestrator | ok: [testbed-node-4] 2025-05-31 21:11:15.777606 | orchestrator | ok: [testbed-node-5] 2025-05-31 21:11:15.777618 | orchestrator | 2025-05-31 21:11:15.777629 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-31 21:11:15.777640 | orchestrator | Saturday 31 May 2025 21:07:30 +0000 (0:00:01.203) 0:00:01.674 ********** 2025-05-31 21:11:15.777651 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2025-05-31 21:11:15.777662 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2025-05-31 21:11:15.777673 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2025-05-31 21:11:15.777684 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True) 2025-05-31 21:11:15.777694 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True) 2025-05-31 21:11:15.777705 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True) 2025-05-31 21:11:15.777716 | orchestrator | 2025-05-31 21:11:15.777726 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2025-05-31 21:11:15.777737 | orchestrator | 2025-05-31 21:11:15.777748 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-05-31 21:11:15.777758 | orchestrator | Saturday 31 May 2025 21:07:31 +0000 (0:00:01.383) 0:00:03.058 ********** 2025-05-31 21:11:15.777770 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-31 21:11:15.777782 | orchestrator | 2025-05-31 21:11:15.777793 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2025-05-31 21:11:15.777810 | orchestrator | Saturday 31 May 2025 21:07:34 +0000 (0:00:02.417) 0:00:05.475 ********** 2025-05-31 21:11:15.777829 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2025-05-31 21:11:15.777847 | orchestrator | 2025-05-31 21:11:15.777956 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2025-05-31 21:11:15.777975 | orchestrator | Saturday 31 May 2025 21:07:37 +0000 (0:00:03.494) 0:00:08.969 ********** 2025-05-31 21:11:15.777996 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2025-05-31 21:11:15.778144 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2025-05-31 21:11:15.778164 | orchestrator | 
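Worth noting how the service-ks-register steps above map to plain OpenStack API calls: a service of type volumev3 named cinderv3 is created once, then one endpoint per interface. A rough openstacksdk sketch of those two tasks, illustrative only, since kolla-ansible drives this through its own Ansible modules and the cloud name "testbed" is an assumption:

import openstack

# Create the cinderv3 service and its internal/public endpoints,
# mirroring the "Creating services" and "Creating endpoints" tasks above.
conn = openstack.connect(cloud="testbed")  # assumed clouds.yaml entry
service = conn.identity.create_service(name="cinderv3", type="volumev3")
for interface, url in (
    ("internal", "https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s"),
    ("public", "https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s"),
):
    conn.identity.create_endpoint(service_id=service.id, interface=interface, url=url)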
2025-05-31 21:11:15.778181 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2025-05-31 21:11:15.778198 | orchestrator | Saturday 31 May 2025 21:07:43 +0000 (0:00:05.489) 0:00:14.459 ********** 2025-05-31 21:11:15.778214 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-31 21:11:15.778232 | orchestrator | 2025-05-31 21:11:15.778243 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2025-05-31 21:11:15.778253 | orchestrator | Saturday 31 May 2025 21:07:45 +0000 (0:00:02.839) 0:00:17.298 ********** 2025-05-31 21:11:15.778262 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-31 21:11:15.778273 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2025-05-31 21:11:15.778289 | orchestrator | 2025-05-31 21:11:15.778306 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2025-05-31 21:11:15.778338 | orchestrator | Saturday 31 May 2025 21:07:49 +0000 (0:00:03.860) 0:00:21.158 ********** 2025-05-31 21:11:15.778356 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-31 21:11:15.778372 | orchestrator | 2025-05-31 21:11:15.778388 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2025-05-31 21:11:15.778399 | orchestrator | Saturday 31 May 2025 21:07:52 +0000 (0:00:03.254) 0:00:24.413 ********** 2025-05-31 21:11:15.778408 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2025-05-31 21:11:15.778418 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2025-05-31 21:11:15.778427 | orchestrator | 2025-05-31 21:11:15.778437 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2025-05-31 21:11:15.778446 | orchestrator | Saturday 31 May 2025 21:07:59 +0000 (0:00:06.916) 0:00:31.329 ********** 2025-05-31 21:11:15.778488 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-31 21:11:15.778501 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-31 21:11:15.778511 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-31 21:11:15.778533 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-31 21:11:15.778550 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-31 21:11:15.778560 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-31 21:11:15.778582 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-31 21:11:15.778593 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-31 21:11:15.778610 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-31 21:11:15.778625 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-31 21:11:15.778636 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-31 21:11:15.778646 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-31 21:11:15.778656 | orchestrator | 2025-05-31 21:11:15.778672 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-05-31 21:11:15.778682 | orchestrator | Saturday 31 May 2025 21:08:02 +0000 (0:00:02.105) 0:00:33.434 ********** 2025-05-31 21:11:15.778692 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:11:15.778703 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:11:15.778719 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:11:15.778735 | orchestrator | skipping: [testbed-node-3] 2025-05-31 21:11:15.778752 | orchestrator | skipping: [testbed-node-4] 2025-05-31 21:11:15.778767 | orchestrator | skipping: [testbed-node-5] 2025-05-31 21:11:15.778781 | orchestrator | 2025-05-31 21:11:15.778815 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-05-31 21:11:15.778833 | orchestrator | Saturday 31 May 2025 21:08:02 +0000 (0:00:00.977) 0:00:34.412 ********** 2025-05-31 21:11:15.778885 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:11:15.778905 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:11:15.778924 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:11:15.778942 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-31 21:11:15.778960 | orchestrator | 2025-05-31 21:11:15.778976 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2025-05-31 21:11:15.778993 | orchestrator | Saturday 31 May 2025 21:08:03 +0000 (0:00:00.912) 0:00:35.325 ********** 2025-05-31 21:11:15.779011 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume) 2025-05-31 21:11:15.779029 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume) 2025-05-31 21:11:15.779047 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume) 2025-05-31 21:11:15.779063 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup) 2025-05-31 21:11:15.779082 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup) 2025-05-31 21:11:15.779099 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup) 2025-05-31 21:11:15.779112 | orchestrator | 2025-05-31 21:11:15.779122 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2025-05-31 21:11:15.779132 | orchestrator | Saturday 31 May 2025 21:08:06 +0000 (0:00:02.327) 0:00:37.653 ********** 2025-05-31 21:11:15.779144 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-31 21:11:15.779162 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-31 21:11:15.779173 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-31 21:11:15.779193 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-31 21:11:15.779212 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-31 21:11:15.779223 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-31 21:11:15.779244 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-31 21:11:15.779261 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-31 21:11:15.779296 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-31 21:11:15.779312 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-31 21:11:15.779328 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-31 21:11:15.779352 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-31 21:11:15.779370 | orchestrator | 2025-05-31 21:11:15.779387 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2025-05-31 21:11:15.779403 | orchestrator | Saturday 31 May 2025 21:08:09 +0000 (0:00:03.132) 0:00:40.786 ********** 2025-05-31 21:11:15.779420 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-05-31 21:11:15.779438 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-05-31 21:11:15.779454 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-05-31 21:11:15.779466 | orchestrator | 2025-05-31 
21:11:15.779476 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] *****************
2025-05-31 21:11:15.779494 | orchestrator | Saturday 31 May 2025 21:08:11 +0000 (0:00:01.868) 0:00:42.654 **********
2025-05-31 21:11:15.779504 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring)
2025-05-31 21:11:15.779513 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring)
2025-05-31 21:11:15.779523 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder.keyring)
2025-05-31 21:11:15.779532 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring)
2025-05-31 21:11:15.779542 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring)
2025-05-31 21:11:15.779558 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring)
2025-05-31 21:11:15.779567 | orchestrator |
2025-05-31 21:11:15.779577 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] *****
2025-05-31 21:11:15.779586 | orchestrator | Saturday 31 May 2025 21:08:14 +0000 (0:00:03.119) 0:00:45.773 **********
2025-05-31 21:11:15.779596 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume)
2025-05-31 21:11:15.779606 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume)
2025-05-31 21:11:15.779616 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume)
2025-05-31 21:11:15.779625 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup)
2025-05-31 21:11:15.779635 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup)
2025-05-31 21:11:15.779644 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup)
2025-05-31 21:11:15.779654 | orchestrator |
2025-05-31 21:11:15.779663 | orchestrator | TASK [cinder : Check if policies shall be overwritten] *************************
2025-05-31 21:11:15.779673 | orchestrator | Saturday 31 May 2025 21:08:15 +0000 (0:00:01.110) 0:00:46.883 **********
2025-05-31 21:11:15.779682 | orchestrator | skipping: [testbed-node-0]
2025-05-31 21:11:15.779692 | orchestrator |
2025-05-31 21:11:15.779701 | orchestrator | TASK [cinder : Set cinder policy file] *****************************************
2025-05-31 21:11:15.779711 | orchestrator | Saturday 31 May 2025 21:08:15 +0000 (0:00:00.148) 0:00:47.031 **********
2025-05-31 21:11:15.779720 | orchestrator | skipping: [testbed-node-0]
2025-05-31 21:11:15.779729 | orchestrator | skipping: [testbed-node-1]
2025-05-31 21:11:15.779739 | orchestrator | skipping: [testbed-node-2]
2025-05-31 21:11:15.779748 | orchestrator | skipping: [testbed-node-3]
2025-05-31 21:11:15.779757 | orchestrator | skipping: [testbed-node-4]
2025-05-31 21:11:15.779767 | orchestrator | skipping: [testbed-node-5]
2025-05-31 21:11:15.779776 | orchestrator |
2025-05-31 21:11:15.779785 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-05-31 21:11:15.779795 | orchestrator | Saturday 31 May 2025 21:08:16 +0000 (0:00:00.597) 0:00:47.628 **********
2025-05-31 21:11:15.779806 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-31 21:11:15.779817 | orchestrator |
2025-05-31 21:11:15.779826 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] *********
2025-05-31 21:11:15.779835 | orchestrator | Saturday 31 May 2025 21:08:18 +0000 (0:00:01.794) 0:00:49.423 **********
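copy-certs.yml pulls in the shared service-cert-copy role, which drops operator-provided extra CA certificates into each service's /etc/kolla/<service>/ config directory so the containers can trust endpoints signed by a private CA; the task whose output follows is the cinder instance of that role. A simplified stand-in for the copy step is sketched below; the source path and the hard-coded service list are illustrative, not the role's real variables:

- name: Distribute extra CA certificates (illustrative sketch)
  hosts: all
  gather_facts: false
  become: true
  vars:
    cinder_services:
      - cinder-api
      - cinder-scheduler
      - cinder-volume
      - cinder-backup
  tasks:
    - name: Copy the CA bundle into each service config directory
      ansible.builtin.copy:
        src: /etc/kolla/certificates/ca/          # illustrative path on the deploy host
        dest: "/etc/kolla/{{ item }}/ca-certificates/"
        mode: "0644"
      loop: "{{ cinder_services }}"

The actual role iterates over the same service dict visible in the log items, which is why in the output that follows each node only handles the services mapped to it (api/scheduler on nodes 0-2, volume/backup on nodes 3-5).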
2025-05-31 21:11:15.779846 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-31 21:11:15.779896 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-31 21:11:15.779915 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-31 21:11:15.779925 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-31 21:11:15.779936 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-31 21:11:15.779954 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-31 21:11:15.779979 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-31 21:11:15.779997 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-31 21:11:15.780023 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-31 21:11:15.780039 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-31 21:11:15.780049 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-31 21:11:15.780059 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-31 21:11:15.780078 | orchestrator | 2025-05-31 21:11:15.780089 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2025-05-31 21:11:15.780111 | orchestrator | Saturday 31 May 2025 21:08:21 +0000 (0:00:03.191) 0:00:52.615 ********** 2025-05-31 21:11:15.780129 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 
'no'}}}})  2025-05-31 21:11:15.780147 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-31 21:11:15.780157 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-31 21:11:15.780167 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-31 21:11:15.780177 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-31 21:11:15.780198 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-31 21:11:15.780209 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:11:15.780225 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:11:15.780242 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:11:15.780259 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-31 21:11:15.780284 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-31 21:11:15.780300 | orchestrator | skipping: [testbed-node-3] 2025-05-31 21:11:15.780319 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-31 21:11:15.780330 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-31 21:11:15.780347 | orchestrator | skipping: [testbed-node-4] 2025-05-31 21:11:15.780362 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-31 21:11:15.780373 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-31 21:11:15.780387 | orchestrator | skipping: [testbed-node-5] 2025-05-31 21:11:15.780404 | orchestrator | 2025-05-31 21:11:15.780419 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2025-05-31 21:11:15.780435 | orchestrator | Saturday 31 May 2025 21:08:22 +0000 (0:00:01.652) 0:00:54.268 ********** 2025-05-31 21:11:15.780459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-31 21:11:15.780475 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-31 21:11:15.780490 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:11:15.780514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-31 21:11:15.780535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-31 21:11:15.780550 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:11:15.780565 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-31 21:11:15.780589 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-31 21:11:15.780603 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:11:15.780620 | orchestrator 
| skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-31 21:11:15.780645 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-31 21:11:15.780661 | orchestrator | skipping: [testbed-node-4] 2025-05-31 21:11:15.780682 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-31 21:11:15.780699 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-31 21:11:15.780714 | orchestrator | skipping: [testbed-node-3] 2025-05-31 21:11:15.780738 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-31 21:11:15.780755 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-31 21:11:15.780773 | orchestrator | skipping: [testbed-node-5] 2025-05-31 21:11:15.780798 | orchestrator | 2025-05-31 21:11:15.780815 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2025-05-31 21:11:15.780830 | orchestrator | Saturday 31 May 2025 21:08:24 +0000 (0:00:01.423) 0:00:55.691 ********** 2025-05-31 21:11:15.780847 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-31 21:11:15.780894 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-31 21:11:15.780913 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-31 21:11:15.780936 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-31 21:11:15.780947 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-31 21:11:15.780968 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-31 21:11:15.780984 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-31 21:11:15.780995 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-31 21:11:15.781011 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-31 21:11:15.781021 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-31 21:11:15.781037 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-31 21:11:15.781047 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 
'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-31 21:11:15.781057 | orchestrator | 2025-05-31 21:11:15.781067 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2025-05-31 21:11:15.781077 | orchestrator | Saturday 31 May 2025 21:08:27 +0000 (0:00:03.193) 0:00:58.885 ********** 2025-05-31 21:11:15.781087 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-05-31 21:11:15.781097 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-05-31 21:11:15.781107 | orchestrator | skipping: [testbed-node-3] 2025-05-31 21:11:15.781116 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-05-31 21:11:15.781126 | orchestrator | skipping: [testbed-node-5] 2025-05-31 21:11:15.781139 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-05-31 21:11:15.781149 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-05-31 21:11:15.781158 | orchestrator | skipping: [testbed-node-4] 2025-05-31 21:11:15.781168 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-05-31 21:11:15.781177 | orchestrator | 2025-05-31 21:11:15.781187 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2025-05-31 21:11:15.781196 | orchestrator | Saturday 31 May 2025 21:08:29 +0000 (0:00:02.446) 0:01:01.331 ********** 2025-05-31 21:11:15.781206 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-31 21:11:15.781222 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-31 21:11:15.781238 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-31 21:11:15.781249 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-31 21:11:15.781263 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-31 21:11:15.781350 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-31 21:11:15.781369 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-31 21:11:15.781379 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-31 21:11:15.781390 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-31 21:11:15.781400 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-31 21:11:15.781414 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-31 21:11:15.781425 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-31 21:11:15.781440 | orchestrator | 2025-05-31 21:11:15.781450 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2025-05-31 21:11:15.781460 | orchestrator | Saturday 31 May 2025 21:08:40 +0000 (0:00:10.288) 0:01:11.619 ********** 2025-05-31 21:11:15.781475 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:11:15.781485 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:11:15.781494 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:11:15.781504 | orchestrator | changed: [testbed-node-3] 2025-05-31 21:11:15.781513 | orchestrator | changed: [testbed-node-4] 2025-05-31 21:11:15.781525 | orchestrator | changed: [testbed-node-5] 2025-05-31 21:11:15.781543 | orchestrator | 2025-05-31 21:11:15.781561 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2025-05-31 21:11:15.781577 | orchestrator | Saturday 31 May 2025 21:08:43 +0000 (0:00:03.205) 0:01:14.825 ********** 2025-05-31 21:11:15.781595 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-31 21:11:15.781622 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-31 
21:11:15.781640 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:11:15.781665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-31 21:11:15.781681 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-31 21:11:15.781700 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:11:15.781718 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-31 21:11:15.781729 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-31 21:11:15.781739 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:11:15.781749 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': 
True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-31 21:11:15.781759 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-31 21:11:15.781769 | orchestrator | skipping: [testbed-node-3] 2025-05-31 21:11:15.781784 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-31 21:11:15.781808 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-31 21:11:15.781818 | orchestrator | skipping: [testbed-node-4] 2025-05-31 21:11:15.781834 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-31 21:11:15.781845 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-31 21:11:15.781892 | orchestrator | skipping: [testbed-node-5] 2025-05-31 21:11:15.781904 | orchestrator | 2025-05-31 21:11:15.781913 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2025-05-31 21:11:15.781923 | orchestrator | Saturday 31 May 2025 21:08:44 +0000 (0:00:01.385) 0:01:16.211 ********** 2025-05-31 21:11:15.781933 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:11:15.781942 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:11:15.781952 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:11:15.781961 | orchestrator | skipping: [testbed-node-3] 2025-05-31 21:11:15.781970 | orchestrator | skipping: [testbed-node-4] 2025-05-31 21:11:15.781980 | orchestrator | skipping: [testbed-node-5] 2025-05-31 21:11:15.781989 | orchestrator | 2025-05-31 21:11:15.781998 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2025-05-31 21:11:15.782007 | orchestrator | Saturday 31 May 2025 21:08:45 +0000 (0:00:00.622) 0:01:16.833 ********** 2025-05-31 21:11:15.782055 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-31 21:11:15.782077 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-31 21:11:15.782094 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-31 21:11:15.782105 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-31 21:11:15.782115 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-31 21:11:15.782133 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 
'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-31 21:11:15.782149 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-31 21:11:15.782166 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-31 21:11:15.782176 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-31 21:11:15.782186 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-31 21:11:15.782196 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-31 21:11:15.782211 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-31 21:11:15.782227 | orchestrator | 2025-05-31 21:11:15.782237 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-05-31 21:11:15.782246 | orchestrator | Saturday 31 May 2025 21:08:47 +0000 (0:00:02.028) 0:01:18.861 ********** 2025-05-31 21:11:15.782256 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:11:15.782266 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:11:15.782275 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:11:15.782284 | orchestrator | skipping: [testbed-node-3] 2025-05-31 21:11:15.782294 | orchestrator | skipping: [testbed-node-4] 2025-05-31 21:11:15.782303 | orchestrator | skipping: [testbed-node-5] 2025-05-31 21:11:15.782313 | orchestrator | 2025-05-31 21:11:15.782396 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2025-05-31 21:11:15.782408 | orchestrator | Saturday 31 May 2025 21:08:48 +0000 (0:00:00.669) 0:01:19.531 ********** 2025-05-31 21:11:15.782417 | orchestrator | changed: [testbed-node-0] 2025-05-31 21:11:15.782427 | orchestrator | 2025-05-31 21:11:15.782436 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2025-05-31 21:11:15.782446 | orchestrator | Saturday 31 May 2025 21:08:50 +0000 (0:00:01.911) 0:01:21.443 ********** 2025-05-31 21:11:15.782456 | orchestrator | changed: [testbed-node-0] 2025-05-31 21:11:15.782465 | orchestrator | 2025-05-31 21:11:15.782474 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2025-05-31 21:11:15.782484 | orchestrator | Saturday 31 May 2025 21:08:52 +0000 (0:00:02.034) 0:01:23.478 ********** 2025-05-31 21:11:15.782493 | orchestrator | changed: [testbed-node-0] 2025-05-31 21:11:15.782503 | orchestrator | 2025-05-31 21:11:15.782512 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-31 21:11:15.782521 | orchestrator | Saturday 31 May 2025 21:09:09 +0000 (0:00:17.163) 0:01:40.641 ********** 2025-05-31 21:11:15.782531 | orchestrator | 2025-05-31 21:11:15.782546 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-31 21:11:15.782556 | orchestrator | Saturday 31 May 2025 21:09:09 +0000 (0:00:00.166) 0:01:40.808 ********** 2025-05-31 21:11:15.782566 | orchestrator | 2025-05-31 21:11:15.782575 | orchestrator | TASK [cinder : 
Flush handlers] ************************************************* 2025-05-31 21:11:15.782585 | orchestrator | Saturday 31 May 2025 21:09:09 +0000 (0:00:00.155) 0:01:40.964 ********** 2025-05-31 21:11:15.782594 | orchestrator | 2025-05-31 21:11:15.782604 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-31 21:11:15.782614 | orchestrator | Saturday 31 May 2025 21:09:09 +0000 (0:00:00.174) 0:01:41.138 ********** 2025-05-31 21:11:15.782623 | orchestrator | 2025-05-31 21:11:15.782632 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-31 21:11:15.782642 | orchestrator | Saturday 31 May 2025 21:09:09 +0000 (0:00:00.216) 0:01:41.355 ********** 2025-05-31 21:11:15.782651 | orchestrator | 2025-05-31 21:11:15.782661 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-31 21:11:15.782670 | orchestrator | Saturday 31 May 2025 21:09:10 +0000 (0:00:00.147) 0:01:41.502 ********** 2025-05-31 21:11:15.782679 | orchestrator | 2025-05-31 21:11:15.782689 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2025-05-31 21:11:15.782706 | orchestrator | Saturday 31 May 2025 21:09:10 +0000 (0:00:00.097) 0:01:41.599 ********** 2025-05-31 21:11:15.782723 | orchestrator | changed: [testbed-node-0] 2025-05-31 21:11:15.782751 | orchestrator | changed: [testbed-node-2] 2025-05-31 21:11:15.782768 | orchestrator | changed: [testbed-node-1] 2025-05-31 21:11:15.782785 | orchestrator | 2025-05-31 21:11:15.782804 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2025-05-31 21:11:15.782824 | orchestrator | Saturday 31 May 2025 21:09:35 +0000 (0:00:25.018) 0:02:06.618 ********** 2025-05-31 21:11:15.782842 | orchestrator | changed: [testbed-node-0] 2025-05-31 21:11:15.782879 | orchestrator | changed: [testbed-node-2] 2025-05-31 21:11:15.782889 | orchestrator | changed: [testbed-node-1] 2025-05-31 21:11:15.782898 | orchestrator | 2025-05-31 21:11:15.782908 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2025-05-31 21:11:15.782917 | orchestrator | Saturday 31 May 2025 21:09:44 +0000 (0:00:09.714) 0:02:16.333 ********** 2025-05-31 21:11:15.782927 | orchestrator | changed: [testbed-node-3] 2025-05-31 21:11:15.782936 | orchestrator | changed: [testbed-node-5] 2025-05-31 21:11:15.782946 | orchestrator | changed: [testbed-node-4] 2025-05-31 21:11:15.782955 | orchestrator | 2025-05-31 21:11:15.782964 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2025-05-31 21:11:15.782974 | orchestrator | Saturday 31 May 2025 21:11:03 +0000 (0:01:18.578) 0:03:34.912 ********** 2025-05-31 21:11:15.782984 | orchestrator | changed: [testbed-node-3] 2025-05-31 21:11:15.782993 | orchestrator | changed: [testbed-node-5] 2025-05-31 21:11:15.783002 | orchestrator | changed: [testbed-node-4] 2025-05-31 21:11:15.783011 | orchestrator | 2025-05-31 21:11:15.783021 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2025-05-31 21:11:15.783031 | orchestrator | Saturday 31 May 2025 21:11:14 +0000 (0:00:10.965) 0:03:45.878 ********** 2025-05-31 21:11:15.783040 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:11:15.783050 | orchestrator | 2025-05-31 21:11:15.783059 | orchestrator | PLAY RECAP ********************************************************************* 
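
Annotation: the recap below gives per-host totals for the cinder play in Ansible's fixed format, `host : ok=N changed=N unreachable=N failed=N skipped=N rescued=N ignored=N`; note that `ok` also counts tasks reported as `changed`. A minimal sketch for pulling these counters out of a captured console line (the regex and function name are illustrative, not part of any OSISM tooling):

    import re

    RECAP_ROW = re.compile(
        r"(?P<host>\S+)\s+:"
        r"\s+ok=(?P<ok>\d+)\s+changed=(?P<changed>\d+)"
        r"\s+unreachable=(?P<unreachable>\d+)\s+failed=(?P<failed>\d+)"
        r"\s+skipped=(?P<skipped>\d+)\s+rescued=(?P<rescued>\d+)"
        r"\s+ignored=(?P<ignored>\d+)"
    )

    def parse_recap_row(line: str) -> dict:
        """Parse one PLAY RECAP row into integer counters."""
        m = RECAP_ROW.search(line)
        if m is None:
            raise ValueError(f"not a recap row: {line!r}")
        d = m.groupdict()
        return {"host": d.pop("host"), **{k: int(v) for k, v in d.items()}}

    # parse_recap_row("testbed-node-0 : ok=21  changed=15  unreachable=0 "
    #                 "failed=0 skipped=11  rescued=0 ignored=0")
    # -> {'host': 'testbed-node-0', 'ok': 21, 'changed': 15, ...}
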
2025-05-31 21:11:15.783069 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-05-31 21:11:15.783080 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-05-31 21:11:15.783096 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-05-31 21:11:15.783106 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-05-31 21:11:15.783116 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-05-31 21:11:15.783125 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-05-31 21:11:15.783135 | orchestrator | 2025-05-31 21:11:15.783144 | orchestrator | 2025-05-31 21:11:15.783154 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-31 21:11:15.783163 | orchestrator | Saturday 31 May 2025 21:11:14 +0000 (0:00:00.526) 0:03:46.404 ********** 2025-05-31 21:11:15.783173 | orchestrator | =============================================================================== 2025-05-31 21:11:15.783182 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 78.58s 2025-05-31 21:11:15.783192 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 25.02s 2025-05-31 21:11:15.783201 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 17.16s 2025-05-31 21:11:15.783210 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 10.97s 2025-05-31 21:11:15.783220 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 10.29s 2025-05-31 21:11:15.783229 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 9.71s 2025-05-31 21:11:15.783246 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 6.92s 2025-05-31 21:11:15.783255 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 5.49s 2025-05-31 21:11:15.783271 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 3.86s 2025-05-31 21:11:15.783281 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.49s 2025-05-31 21:11:15.783291 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.25s 2025-05-31 21:11:15.783300 | orchestrator | cinder : Generating 'hostnqn' file for cinder_volume -------------------- 3.21s 2025-05-31 21:11:15.783310 | orchestrator | cinder : Copying over config.json files for services -------------------- 3.19s 2025-05-31 21:11:15.783319 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 3.19s 2025-05-31 21:11:15.783328 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.13s 2025-05-31 21:11:15.783338 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.12s 2025-05-31 21:11:15.783347 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 2.84s 2025-05-31 21:11:15.783356 | orchestrator | cinder : Copying over cinder-wsgi.conf ---------------------------------- 2.45s 2025-05-31 21:11:15.783366 | orchestrator | cinder : 
include_tasks -------------------------------------------------- 2.42s 2025-05-31 21:11:15.783375 | orchestrator | cinder : Ensuring cinder service ceph config subdirs exists ------------- 2.33s 2025-05-31 21:11:15.783385 | orchestrator | 2025-05-31 21:11:15 | INFO  | Task f2248baf-8912-4c2c-8aa6-44e3e92aedb2 is in state STARTED 2025-05-31 21:11:15.783395 | orchestrator | 2025-05-31 21:11:15 | INFO  | Task 838d4119-69f0-47d6-82bf-363acc7214f6 is in state STARTED 2025-05-31 21:11:15.783404 | orchestrator | 2025-05-31 21:11:15 | INFO  | Task 73eed275-4f68-4766-824f-c892735403da is in state STARTED 2025-05-31 21:11:15.783414 | orchestrator | 2025-05-31 21:11:15 | INFO  | Wait 1 second(s) until the next check 2025-05-31 21:11:18.806304 | orchestrator | 2025-05-31 21:11:18 | INFO  | Task f2248baf-8912-4c2c-8aa6-44e3e92aedb2 is in state STARTED 2025-05-31 21:11:18.807739 | orchestrator | 2025-05-31 21:11:18 | INFO  | Task c55a287e-6394-4218-9805-cf406e738e90 is in state STARTED 2025-05-31 21:11:18.810358 | orchestrator | 2025-05-31 21:11:18 | INFO  | Task 838d4119-69f0-47d6-82bf-363acc7214f6 is in state STARTED 2025-05-31 21:11:18.814790 | orchestrator | 2025-05-31 21:11:18 | INFO  | Task 73eed275-4f68-4766-824f-c892735403da is in state STARTED 2025-05-31 21:11:18.814834 | orchestrator | 2025-05-31 21:11:18 | INFO  | Wait 1 second(s) until the next check 2025-05-31 21:11:21.851555 | orchestrator | 2025-05-31 21:11:21 | INFO  | Task f2248baf-8912-4c2c-8aa6-44e3e92aedb2 is in state STARTED 2025-05-31 21:11:21.851700 | orchestrator | 2025-05-31 21:11:21 | INFO  | Task c55a287e-6394-4218-9805-cf406e738e90 is in state STARTED 2025-05-31 21:11:21.852142 | orchestrator | 2025-05-31 21:11:21 | INFO  | Task 838d4119-69f0-47d6-82bf-363acc7214f6 is in state STARTED 2025-05-31 21:11:21.852669 | orchestrator | 2025-05-31 21:11:21 | INFO  | Task 73eed275-4f68-4766-824f-c892735403da is in state STARTED 2025-05-31 21:11:21.856114 | orchestrator | 2025-05-31 21:11:21 | INFO  | Wait 1 second(s) until the next check 2025-05-31 21:11:24.879495 | orchestrator | 2025-05-31 21:11:24 | INFO  | Task f2248baf-8912-4c2c-8aa6-44e3e92aedb2 is in state STARTED 2025-05-31 21:11:24.879730 | orchestrator | 2025-05-31 21:11:24 | INFO  | Task c55a287e-6394-4218-9805-cf406e738e90 is in state STARTED 2025-05-31 21:11:24.880489 | orchestrator | 2025-05-31 21:11:24 | INFO  | Task 838d4119-69f0-47d6-82bf-363acc7214f6 is in state STARTED 2025-05-31 21:11:24.881303 | orchestrator | 2025-05-31 21:11:24 | INFO  | Task 73eed275-4f68-4766-824f-c892735403da is in state STARTED 2025-05-31 21:11:24.881352 | orchestrator | 2025-05-31 21:11:24 | INFO  | Wait 1 second(s) until the next check 2025-05-31 21:11:27.905365 | orchestrator | 2025-05-31 21:11:27 | INFO  | Task f2248baf-8912-4c2c-8aa6-44e3e92aedb2 is in state STARTED 2025-05-31 21:11:27.906406 | orchestrator | 2025-05-31 21:11:27 | INFO  | Task c55a287e-6394-4218-9805-cf406e738e90 is in state STARTED 2025-05-31 21:11:27.906919 | orchestrator | 2025-05-31 21:11:27 | INFO  | Task 838d4119-69f0-47d6-82bf-363acc7214f6 is in state STARTED 2025-05-31 21:11:27.907926 | orchestrator | 2025-05-31 21:11:27 | INFO  | Task 73eed275-4f68-4766-824f-c892735403da is in state STARTED 2025-05-31 21:11:27.907999 | orchestrator | 2025-05-31 21:11:27 | INFO  | Wait 1 second(s) until the next check 2025-05-31 21:11:30.929812 | orchestrator | 2025-05-31 21:11:30 | INFO  | Task f2248baf-8912-4c2c-8aa6-44e3e92aedb2 is in state STARTED 2025-05-31 21:11:30.930470 | orchestrator | 2025-05-31 21:11:30 | INFO  | Task 
c55a287e-6394-4218-9805-cf406e738e90 is in state STARTED 2025-05-31 21:11:30.932284 | orchestrator | 2025-05-31 21:11:30 | INFO  | Task 838d4119-69f0-47d6-82bf-363acc7214f6 is in state STARTED 2025-05-31 21:11:30.932999 | orchestrator | 2025-05-31 21:11:30 | INFO  | Task 73eed275-4f68-4766-824f-c892735403da is in state STARTED 2025-05-31 21:11:30.933015 | orchestrator | 2025-05-31 21:11:30 | INFO  | Wait 1 second(s) until the next check 2025-05-31 21:11:33.963221 | orchestrator | 2025-05-31 21:11:33 | INFO  | Task f2248baf-8912-4c2c-8aa6-44e3e92aedb2 is in state STARTED 2025-05-31 21:11:33.964546 | orchestrator | 2025-05-31 21:11:33 | INFO  | Task c55a287e-6394-4218-9805-cf406e738e90 is in state STARTED 2025-05-31 21:11:33.965157 | orchestrator | 2025-05-31 21:11:33 | INFO  | Task 838d4119-69f0-47d6-82bf-363acc7214f6 is in state STARTED 2025-05-31 21:11:33.965934 | orchestrator | 2025-05-31 21:11:33 | INFO  | Task 73eed275-4f68-4766-824f-c892735403da is in state STARTED 2025-05-31 21:11:33.965952 | orchestrator | 2025-05-31 21:11:33 | INFO  | Wait 1 second(s) until the next check 2025-05-31 21:11:36.992748 | orchestrator | 2025-05-31 21:11:36 | INFO  | Task f2248baf-8912-4c2c-8aa6-44e3e92aedb2 is in state STARTED 2025-05-31 21:11:36.994010 | orchestrator | 2025-05-31 21:11:36 | INFO  | Task c55a287e-6394-4218-9805-cf406e738e90 is in state STARTED 2025-05-31 21:11:36.995440 | orchestrator | 2025-05-31 21:11:36 | INFO  | Task 838d4119-69f0-47d6-82bf-363acc7214f6 is in state STARTED 2025-05-31 21:11:36.997080 | orchestrator | 2025-05-31 21:11:36 | INFO  | Task 73eed275-4f68-4766-824f-c892735403da is in state STARTED 2025-05-31 21:11:36.997146 | orchestrator | 2025-05-31 21:11:36 | INFO  | Wait 1 second(s) until the next check 2025-05-31 21:11:40.029006 | orchestrator | 2025-05-31 21:11:40 | INFO  | Task f2248baf-8912-4c2c-8aa6-44e3e92aedb2 is in state STARTED 2025-05-31 21:11:40.030326 | orchestrator | 2025-05-31 21:11:40 | INFO  | Task c55a287e-6394-4218-9805-cf406e738e90 is in state STARTED 2025-05-31 21:11:40.030951 | orchestrator | 2025-05-31 21:11:40 | INFO  | Task 838d4119-69f0-47d6-82bf-363acc7214f6 is in state STARTED 2025-05-31 21:11:40.031613 | orchestrator | 2025-05-31 21:11:40 | INFO  | Task 73eed275-4f68-4766-824f-c892735403da is in state STARTED 2025-05-31 21:11:40.031655 | orchestrator | 2025-05-31 21:11:40 | INFO  | Wait 1 second(s) until the next check 2025-05-31 21:11:43.054782 | orchestrator | 2025-05-31 21:11:43 | INFO  | Task f2248baf-8912-4c2c-8aa6-44e3e92aedb2 is in state STARTED 2025-05-31 21:11:43.055148 | orchestrator | 2025-05-31 21:11:43 | INFO  | Task c55a287e-6394-4218-9805-cf406e738e90 is in state STARTED 2025-05-31 21:11:43.056003 | orchestrator | 2025-05-31 21:11:43 | INFO  | Task 838d4119-69f0-47d6-82bf-363acc7214f6 is in state STARTED 2025-05-31 21:11:43.056659 | orchestrator | 2025-05-31 21:11:43 | INFO  | Task 73eed275-4f68-4766-824f-c892735403da is in state STARTED 2025-05-31 21:11:43.056791 | orchestrator | 2025-05-31 21:11:43 | INFO  | Wait 1 second(s) until the next check 2025-05-31 21:11:46.082847 | orchestrator | 2025-05-31 21:11:46 | INFO  | Task f2248baf-8912-4c2c-8aa6-44e3e92aedb2 is in state STARTED 2025-05-31 21:11:46.083250 | orchestrator | 2025-05-31 21:11:46 | INFO  | Task c55a287e-6394-4218-9805-cf406e738e90 is in state STARTED 2025-05-31 21:11:46.083997 | orchestrator | 2025-05-31 21:11:46 | INFO  | Task 838d4119-69f0-47d6-82bf-363acc7214f6 is in state STARTED 2025-05-31 21:11:46.084697 | orchestrator | 2025-05-31 21:11:46 | INFO  | Task 
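
Annotation: the repeated `Task <uuid> is in state STARTED` lines here are the osism client polling the Celery tasks behind the apply run until each leaves the STARTED state. The shape of that wait loop is roughly the following sketch; the `get_task_state` callable is hypothetical, standing in for whatever result-backend query the real tool performs:

    import time

    def wait_for_tasks(task_ids, get_task_state, interval=1.0):
        """Poll until every task reaches a terminal state (SUCCESS/FAILURE)."""
        pending = set(task_ids)
        while pending:
            for task_id in sorted(pending):
                state = get_task_state(task_id)  # e.g. a Celery result lookup
                print(f"Task {task_id} is in state {state}")
                if state in ("SUCCESS", "FAILURE"):
                    pending.discard(task_id)
            if pending:
                print(f"Wait {int(interval)} second(s) until the next check")
                time.sleep(interval)
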
73eed275-4f68-4766-824f-c892735403da is in state STARTED 2025-05-31 21:11:46.084729 | orchestrator | 2025-05-31 21:11:46 | INFO  | Wait 1 second(s) until the next check 2025-05-31 21:11:49.122168 | orchestrator | 2025-05-31 21:11:49 | INFO  | Task f2248baf-8912-4c2c-8aa6-44e3e92aedb2 is in state STARTED 2025-05-31 21:11:49.122403 | orchestrator | 2025-05-31 21:11:49 | INFO  | Task c55a287e-6394-4218-9805-cf406e738e90 is in state STARTED 2025-05-31 21:11:49.124289 | orchestrator | 2025-05-31 21:11:49 | INFO  | Task 838d4119-69f0-47d6-82bf-363acc7214f6 is in state STARTED 2025-05-31 21:11:49.125291 | orchestrator | 2025-05-31 21:11:49 | INFO  | Task 73eed275-4f68-4766-824f-c892735403da is in state STARTED 2025-05-31 21:11:49.125314 | orchestrator | 2025-05-31 21:11:49 | INFO  | Wait 1 second(s) until the next check 2025-05-31 21:11:52.151064 | orchestrator | 2025-05-31 21:11:52 | INFO  | Task f2248baf-8912-4c2c-8aa6-44e3e92aedb2 is in state SUCCESS 2025-05-31 21:11:52.151759 | orchestrator | 2025-05-31 21:11:52.151796 | orchestrator | 2025-05-31 21:11:52.151809 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-31 21:11:52.151822 | orchestrator | 2025-05-31 21:11:52.151833 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-31 21:11:52.151964 | orchestrator | Saturday 31 May 2025 21:09:54 +0000 (0:00:00.336) 0:00:00.336 ********** 2025-05-31 21:11:52.151977 | orchestrator | ok: [testbed-node-0] 2025-05-31 21:11:52.151989 | orchestrator | ok: [testbed-node-1] 2025-05-31 21:11:52.152291 | orchestrator | ok: [testbed-node-2] 2025-05-31 21:11:52.152304 | orchestrator | 2025-05-31 21:11:52.152316 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-31 21:11:52.152327 | orchestrator | Saturday 31 May 2025 21:09:55 +0000 (0:00:00.526) 0:00:00.863 ********** 2025-05-31 21:11:52.152338 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2025-05-31 21:11:52.152349 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2025-05-31 21:11:52.152360 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2025-05-31 21:11:52.152371 | orchestrator | 2025-05-31 21:11:52.152382 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2025-05-31 21:11:52.152393 | orchestrator | 2025-05-31 21:11:52.152404 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-05-31 21:11:52.152478 | orchestrator | Saturday 31 May 2025 21:09:55 +0000 (0:00:00.491) 0:00:01.354 ********** 2025-05-31 21:11:52.152490 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 21:11:52.152502 | orchestrator | 2025-05-31 21:11:52.152513 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2025-05-31 21:11:52.152524 | orchestrator | Saturday 31 May 2025 21:09:56 +0000 (0:00:00.596) 0:00:01.951 ********** 2025-05-31 21:11:52.152552 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2025-05-31 21:11:52.152563 | orchestrator | 2025-05-31 21:11:52.152574 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2025-05-31 21:11:52.152611 | orchestrator | Saturday 31 May 2025 21:09:59 +0000 (0:00:03.223) 0:00:05.175 ********** 2025-05-31 21:11:52.152623 | orchestrator | 
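
Annotation: the service-ks-register tasks around this point populate the Keystone catalog for barbican: a service entry of type key-manager, internal and public endpoints, a service user, and role grants. The playbook does this with Ansible modules; a rough openstacksdk equivalent of the service-and-endpoint part might look like this sketch (the cloud name and region are placeholders, the URLs are taken from the log):

    import openstack

    # Connect using a clouds.yaml entry; "testbed" is a placeholder cloud name.
    conn = openstack.connect(cloud="testbed")

    # Catalog service entry: barbican, type key-manager.
    service = conn.identity.create_service(name="barbican", type="key-manager")

    # Internal and public endpoints, mirroring the URLs shown below.
    for interface, url in [
        ("internal", "https://api-int.testbed.osism.xyz:9311"),
        ("public", "https://api.testbed.osism.xyz:9311"),
    ]:
        conn.identity.create_endpoint(
            service_id=service.id,
            interface=interface,
            url=url,
            region_id="RegionOne",  # assumption; the log does not show the region
        )
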
changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2025-05-31 21:11:52.152634 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2025-05-31 21:11:52.152644 | orchestrator | 2025-05-31 21:11:52.152655 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2025-05-31 21:11:52.152666 | orchestrator | Saturday 31 May 2025 21:10:05 +0000 (0:00:06.190) 0:00:11.365 ********** 2025-05-31 21:11:52.152677 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-31 21:11:52.152688 | orchestrator | 2025-05-31 21:11:52.152699 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2025-05-31 21:11:52.152710 | orchestrator | Saturday 31 May 2025 21:10:08 +0000 (0:00:03.124) 0:00:14.490 ********** 2025-05-31 21:11:52.152720 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-31 21:11:52.152731 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2025-05-31 21:11:52.152742 | orchestrator | 2025-05-31 21:11:52.152753 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2025-05-31 21:11:52.152763 | orchestrator | Saturday 31 May 2025 21:10:12 +0000 (0:00:03.533) 0:00:18.023 ********** 2025-05-31 21:11:52.152875 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-31 21:11:52.152889 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2025-05-31 21:11:52.152900 | orchestrator | changed: [testbed-node-0] => (item=creator) 2025-05-31 21:11:52.152911 | orchestrator | changed: [testbed-node-0] => (item=observer) 2025-05-31 21:11:52.152922 | orchestrator | changed: [testbed-node-0] => (item=audit) 2025-05-31 21:11:52.152933 | orchestrator | 2025-05-31 21:11:52.152944 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2025-05-31 21:11:52.152955 | orchestrator | Saturday 31 May 2025 21:10:26 +0000 (0:00:14.450) 0:00:32.474 ********** 2025-05-31 21:11:52.152966 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2025-05-31 21:11:52.152976 | orchestrator | 2025-05-31 21:11:52.152987 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2025-05-31 21:11:52.153014 | orchestrator | Saturday 31 May 2025 21:10:31 +0000 (0:00:04.885) 0:00:37.359 ********** 2025-05-31 21:11:52.153030 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-31 21:11:52.153059 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-31 21:11:52.153082 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-31 21:11:52.153095 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-31 21:11:52.153115 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-31 21:11:52.153127 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-31 21:11:52.153149 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-31 21:11:52.153162 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-31 21:11:52.153180 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-31 21:11:52.153192 | orchestrator | 2025-05-31 21:11:52.153203 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2025-05-31 21:11:52.153214 | orchestrator | Saturday 31 May 2025 21:10:34 +0000 (0:00:02.655) 0:00:40.015 ********** 2025-05-31 21:11:52.153224 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2025-05-31 21:11:52.153235 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2025-05-31 21:11:52.153246 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2025-05-31 21:11:52.153256 | orchestrator | 2025-05-31 21:11:52.153267 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2025-05-31 21:11:52.153277 | orchestrator | Saturday 31 May 2025 21:10:35 +0000 (0:00:01.125) 0:00:41.141 ********** 2025-05-31 21:11:52.153288 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:11:52.153299 | orchestrator | 2025-05-31 21:11:52.153309 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2025-05-31 21:11:52.153320 | orchestrator | Saturday 31 May 2025 21:10:35 +0000 (0:00:00.128) 0:00:41.269 ********** 2025-05-31 21:11:52.153330 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:11:52.153341 | 
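
Each container spec in the items above carries a healthcheck block (interval 30, retries 3, start_period 5, timeout 30): the API container probes its own HTTP port with healthcheck_curl, while the keystone-listener and worker use healthcheck_port against 5672. A rough Python stand-in for the healthcheck_curl case only, not kolla's actual script:

    import time
    import urllib.request

    def probe(url: str, retries: int = 3, timeout: int = 30, interval: int = 30) -> bool:
        """Succeed if the API answers at all, retrying like the container healthcheck."""
        for _ in range(retries):
            try:
                with urllib.request.urlopen(url, timeout=timeout) as resp:
                    return resp.status < 500
            except OSError:
                time.sleep(interval)
        return False

    probe("http://192.168.16.10:9311")  # the barbican-api bind address on testbed-node-0
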
orchestrator | skipping: [testbed-node-1] 2025-05-31 21:11:52.153352 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:11:52.153363 | orchestrator | 2025-05-31 21:11:52.153373 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-05-31 21:11:52.153384 | orchestrator | Saturday 31 May 2025 21:10:35 +0000 (0:00:00.412) 0:00:41.682 ********** 2025-05-31 21:11:52.153395 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-31 21:11:52.153405 | orchestrator | 2025-05-31 21:11:52.153416 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2025-05-31 21:11:52.153427 | orchestrator | Saturday 31 May 2025 21:10:36 +0000 (0:00:00.867) 0:00:42.550 ********** 2025-05-31 21:11:52.153511 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-31 21:11:52.153538 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-31 21:11:52.153560 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 
'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-31 21:11:52.153573 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-31 21:11:52.153586 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-31 21:11:52.153604 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-31 21:11:52.153618 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-31 21:11:52.153645 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-31 21:11:52.153659 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-31 21:11:52.153671 | orchestrator | 2025-05-31 21:11:52.153683 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2025-05-31 21:11:52.153695 | orchestrator | Saturday 31 May 2025 21:10:41 +0000 (0:00:04.414) 0:00:46.965 ********** 2025-05-31 21:11:52.153709 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-31 21:11:52.153721 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-31 21:11:52.153739 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-31 21:11:52.153759 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:11:52.153779 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-31 21:11:52.153792 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-31 21:11:52.153804 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-31 21:11:52.153816 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:11:52.153831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-31 21:11:52.153842 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-31 21:11:52.153896 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-31 21:11:52.153915 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:11:52.153926 | orchestrator | 2025-05-31 21:11:52.153937 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2025-05-31 21:11:52.153948 | orchestrator | Saturday 31 May 2025 21:10:43 +0000 (0:00:01.825) 0:00:48.790 ********** 2025-05-31 21:11:52.153968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-31 21:11:52.153980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-31 21:11:52.153992 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-31 21:11:52.154003 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:11:52.154014 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-31 21:11:52.154104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-31 21:11:52.154117 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-31 21:11:52.154128 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:11:52.154148 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-31 21:11:52.154160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-31 21:11:52.154172 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-31 21:11:52.154183 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:11:52.154193 | orchestrator | 2025-05-31 21:11:52.154204 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2025-05-31 21:11:52.154215 | orchestrator | Saturday 31 May 2025 21:10:44 +0000 (0:00:01.552) 0:00:50.342 ********** 2025-05-31 21:11:52.154232 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-31 21:11:52.154256 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-31 21:11:52.154268 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-31 21:11:52.154279 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-31 21:11:52.154291 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-31 21:11:52.154335 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-31 21:11:52.154352 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-31 21:11:52.154370 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-31 21:11:52.154381 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-31 21:11:52.154392 | orchestrator | 2025-05-31 21:11:52.154404 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2025-05-31 21:11:52.154415 | orchestrator | Saturday 31 May 2025 21:10:48 +0000 (0:00:04.162) 0:00:54.505 ********** 2025-05-31 21:11:52.154426 | orchestrator | changed: [testbed-node-1] 2025-05-31 21:11:52.154437 | orchestrator | changed: [testbed-node-0] 2025-05-31 21:11:52.154448 | orchestrator | changed: [testbed-node-2] 2025-05-31 21:11:52.154458 | orchestrator | 2025-05-31 21:11:52.154469 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2025-05-31 21:11:52.154480 | orchestrator | Saturday 31 May 2025 21:10:51 +0000 (0:00:02.533) 0:00:57.039 ********** 2025-05-31 21:11:52.154491 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-31 21:11:52.154502 | orchestrator | 2025-05-31 21:11:52.154513 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2025-05-31 21:11:52.154523 | orchestrator | Saturday 31 May 2025 21:10:53 +0000 (0:00:01.764) 0:00:58.804 ********** 2025-05-31 21:11:52.154534 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:11:52.154545 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:11:52.154556 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:11:52.154566 | orchestrator | 2025-05-31 21:11:52.154577 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2025-05-31 21:11:52.154588 | orchestrator | Saturday 31 May 2025 21:10:54 +0000 (0:00:01.115) 0:00:59.920 ********** 2025-05-31 21:11:52.154599 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 
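
The config.json files copied in the task above are what kolla's kolla_set_configs helper reads inside each container: a start command plus the files to install from /var/lib/kolla/config_files/. A sketch of the shape, with illustrative values only (the real content is rendered from kolla-ansible templates):

    import json

    config = {
        "command": "barbican-worker",  # illustrative; each service has its own command
        "config_files": [
            {
                "source": "/var/lib/kolla/config_files/barbican.conf",
                "dest": "/etc/barbican/barbican.conf",
                "owner": "barbican",
                "perm": "0600",
            }
        ],
    }
    with open("config.json", "w") as f:
        json.dump(config, f, indent=2)
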
'tls_backend': 'no'}}}}) 2025-05-31 21:11:52.154622 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-31 21:11:52.154641 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-31 21:11:52.154653 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-31 21:11:52.154665 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-31 21:11:52.154683 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-31 21:11:52.154695 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-31 21:11:52.154711 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-31 21:11:52.154724 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-31 21:11:52.154735 | orchestrator | 2025-05-31 21:11:52.154746 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2025-05-31 21:11:52.154757 | orchestrator | Saturday 31 May 2025 21:11:03 +0000 (0:00:09.284) 0:01:09.204 ********** 2025-05-31 21:11:52.154776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-31 
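
The barbican.conf distributed a few tasks up (the slowest copy task in this play, at just over nine seconds across three nodes) is plain oslo.config INI. An illustrative fragment written with configparser, with hypothetical connection strings standing in for the templated ones:

    import configparser

    cfg = configparser.ConfigParser()
    cfg["DEFAULT"] = {
        "bind_port": "9311",
        "transport_url": "rabbit://openstack:...@192.168.16.10:5672//",  # hypothetical
    }
    cfg["database"] = {
        "connection": "mysql+pymysql://barbican:...@192.168.16.9/barbican",  # hypothetical
    }
    with open("barbican.conf", "w") as f:
        cfg.write(f)
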
21:11:52.154788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-31 21:11:52.154806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-31 21:11:52.154817 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:11:52.154833 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-31 21:11:52.154845 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-31 21:11:52.155022 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-31 21:11:52.155035 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:11:52.155047 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-31 21:11:52.155067 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-31 21:11:52.155079 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-31 21:11:52.155090 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:11:52.155101 | orchestrator | 2025-05-31 21:11:52.155112 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2025-05-31 21:11:52.155123 | orchestrator | Saturday 31 May 2025 21:11:04 +0000 (0:00:01.063) 0:01:10.268 ********** 2025-05-31 21:11:52.155140 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 
'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-31 21:11:52.155159 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-31 21:11:52.155171 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-31 21:11:52.155189 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-31 21:11:52.155201 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-31 21:11:52.155216 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-31 21:11:52.155229 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-31 21:11:52.155248 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-31 21:11:52.155260 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-31 21:11:52.155283 | orchestrator | 2025-05-31 21:11:52.155294 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-05-31 21:11:52.155305 | orchestrator | Saturday 31 May 2025 21:11:07 +0000 (0:00:02.681) 0:01:12.949 ********** 2025-05-31 21:11:52.155314 | orchestrator | skipping: [testbed-node-0] 2025-05-31 21:11:52.155324 | orchestrator | skipping: [testbed-node-1] 2025-05-31 21:11:52.155333 | orchestrator | skipping: [testbed-node-2] 2025-05-31 21:11:52.155343 | orchestrator | 2025-05-31 21:11:52.155352 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2025-05-31 21:11:52.155362 | orchestrator | Saturday 31 May 2025 21:11:07 +0000 (0:00:00.273) 0:01:13.223 ********** 2025-05-31 21:11:52.155371 | orchestrator | changed: [testbed-node-0] 2025-05-31 21:11:52.155381 | orchestrator | 2025-05-31 21:11:52.155390 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2025-05-31 21:11:52.155400 | orchestrator | Saturday 31 May 2025 21:11:09 
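
The two database tasks at this point create the barbican schema and its user on the MariaDB cluster, delegated to testbed-node-0. A minimal equivalent with pymysql (a third-party driver), with the VIP address and credentials as placeholders:

    import pymysql  # assumption: MariaDB VIP reachable; address below is a placeholder

    conn = pymysql.connect(host="192.168.16.9", user="root", password="...")
    with conn.cursor() as cur:
        cur.execute("CREATE DATABASE IF NOT EXISTS barbican")
        cur.execute("CREATE USER IF NOT EXISTS 'barbican'@'%%' IDENTIFIED BY %s", ("...",))
        cur.execute("GRANT ALL PRIVILEGES ON barbican.* TO 'barbican'@'%'")
    conn.commit()
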
2025-05-31 21:11:52.155410 | orchestrator | changed: [testbed-node-0]
2025-05-31 21:11:52.155419 | orchestrator |
2025-05-31 21:11:52.155428 | orchestrator | TASK [barbican : Running barbican bootstrap container] *************************
2025-05-31 21:11:52.155438 | orchestrator | Saturday 31 May 2025 21:11:11 +0000 (0:00:02.197) 0:01:17.489 **********
2025-05-31 21:11:52.155448 | orchestrator | changed: [testbed-node-0]
2025-05-31 21:11:52.155457 | orchestrator |
2025-05-31 21:11:52.155467 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2025-05-31 21:11:52.155476 | orchestrator | Saturday 31 May 2025 21:11:22 +0000 (0:00:10.832) 0:01:28.322 **********
2025-05-31 21:11:52.155486 | orchestrator |
2025-05-31 21:11:52.155495 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2025-05-31 21:11:52.155505 | orchestrator | Saturday 31 May 2025 21:11:22 +0000 (0:00:00.056) 0:01:28.379 **********
2025-05-31 21:11:52.155514 | orchestrator |
2025-05-31 21:11:52.155524 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2025-05-31 21:11:52.155533 | orchestrator | Saturday 31 May 2025 21:11:22 +0000 (0:00:00.056) 0:01:28.435 **********
2025-05-31 21:11:52.155543 | orchestrator |
2025-05-31 21:11:52.155552 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ********************
2025-05-31 21:11:52.155562 | orchestrator | Saturday 31 May 2025 21:11:22 +0000 (0:00:00.058) 0:01:28.493 **********
2025-05-31 21:11:52.155572 | orchestrator | changed: [testbed-node-0]
2025-05-31 21:11:52.155581 | orchestrator | changed: [testbed-node-1]
2025-05-31 21:11:52.155591 | orchestrator | changed: [testbed-node-2]
2025-05-31 21:11:52.155600 | orchestrator |
2025-05-31 21:11:52.155610 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ******
2025-05-31 21:11:52.155619 | orchestrator | Saturday 31 May 2025 21:11:30 +0000 (0:00:07.445) 0:01:35.939 **********
2025-05-31 21:11:52.155629 | orchestrator | changed: [testbed-node-0]
2025-05-31 21:11:52.155638 | orchestrator | changed: [testbed-node-2]
2025-05-31 21:11:52.155648 | orchestrator | changed: [testbed-node-1]
2025-05-31 21:11:52.155657 | orchestrator |
2025-05-31 21:11:52.155722 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] *****************
2025-05-31 21:11:52.155735 | orchestrator | Saturday 31 May 2025 21:11:40 +0000 (0:00:09.834) 0:01:45.773 **********
2025-05-31 21:11:52.155745 | orchestrator | changed: [testbed-node-1]
2025-05-31 21:11:52.155754 | orchestrator | changed: [testbed-node-2]
2025-05-31 21:11:52.155764 | orchestrator | changed: [testbed-node-0]
2025-05-31 21:11:52.155773 | orchestrator |
2025-05-31 21:11:52.155782 | orchestrator | PLAY RECAP *********************************************************************
2025-05-31 21:11:52.155793 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-05-31 21:11:52.155846 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-05-31 21:11:52.155880 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-05-31 21:11:52.155890 | orchestrator |
2025-05-31 21:11:52.155899 | orchestrator |
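The container items in the play above each carry a kolla-style healthcheck dict: interval, retries, start_period and timeout are seconds encoded as strings, and the test is a CMD-SHELL invocation of kolla's healthcheck_port helper. kolla-ansible's own container module applies these values internally; the docker-py sketch below only illustrates what an equivalent Docker-level healthcheck would look like. The nanosecond conversion and the run() call are illustrative assumptions, not OSISM code.

```python
import docker
from docker.types import Healthcheck

# Healthcheck values exactly as they appear in the barbican-worker item above.
kolla_healthcheck = {
    "interval": "30",
    "retries": "3",
    "start_period": "5",
    "test": ["CMD-SHELL", "healthcheck_port barbican-worker 5672"],
    "timeout": "30",
}

NS = 10**9  # docker-py expects durations in nanoseconds

healthcheck = Healthcheck(
    test=kolla_healthcheck["test"],
    interval=int(kolla_healthcheck["interval"]) * NS,
    timeout=int(kolla_healthcheck["timeout"]) * NS,
    retries=int(kolla_healthcheck["retries"]),
    start_period=int(kolla_healthcheck["start_period"]) * NS,
)

client = docker.from_env()
# Illustrative only: kolla-ansible mounts the config volumes listed in the
# item and uses its own container module rather than a plain docker run.
client.containers.run(
    "registry.osism.tech/kolla/barbican-worker:2024.2",
    name="barbican_worker",
    detach=True,
    healthcheck=healthcheck,
)
```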
2025-05-31 21:11:52.155909 | orchestrator | TASKS RECAP ********************************************************************
2025-05-31 21:11:52.155919 | orchestrator | Saturday 31 May 2025 21:11:49 +0000 (0:00:09.119) 0:01:54.893 **********
2025-05-31 21:11:52.155928 | orchestrator | ===============================================================================
2025-05-31 21:11:52.155938 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 14.45s
2025-05-31 21:11:52.155954 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 10.83s
2025-05-31 21:11:52.155964 | orchestrator | barbican : Restart barbican-keystone-listener container ----------------- 9.83s
2025-05-31 21:11:52.155973 | orchestrator | barbican : Copying over barbican.conf ----------------------------------- 9.28s
2025-05-31 21:11:52.155983 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 9.12s
2025-05-31 21:11:52.155992 | orchestrator | barbican : Restart barbican-api container ------------------------------- 7.45s
2025-05-31 21:11:52.156002 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.19s
2025-05-31 21:11:52.156011 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.89s
2025-05-31 21:11:52.156021 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 4.41s
2025-05-31 21:11:52.156030 | orchestrator | barbican : Copying over config.json files for services ------------------ 4.16s
2025-05-31 21:11:52.156040 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 3.53s
2025-05-31 21:11:52.156049 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.22s
2025-05-31 21:11:52.156058 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.12s
2025-05-31 21:11:52.156068 | orchestrator | barbican : Check barbican containers ------------------------------------ 2.68s
2025-05-31 21:11:52.156077 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.66s
2025-05-31 21:11:52.156087 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.53s
2025-05-31 21:11:52.156096 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.20s
2025-05-31 21:11:52.156106 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.07s
2025-05-31 21:11:52.156115 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS certificate --- 1.83s
2025-05-31 21:11:52.156125 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 1.76s
2025-05-31 21:11:52.156134 | orchestrator | 2025-05-31 21:11:52 | INFO  | Task c55a287e-6394-4218-9805-cf406e738e90 is in state STARTED
2025-05-31 21:11:52.156144 | orchestrator | 2025-05-31 21:11:52 | INFO  | Task b3b86d0c-4b78-4919-9018-94950a396a9d is in state STARTED
2025-05-31 21:11:52.156153 | orchestrator | 2025-05-31 21:11:52 | INFO  | Task 838d4119-69f0-47d6-82bf-363acc7214f6 is in state STARTED
2025-05-31 21:11:52.156163 | orchestrator | 2025-05-31 21:11:52 | INFO  | Task 73eed275-4f68-4766-824f-c892735403da is in state STARTED
2025-05-31 21:11:52.156173 | orchestrator | 2025-05-31 21:11:52 | INFO  | Wait 1 second(s) until the next check
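From here on the job does nothing but poll the OSISM manager for the state of the remaining deployment tasks. The loop shape is simple; a minimal Python sketch of the same pattern follows, where get_task_state is a hypothetical callable standing in for whatever API the manager exposes (the real client lives in the osism tooling and is not shown in this log).

```python
import time

def wait_for_tasks(get_task_state, task_ids, interval=1.0):
    """Poll task states until none is left in STARTED, mirroring the log output."""
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):  # sorted() copies, so discard below is safe
            state = get_task_state(task_id)  # e.g. "STARTED" or "SUCCESS"
            print(f"Task {task_id} is in state {state}")
            if state != "STARTED":
                pending.discard(task_id)
        if pending:
            print(f"Wait {interval:.0f} second(s) until the next check")
            time.sleep(interval)
```

Note that this sketch has no overall deadline; in the job it is Zuul's run-phase timeout that eventually breaks the loop, as seen below.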
2025-05-31 21:11:55.183595 | orchestrator | 2025-05-31 21:11:55 | INFO  | Task c55a287e-6394-4218-9805-cf406e738e90 is in state STARTED
[... the same four tasks were reported in state STARTED every three seconds from 21:11:55 through 21:12:34 ...]
2025-05-31 21:12:37.764412 | orchestrator | 2025-05-31 21:12:37 | INFO  | Task c55a287e-6394-4218-9805-cf406e738e90 is in state STARTED
2025-05-31 21:12:37.764494 | orchestrator | 2025-05-31 21:12:37 | INFO  | Task b3b86d0c-4b78-4919-9018-94950a396a9d is in state SUCCESS
2025-05-31 21:12:37.765255 | orchestrator | 2025-05-31 21:12:37 | INFO  | Task 838d4119-69f0-47d6-82bf-363acc7214f6 is in state STARTED
2025-05-31 21:12:37.765351 | orchestrator | 2025-05-31 21:12:37 | INFO  | Task 73eed275-4f68-4766-824f-c892735403da is in state STARTED
2025-05-31 21:12:37.766145 | orchestrator | 2025-05-31 21:12:37 | INFO  | Task 271b6c72-ae69-4388-ae5b-82c162e46d2f is in state STARTED
2025-05-31 21:12:37.766158 | orchestrator | 2025-05-31 21:12:37 | INFO  | Wait 1 second(s) until the next check
[... tasks c55a287e, 838d4119, 73eed275 and 271b6c72 remained in state STARTED, polled every three seconds from 21:12:40 through 21:13:38 ...]
2025-05-31 21:13:41.632337 | orchestrator | 2025-05-31 21:13:41 | INFO  | Task c55a287e-6394-4218-9805-cf406e738e90 is in state STARTED
2025-05-31 21:13:41.634212 | orchestrator | 2025-05-31 21:13:41 | INFO  | Task 838d4119-69f0-47d6-82bf-363acc7214f6 is in state STARTED
2025-05-31 21:13:41.635769 | orchestrator | 2025-05-31 21:13:41 | INFO  | Task 73eed275-4f68-4766-824f-c892735403da is in state STARTED
2025-05-31 21:13:41.637413 | orchestrator | 2025-05-31 21:13:41 | INFO  | Task 271b6c72-ae69-4388-ae5b-82c162e46d2f is in state STARTED
2025-05-31 21:13:41.637544 | orchestrator | 2025-05-31 21:13:41 | INFO  | Wait 1 second(s) until the next check
2025-05-31 21:13:42.957114 | RUN END RESULT_TIMED_OUT: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
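The run phase hits Zuul's job timeout while four manager tasks are still in state STARTED, so the build is recorded as RESULT_TIMED_OUT rather than failed. The STARTED/SUCCESS strings match Celery's task-state vocabulary; assuming the manager dispatches these as Celery tasks with a result backend (an assumption based on that vocabulary, not something this log states), an individual task ID could be inspected like this:

```python
from celery import Celery

# Hypothetical broker/backend URLs; substitute the deployment's real ones.
app = Celery(broker="redis://localhost:6379/0",
             backend="redis://localhost:6379/1")

result = app.AsyncResult("c55a287e-6394-4218-9805-cf406e738e90")
print(result.state)  # PENDING, STARTED, SUCCESS, FAILURE, ...
```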
2025-05-31 21:13:42.959131 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-05-31 21:13:43.703881 |
2025-05-31 21:13:43.704176 | PLAY [Post output play]
2025-05-31 21:13:43.720164 |
2025-05-31 21:13:43.720306 | LOOP [stage-output : Register sources]
2025-05-31 21:13:43.790213 |
2025-05-31 21:13:43.790529 | TASK [stage-output : Check sudo]
2025-05-31 21:13:44.659334 | orchestrator | sudo: a password is required
2025-05-31 21:13:44.830379 | orchestrator | ok: Runtime: 0:00:00.010516
2025-05-31 21:13:44.846936 |
2025-05-31 21:13:44.847109 | LOOP [stage-output : Set source and destination for files and folders]
2025-05-31 21:13:44.888942 |
2025-05-31 21:13:44.889242 | TASK [stage-output : Build a list of source, dest dictionaries]
2025-05-31 21:13:44.957042 | orchestrator | ok
2025-05-31 21:13:44.966048 |
2025-05-31 21:13:44.966181 | LOOP [stage-output : Ensure target folders exist]
2025-05-31 21:13:45.463601 | orchestrator | ok: "docs"
2025-05-31 21:13:45.463995 |
2025-05-31 21:13:45.741933 | orchestrator | ok: "artifacts"
2025-05-31 21:13:46.005070 | orchestrator | ok: "logs"
2025-05-31 21:13:46.024943 |
2025-05-31 21:13:46.025149 | LOOP [stage-output : Copy files and folders to staging folder]
2025-05-31 21:13:46.064198 |
2025-05-31 21:13:46.064452 | TASK [stage-output : Make all log files readable]
2025-05-31 21:13:46.370703 | orchestrator | ok
2025-05-31 21:13:46.378559 |
2025-05-31 21:13:46.378684 | TASK [stage-output : Rename log files that match extensions_to_txt]
2025-05-31 21:13:46.423805 | orchestrator | skipping: Conditional result was False
2025-05-31 21:13:46.440425 |
2025-05-31 21:13:46.440682 | TASK [stage-output : Discover log files for compression]
2025-05-31 21:13:46.467557 | orchestrator | skipping: Conditional result was False
2025-05-31 21:13:46.479641 |
2025-05-31 21:13:46.479859 | LOOP [stage-output : Archive everything from logs]
2025-05-31 21:13:46.530203 |
2025-05-31 21:13:46.530449 | PLAY [Post cleanup play]
2025-05-31 21:13:46.543137 |
2025-05-31 21:13:46.543292 | TASK [Set cloud fact (Zuul deployment)]
2025-05-31 21:13:46.594194 | orchestrator | ok
2025-05-31 21:13:46.605046 |
2025-05-31 21:13:46.605275 | TASK [Set cloud fact (local deployment)]
2025-05-31 21:13:46.630066 | orchestrator | skipping: Conditional result was False
2025-05-31 21:13:46.646904 |
2025-05-31 21:13:46.647046 | TASK [Clean the cloud environment]
2025-05-31 21:13:47.485520 | orchestrator | 2025-05-31 21:13:47 - clean up servers
2025-05-31 21:13:48.227578 | orchestrator | 2025-05-31 21:13:48 - testbed-manager
2025-05-31 21:13:48.308341 | orchestrator | 2025-05-31 21:13:48 - testbed-node-4
2025-05-31 21:13:48.397106 | orchestrator | 2025-05-31 21:13:48 - testbed-node-5
2025-05-31 21:13:48.477619 | orchestrator | 2025-05-31 21:13:48 - testbed-node-3
2025-05-31 21:13:48.566416 | orchestrator | 2025-05-31 21:13:48 - testbed-node-0
2025-05-31 21:13:48.656319 | orchestrator | 2025-05-31 21:13:48 - testbed-node-2
2025-05-31 21:13:48.746673 | orchestrator | 2025-05-31 21:13:48 - testbed-node-1
2025-05-31 21:13:48.822543 | orchestrator | 2025-05-31 21:13:48 - clean up keypairs
2025-05-31 21:13:48.840934 | orchestrator | 2025-05-31 21:13:48 - testbed
2025-05-31 21:13:48.863070 | orchestrator | 2025-05-31 21:13:48 - wait for servers to be gone
2025-05-31 21:13:59.769940 | orchestrator | 2025-05-31 21:13:59 - clean up ports
2025-05-31 21:13:59.945937 | orchestrator | 2025-05-31 21:13:59 - 0e87a94f-a575-429f-bba7-ff0686505779
2025-05-31 21:14:00.185700 | orchestrator | 2025-05-31 21:14:00 - 1fdad407-5a05-46ac-b891-924f89da8f7e
2025-05-31 21:14:00.468566 | orchestrator | 2025-05-31 21:14:00 - 5077ea57-5eac-493d-9758-9ee65afdd45c
2025-05-31 21:14:00.671163 | orchestrator | 2025-05-31 21:14:00 - 5e48f730-4dc7-4539-a441-a4e9bf612ccb
2025-05-31 21:14:00.878071 | orchestrator | 2025-05-31 21:14:00 - c4494f32-79c3-450d-8989-6e387509b694
2025-05-31 21:14:01.073894 | orchestrator | 2025-05-31 21:14:01 - f9ba7197-867d-4253-9019-e1706212b0e6
2025-05-31 21:14:01.270482 | orchestrator | 2025-05-31 21:14:01 - ffd8c546-b409-458b-bdf3-86620c19fcd5
2025-05-31 21:14:01.624400 | orchestrator | 2025-05-31 21:14:01 - clean up volumes
2025-05-31 21:14:01.732502 | orchestrator | 2025-05-31 21:14:01 - testbed-volume-5-node-base
2025-05-31 21:14:01.770582 | orchestrator | 2025-05-31 21:14:01 - testbed-volume-0-node-base
2025-05-31 21:14:01.807799 | orchestrator | 2025-05-31 21:14:01 - testbed-volume-1-node-base
2025-05-31 21:14:01.851964 | orchestrator | 2025-05-31 21:14:01 - testbed-volume-4-node-base
2025-05-31 21:14:01.895634 | orchestrator | 2025-05-31 21:14:01 - testbed-volume-manager-base
2025-05-31 21:14:01.937629 | orchestrator | 2025-05-31 21:14:01 - testbed-volume-2-node-base
2025-05-31 21:14:01.977921 | orchestrator | 2025-05-31 21:14:01 - testbed-volume-3-node-base
2025-05-31 21:14:02.018206 | orchestrator | 2025-05-31 21:14:02 - testbed-volume-3-node-3
2025-05-31 21:14:02.057801 | orchestrator | 2025-05-31 21:14:02 - testbed-volume-8-node-5
2025-05-31 21:14:02.098292 | orchestrator | 2025-05-31 21:14:02 - testbed-volume-7-node-4
2025-05-31 21:14:02.138726 | orchestrator | 2025-05-31 21:14:02 - testbed-volume-4-node-4
2025-05-31 21:14:02.180185 | orchestrator | 2025-05-31 21:14:02 - testbed-volume-5-node-5
2025-05-31 21:14:02.219401 | orchestrator | 2025-05-31 21:14:02 - testbed-volume-2-node-5
2025-05-31 21:14:02.262559 | orchestrator | 2025-05-31 21:14:02 - testbed-volume-0-node-3
2025-05-31 21:14:02.304226 | orchestrator | 2025-05-31 21:14:02 - testbed-volume-1-node-4
2025-05-31 21:14:02.342503 | orchestrator | 2025-05-31 21:14:02 - testbed-volume-6-node-3
2025-05-31 21:14:02.381444 | orchestrator | 2025-05-31 21:14:02 - disconnect routers
2025-05-31 21:14:02.444631 | orchestrator | 2025-05-31 21:14:02 - testbed
2025-05-31 21:14:03.669609 | orchestrator | 2025-05-31 21:14:03 - clean up subnets
2025-05-31 21:14:03.722568 | orchestrator | 2025-05-31 21:14:03 - subnet-testbed-management
2025-05-31 21:14:03.896479 | orchestrator | 2025-05-31 21:14:03 - clean up networks
2025-05-31 21:14:04.041812 | orchestrator | 2025-05-31 21:14:04 - net-testbed-management
2025-05-31 21:14:04.302712 | orchestrator | 2025-05-31 21:14:04 - clean up security groups
2025-05-31 21:14:04.350596 | orchestrator | 2025-05-31 21:14:04 - testbed-management
2025-05-31 21:14:04.459760 | orchestrator | 2025-05-31 21:14:04 - testbed-node
2025-05-31 21:14:04.571682 | orchestrator | 2025-05-31 21:14:04 - clean up floating ips
2025-05-31 21:14:04.609992 | orchestrator | 2025-05-31 21:14:04 - 81.163.193.24
2025-05-31 21:14:04.966258 | orchestrator | 2025-05-31 21:14:04 - clean up routers
2025-05-31 21:14:05.026243 | orchestrator | 2025-05-31 21:14:05 - testbed
2025-05-31 21:14:06.202783 | orchestrator | ok: Runtime: 0:00:18.804425
2025-05-31 21:14:06.207243 |
2025-05-31 21:14:06.207405 | PLAY RECAP
2025-05-31 21:14:06.207532 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2025-05-31 21:14:06.207627 |
2025-05-31 21:14:06.372852 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
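The cleanup task tears resources down in dependency order: servers and keypairs first, then, after waiting for the servers to be gone, ports, volumes, router interfaces, subnets, networks, security groups, floating IPs and finally the routers themselves. A condensed openstacksdk sketch of the same ordering follows, with only the first few steps spelled out; the real script lives in the osism/testbed repository, and the "testbed" name filter is taken from the log above.

```python
import openstack

conn = openstack.connect(cloud="testbed")

# clean up servers
servers = [s for s in conn.compute.servers() if s.name.startswith("testbed")]
for server in servers:
    conn.compute.delete_server(server)

# clean up keypairs
for keypair in conn.compute.keypairs():
    if keypair.name.startswith("testbed"):
        conn.compute.delete_keypair(keypair)

# wait for servers to be gone before touching ports and volumes,
# otherwise deleting still-attached resources would fail
for server in servers:
    conn.compute.wait_for_delete(server)

# ports, volumes, router interfaces, subnets, networks, security groups,
# floating IPs and routers would follow in exactly this order
```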
2025-05-31 21:14:06.373835 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-05-31 21:14:07.109627 |
2025-05-31 21:14:07.109788 | PLAY [Cleanup play]
2025-05-31 21:14:07.126314 |
2025-05-31 21:14:07.126450 | TASK [Set cloud fact (Zuul deployment)]
2025-05-31 21:14:07.195371 | orchestrator | ok
2025-05-31 21:14:07.204747 |
2025-05-31 21:14:07.204978 | TASK [Set cloud fact (local deployment)]
2025-05-31 21:14:07.240179 | orchestrator | skipping: Conditional result was False
2025-05-31 21:14:07.255420 |
2025-05-31 21:14:07.255562 | TASK [Clean the cloud environment]
2025-05-31 21:14:08.364897 | orchestrator | 2025-05-31 21:14:08 - clean up servers
2025-05-31 21:14:08.844428 | orchestrator | 2025-05-31 21:14:08 - clean up keypairs
2025-05-31 21:14:08.863977 | orchestrator | 2025-05-31 21:14:08 - wait for servers to be gone
2025-05-31 21:14:08.907292 | orchestrator | 2025-05-31 21:14:08 - clean up ports
2025-05-31 21:14:08.981356 | orchestrator | 2025-05-31 21:14:08 - clean up volumes
2025-05-31 21:14:09.040245 | orchestrator | 2025-05-31 21:14:09 - disconnect routers
2025-05-31 21:14:09.070062 | orchestrator | 2025-05-31 21:14:09 - clean up subnets
2025-05-31 21:14:09.091070 | orchestrator | 2025-05-31 21:14:09 - clean up networks
2025-05-31 21:14:09.644026 | orchestrator | 2025-05-31 21:14:09 - clean up security groups
2025-05-31 21:14:09.679470 | orchestrator | 2025-05-31 21:14:09 - clean up floating ips
2025-05-31 21:14:09.719205 | orchestrator | 2025-05-31 21:14:09 - clean up routers
2025-05-31 21:14:10.293141 | orchestrator | ok: Runtime: 0:00:01.721907
2025-05-31 21:14:10.297333 |
2025-05-31 21:14:10.297514 | PLAY RECAP
2025-05-31 21:14:10.297648 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2025-05-31 21:14:10.297714 |
2025-05-31 21:14:10.457714 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-05-31 21:14:10.458725 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-05-31 21:14:11.221908 |
2025-05-31 21:14:11.222081 | PLAY [Base post-fetch]
2025-05-31 21:14:11.239316 |
2025-05-31 21:14:11.239466 | TASK [fetch-output : Set log path for multiple nodes]
2025-05-31 21:14:11.295400 | orchestrator | skipping: Conditional result was False
2025-05-31 21:14:11.309511 |
2025-05-31 21:14:11.309714 | TASK [fetch-output : Set log path for single node]
2025-05-31 21:14:11.349015 | orchestrator | ok
2025-05-31 21:14:11.358003 |
2025-05-31 21:14:11.358146 | LOOP [fetch-output : Ensure local output dirs]
2025-05-31 21:14:11.842358 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/4f3fe0e9d0624a79acaf86d3d81ffecd/work/logs"
2025-05-31 21:14:12.115079 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/4f3fe0e9d0624a79acaf86d3d81ffecd/work/artifacts"
2025-05-31 21:14:12.399646 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/4f3fe0e9d0624a79acaf86d3d81ffecd/work/docs"
2025-05-31 21:14:12.421847 |
2025-05-31 21:14:12.422069 | LOOP [fetch-output : Collect logs, artifacts and docs]
2025-05-31 21:14:13.476550 | orchestrator | changed: .d..t...... ./
2025-05-31 21:14:13.476912 | orchestrator | changed: All items complete
2025-05-31 21:14:13.476971 |
2025-05-31 21:14:14.219014 | orchestrator | changed: .d..t...... ./
2025-05-31 21:14:14.985426 | orchestrator | changed: .d..t...... ./
2025-05-31 21:14:15.003649 |
2025-05-31 21:14:15.003789 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2025-05-31 21:14:15.032153 | orchestrator | skipping: Conditional result was False
2025-05-31 21:14:15.035198 | orchestrator | skipping: Conditional result was False
2025-05-31 21:14:15.058126 |
2025-05-31 21:14:15.058270 | PLAY RECAP
2025-05-31 21:14:15.058385 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2025-05-31 21:14:15.058464 |
2025-05-31 21:14:15.186118 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
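fetch-output copies the staged logs, artifacts and docs from the node into per-build directories on the executor; the ".d..t......" lines above are rsync's itemized-changes output (a directory whose timestamp changed). A rough sketch of the same collect step, with hypothetical paths; the real role parameterizes these from the build UUID:

```python
import subprocess
from pathlib import Path

build_dir = Path("/var/lib/zuul/builds/<uuid>/work")  # "<uuid>" is a placeholder
source = "zuul-worker@node"  # hypothetical remote host

for subdir in ("logs", "artifacts", "docs"):
    dest = build_dir / subdir
    dest.mkdir(parents=True, exist_ok=True)  # "Ensure local output dirs"
    # -a preserves attributes, -i prints the itemized ".d..t......" records
    subprocess.run(
        ["rsync", "-ai", f"{source}:{subdir}/", f"{dest}/"],
        check=True,
    )
```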
2025-05-31 21:14:15.188531 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-05-31 21:14:15.939618 |
2025-05-31 21:14:15.939795 | PLAY [Base post]
2025-05-31 21:14:15.955329 |
2025-05-31 21:14:15.955487 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2025-05-31 21:14:16.951309 | orchestrator | changed
2025-05-31 21:14:16.962113 |
2025-05-31 21:14:16.962278 | PLAY RECAP
2025-05-31 21:14:16.962391 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2025-05-31 21:14:16.962475 |
2025-05-31 21:14:17.087463 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-05-31 21:14:17.088447 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2025-05-31 21:14:17.895536 |
2025-05-31 21:14:17.895721 | PLAY [Base post-logs]
2025-05-31 21:14:17.906772 |
2025-05-31 21:14:17.906966 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2025-05-31 21:14:18.359834 | localhost | changed
2025-05-31 21:14:18.369718 |
2025-05-31 21:14:18.369908 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2025-05-31 21:14:18.405653 | localhost | ok
2025-05-31 21:14:18.408886 |
2025-05-31 21:14:18.408994 | TASK [Set zuul-log-path fact]
2025-05-31 21:14:18.424229 | localhost | ok
2025-05-31 21:14:18.432209 |
2025-05-31 21:14:18.432323 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-05-31 21:14:18.467481 | localhost | ok
2025-05-31 21:14:18.470533 |
2025-05-31 21:14:18.470639 | TASK [upload-logs : Create log directories]
2025-05-31 21:14:18.990799 | localhost | changed
2025-05-31 21:14:18.995959 |
2025-05-31 21:14:18.996135 | TASK [upload-logs : Ensure logs are readable before uploading]
2025-05-31 21:14:19.488352 | localhost -> localhost | ok: Runtime: 0:00:00.007201
2025-05-31 21:14:19.492348 |
2025-05-31 21:14:19.492456 | TASK [upload-logs : Upload logs to log server]
2025-05-31 21:14:20.048942 | localhost | Output suppressed because no_log was given
2025-05-31 21:14:20.052030 |
2025-05-31 21:14:20.052196 | LOOP [upload-logs : Compress console log and json output]
2025-05-31 21:14:20.120104 | localhost | skipping: Conditional result was False
2025-05-31 21:14:20.125139 | localhost | skipping: Conditional result was False
2025-05-31 21:14:20.133064 |
2025-05-31 21:14:20.133293 | LOOP [upload-logs : Upload compressed console log and json output]
2025-05-31 21:14:20.181994 | localhost | skipping: Conditional result was False
2025-05-31 21:14:20.182709 |
2025-05-31 21:14:20.186031 | localhost | skipping: Conditional result was False
2025-05-31 21:14:20.198382 |
2025-05-31 21:14:20.198575 | LOOP [upload-logs : Upload console log and json output]